Two Theories of Implicatures (Parikh, Jäger) Day 3 – August 9th


Overview
Prashant Parikh: a disambiguation-based approach
Gerhard Jäger: a dynamic approach

A disambiguation based approach Prashant Parikh (2001) The Use of Language

Repetition: The Standard Example
a) Every ten minutes a man gets mugged in New York. (A)
b) Every ten minutes some man or other gets mugged in New York. (F)
c) Every ten minutes a particular man gets mugged in New York. (F’)
How should the quantifiers in a) be read?

Abbreviations
μ: meaning of ‘Every ten minutes some man or other gets mugged in New York.’
μ′: meaning of ‘Every ten minutes a particular man gets mugged in New York.’
θ1: state where the speaker knows that μ.
θ2: state where the speaker knows that μ′.

A Representation

General Characteristics
There is a form A that is ambiguous between the meanings μ and μ′.
There are more complex forms F, F′ which can only be interpreted as meaning μ and μ′, respectively.
The speaker, but not the hearer, knows whether μ (type θ1) or μ′ (type θ2) is true.

It is assumed that the interlocutors agree on a Pareto-optimal Nash equilibrium (S,H). The actual interpretation of a form is the meaning assigned to it by the hearer’s strategy H.
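As a sketch of how the Pareto-Nash selection works, the ambiguity game can be enumerated directly. The prior, the success value, and the form costs below are illustrative assumptions, not Parikh's actual numbers; μ and μ′ are encoded as 'mu' and 'mu_p'.

```python
from itertools import product

# Illustrative assumptions: prior over types, value of successful
# communication, and extra cost of the unambiguous forms F and F'.
P = {'th1': 0.8, 'th2': 0.2}            # hearer's prior over speaker types
TRUE = {'th1': 'mu', 'th2': 'mu_p'}     # meaning each type wants to convey
SEM = {'A': {'mu', 'mu_p'},             # ambiguous form
       'F': {'mu'}, 'Fp': {'mu_p'}}     # unambiguous but more complex forms
COST = {'A': 0, 'F': 2, 'Fp': 2}
SUCCESS = 10

def payoff(S, H):
    """Shared expected payoff of a strategy pair (pure coordination)."""
    return sum(p * ((SUCCESS if H[S[th]] == TRUE[th] else 0) - COST[S[th]])
               for th, p in P.items())

# Truthful speaker strategies: each type only uses forms true of its meaning
speaker_strats = []
for fs in product(SEM, repeat=2):
    S = dict(zip(P, fs))
    if all(TRUE[th] in SEM[S[th]] for th in P):
        speaker_strats.append(S)

# Hearer strategies: only the ambiguous form A leaves a real choice
hearer_strats = [{'A': m, 'F': 'mu', 'Fp': 'mu_p'} for m in ('mu', 'mu_p')]

def is_nash(S, H):
    """Neither player can gain by deviating unilaterally."""
    v = payoff(S, H)
    return (all(payoff(S2, H) <= v for S2 in speaker_strats) and
            all(payoff(S, H2) <= v for H2 in hearer_strats))

equilibria = [(S, H) for S in speaker_strats for H in hearer_strats
              if is_nash(S, H)]
best = max(equilibria, key=lambda SH: payoff(*SH))  # Pareto-optimal equilibrium
```

With these numbers there are two equilibria; in the Pareto-optimal one the hearer reads the ambiguous A as the more probable meaning μ, which is the selection the slide describes.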

Implicatures

Classification of Implicatures
Parikh (2001) distinguishes between:
Type I implicatures: there exists a decision problem that is directly affected.
Type II implicatures: the implicature adds to the information of the addressee without directly influencing any immediate choice of action.

Examples of Type I implicatures
1. A stands in front of his obviously immobilised car.
A: I am out of petrol.
B: There is a garage around the corner.
+> The garage is open and sells petrol.
2. Assume that speaker S and hearer H have to attend a talk just after 4 p.m. S utters the sentence:
S: It’s 4 p.m. (A)
+> S and H should go for the talk. (ι)

A model for a type I implicature

The Example
Assume that speaker S and hearer H have to attend a talk just after 4 p.m. S utters the sentence:
S: It’s 4 p.m. (A)
+> S and H should go for the talk. (ι)

The possible worlds
The set of possible worlds Ω has two elements:
s1: it is 4 p.m. and the speaker wants to communicate the implicature ι that it is time to go for the talk.
s2: it is 4 p.m. and the speaker wants to communicate only the literal content μ.

The Speaker’s types
Assumption: the speaker knows the actual world.
Types:
θ1 = {s1}: the speaker wants to communicate the implicature ι.
θ2 = {s2}: the speaker wants to communicate the literal meaning μ.

Hearer’s expectations about speaker’s types
Parikh’s model assumes that it is much more probable that the speaker wants to communicate the implicature ι. Example values: p(θ1) = 0.7 and p(θ2) = 0.3.

The speaker’s action set
The speaker chooses between the following forms:
1. A: It’s 4 p.m. ([A] = μ)
2. B: It’s 4 p.m. Let’s go for the talk. ([B] = μ ∧ ι)
3. Silence.

The hearer’s action set
The hearer interprets utterances by meanings. Parikh’s model assumes that an utterance can be interpreted as any meaning that is stronger than (i.e. entails) its literal meaning μ.
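For the two-world example above, the candidate interpretations of A can be listed explicitly. Reading "stronger" as "entails", they are the nonempty subsets of the literal meaning; the world names s1, s2 follow the slides, the variable names are mine.

```python
from itertools import combinations

literal_A = frozenset({'s1', 's2'})   # [A], the literal meaning of "It's 4 p.m."

# Interpretations entailing [A]: all nonempty subsets of the literal meaning
candidates = [c for r in range(1, len(literal_A) + 1)
              for c in combinations(sorted(literal_A), r)]
print(candidates)   # [('s1',), ('s2',), ('s1', 's2')]
```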

The Game Tree

The Utility Functions
Parikh decomposes the utility functions into four additive parts:
1. A utility measure that depends on the complexity of the form and processing effort.
2. A utility measure that depends on the correctness of interpretation.
3. A utility measure that depends on the value of information.
4. A utility measure that depends on the intrinsic value of the implicated information.

Utility Value of Information
Derived from a decision problem. The hearer has to decide between going to the talk and staying.

  state            probability   going   staying
  time to go       0.2            10      -10
  not time to go   0.8            -2       10

Utility Value of Information
Before learning ‘It’s 4 p.m.’:
EU(going) = 0.2×10 + 0.8×(−2) = 0.4
EU(staying) = 0.2×(−10) + 0.8×10 = 6
After learning ‘It’s 4 p.m.’ (A), hence that it is time to go:
EU(going|A) = 1×10 = 10
EU(staying|A) = 1×(−10) = −10
Utility value of learning ‘It’s 4 p.m.’ (A), i.e. the value of the best action after learning A minus the value of the best action before:
UV(A) = EU(going|A) − EU(staying) = 10 − 6 = 4
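The calculation above can be checked directly; the state and action names are mine, the numbers come from the decision table.

```python
# Decision problem: probabilities of states and payoffs per action
p = {'time_to_go': 0.2, 'not_time_to_go': 0.8}
payoff = {'go':   {'time_to_go': 10,  'not_time_to_go': -2},
          'stay': {'time_to_go': -10, 'not_time_to_go': 10}}

def EU(action, prob):
    """Expected utility of an action under a distribution over states."""
    return sum(prob[s] * payoff[action][s] for s in prob)

eu_go, eu_stay = EU('go', p), EU('stay', p)   # ≈ 0.4 and 6.0, as on the slide

# After learning A ("It's 4 p.m.", hence time to go), all mass is on one state
p_A = {'time_to_go': 1.0, 'not_time_to_go': 0.0}
UV_A = EU('go', p_A) - eu_stay   # best action after minus best action before ≈ 4
```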

Other Utilities
Intrinsic value of the implicature: 5.
Cost of misinterpretation: −2.
In addition, Parikh assumes that in case of miscommunication the utility value of the information is lost (*).
Various costs due to complexity and processing effort, higher for the speaker than for the hearer.

The Game Tree

Some Variations of the Payoffs (4+5)
a) without (*)
b) minus the utility value
c) minus the intrinsic value of the implicature
d) minus both

Result
In all variations it turns out that the strategy pair (S,H) with
S(θ1) = ‘It’s 4 p.m.’, S(θ2) = silence, and
H(It’s 4 p.m.) = [It’s 4 p.m.] ∧ [Let’s go to the talk]
is Pareto optimal.

A Dynamic Approach Gerhard Jäger (2006) Game dynamics connects semantics and pragmatics

General
Jäger (2006) formulates a theory of implicatures in the framework of Best Response Dynamics (Hofbauer & Sigmund, 1998), a variant of evolutionary game theory. We will reformulate his theory using Cournot dynamics, a non-evolutionary and technically much simpler learning model.

Overview An Example: Scalar Implicatures The Model Other Implicatures

An Example Scalar Implicatures

The Example We consider the standard example:  Some of the boys came to the party.  +> Not all of the boys came to the party.

Possible Worlds

Possible Forms and their Meanings

Complexities
F1, F2, and F3 are about equally complex. F4 is much more complex than the other forms. It is an essential assumption of the model that F4 is so complex that the speaker would rather be vague than use F4.

The first Stage
Hearer’s strategy determined by semantics. Speaker is truthful; otherwise his strategy is arbitrary.

The second Stage Hearer’s strategy unchanged. Speaker chooses best strategy given hearer’s strategy.

The third Stage Speaker’s strategy unchanged. Hearer chooses best strategy given speaker’s strategy.

Result
The third stage is stable: neither the speaker nor the hearer can improve their strategy. The form F1, ‘Some of the boys came to the party’, is now interpreted as meaning that some but not all of them came. This explains the implicature.

The Model

The Signalling Game
Ω = {w1, w2, w3}: the set of possible worlds.
Θ = {θ1, θ2, θ3} = {{w1}, {w2}, {w3}}: the set of speaker’s types. (The speaker knows the true state of the world.)
p(θi) = 1/3: the hearer’s expectation about the types.
A1 = {F1, F2, F3, F4}: the speaker’s action set.
A2 = ℘(Ω): the hearer’s action set.
(The speaker chooses a form, the hearer an interpretation.)

The payoff function decomposes into two additive parts:
c(·): measures the complexity of forms: c(F1) = c(F2) = c(F3) = 1; c(F4) = 3.
inf(θ,M): measures the informativity of the information M ⊆ Ω relative to the speaker’s type θ = {w}.

The game is a game of pure coordination, i.e. the speaker’s and the hearer’s utilities coincide: both equal the informativity of the interpretation minus the complexity of the form, u(θ, F, M) = inf(θ, M) − c(F).

Additional Constraints
It is assumed that the speaker cannot mislead the hearer; i.e., if the speaker knows that the hearer interprets F as M, then he can only use F if he knows that M is true, i.e. if θ ⊆ M.

The Dynamics
The dynamic model consists of a sequence of synchronic stages. Each synchronic stage is a strategy pair (Si,Hi), i = 1,…,n.
In the first stage (i = 1):
the hearer interprets forms by their (literal) semantic meaning;
the speaker’s strategy is arbitrary.

The Second Stage (S2,H2)
The hearer’s strategy H2 is identical to H1. The speaker’s strategy S2 is a best response to H2:
EU(S2,H2) = maxS EU(S,H2), with EU(S,H) = Σθ∈Θ u(θ, S(θ), H(S(θ)))

The Third Stage (S3,H3)
The speaker’s strategy S3 is identical to S2. The hearer’s strategy H3 is a best response to S3:
EU(S3,H3) = maxH EU(S3,H)

This process is iterated until choosing best responses no longer improves the strategies. The resulting strategy pair (S,H) must be a weak Nash equilibrium.
Remark: Evolutionary Best Response would stop only when a strong Nash equilibrium is reached.
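The Cournot iteration can be sketched end-to-end for the scalar example (F1 ‘some’, F2 ‘all’, F3 ‘none’, F4 ‘some but not all’). The informativity measure −log2|M| and the world labels are my assumptions, since the slides do not reproduce the formula; everything else follows the model as described.

```python
import math
from itertools import combinations

# Worlds (labels assumed): w1 = some but not all came, w2 = all came, w3 = none came
OMEGA = frozenset({'w1', 'w2', 'w3'})
TYPES = [frozenset({w}) for w in sorted(OMEGA)]

SEM = {'F1': frozenset({'w1', 'w2'}),   # "Some of the boys came to the party."
       'F2': frozenset({'w2'}),         # "All of the boys came to the party."
       'F3': frozenset({'w3'}),         # "None of the boys came to the party."
       'F4': frozenset({'w1'})}         # "Some but not all ..." (complex)
COST = {'F1': 1, 'F2': 1, 'F3': 1, 'F4': 3}

def inf(theta, M):
    """Assumed informativity: -log2 |M| if the true world is in M, else -inf."""
    (w,) = theta
    return -math.log2(len(M)) if w in M else float('-inf')

def u(theta, F, M):                      # pure coordination: one shared payoff
    return inf(theta, M) - COST[F]

def speaker_br(H):
    """Best form per type, among the forms the speaker may use truthfully."""
    return {th: max((F for F in SEM if th <= H[F]),
                    key=lambda F: u(th, F, H[F]))
            for th in TYPES}

def hearer_br(S, H_old):
    """Best interpretation per form, given the types that actually send it."""
    nonempty = [frozenset(c) for r in range(1, len(OMEGA) + 1)
                for c in combinations(sorted(OMEGA), r)]
    H = dict(H_old)                      # unused forms keep their old reading
    for F in SEM:
        senders = [th for th in TYPES if S[th] == F]
        if senders:
            H[F] = max(nonempty, key=lambda M: sum(u(th, F, M) for th in senders))
    return H

# Stage 1: literal hearer; then alternate best responses until nothing changes
H = dict(SEM)
S = speaker_br(H)
while True:
    H2 = hearer_br(S, H)
    S2 = speaker_br(H2)
    if (S2, H2) == (S, H):
        break
    S, H = S2, H2

print(sorted(H['F1']))   # ['w1']: "some" is now read as "some but not all"
```

The fixed point reproduces the slides' result: the speaker uses F1 only in the some-but-not-all world, so the hearer's best response narrows F1's interpretation to exactly that world, and the pair is a weak Nash equilibrium.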

Implicatures
An implicature F +> ψ is explained if, in the final stable state, H(F) = [ψ].

Other Implicatures

I-Implicatures
What is expressed simply is stereotypically exemplified.
1. John’s book is good. +> The book that John is reading or that he has written is good.
2. A secretary called me in. +> A female secretary called me in.
3. There is a road to the right. +> There is a hard-surfaced road to the right.

An Example
There is a road to the right.
w1: hard-surfaced road. w2: soft-surfaced road.
F1: road; F2: hard-surfaced road; F3: soft-surfaced road.

The first Stage
Hearer’s strategy determined by semantics. Speaker is truthful; otherwise his strategy is arbitrary.

The second Stage Hearer’s strategy unchanged. Speaker chooses best strategy given hearer’s strategy.

The third Stage Speaker’s strategy unchanged. Hearer chooses best strategy given speaker’s strategy. Any interpretation of F 2 below yields a best response.

M-Implicatures
What is said in an abnormal way isn’t normal.
1. Bill stopped the car. +> He used the foot brake.
2. Bill caused the car to stop. +> He did it in an unexpected way.
3. Sue smiled. +> Sue smiled in a regular way.
4. Sue lifted the corners of her lips. +> Sue produced an artificial smile.

An Example
1. Sue smiled. +> Sue smiled in a regular way.
2. Sue lifted the corners of her lips. +> Sue produced an artificial smile.
w1: Sue smiles genuinely. w2: Sue produces an artificial smile.
F1: to smile. F2: to lift the corners of the lips.

The first Stage
Hearer’s strategy determined by semantics. Speaker is truthful; otherwise his strategy is arbitrary.

The second Stage Hearer’s strategy unchanged. Speaker chooses best strategy given hearer’s strategy.

The third Stage Speaker’s strategy unchanged. Hearer chooses best strategy given speaker’s strategy. Any interpretation of F 2 below yields a best response.

The third Stage continued There are three possibilities:

A fourth Stage Speaker’s optimisation can then lead to:

A fifth Stage
Optimisation can then lead to either the Horn strategy or the Anti-Horn strategy.
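That both a Horn and an anti-Horn outcome can be stable is easy to verify directly. The numbers below are illustrative assumptions (informativity −log2|M| minus form cost, prior favouring the genuine smile): both strategy pairs are weak Nash equilibria of the smile game, so best-response dynamics halts at whichever one it reaches, and only the Horn pair is payoff-optimal.

```python
import math

# Smile game with illustrative numbers: w1 = genuine smile (more probable),
# w2 = artificial smile; F1 = "smile" (cheap), F2 = "lift the corners..." (costly)
P = {'w1': 0.7, 'w2': 0.3}
COST = {'F1': 1, 'F2': 2}

def u(w, F, M):
    """Shared payoff: informativity (-log2 |M|) minus form cost;
    miscommunication (true world outside M) counts as -inf."""
    return -math.log2(len(M)) - COST[F] if w in M else float('-inf')

def eu(S, H):
    """Expected shared payoff of a speaker/hearer strategy pair."""
    return sum(P[w] * u(w, S[w], H[S[w]]) for w in P)

horn      = ({'w1': 'F1', 'w2': 'F2'}, {'F1': {'w1'}, 'F2': {'w2'}})
anti_horn = ({'w1': 'F2', 'w2': 'F1'}, {'F1': {'w2'}, 'F2': {'w1'}})

def is_equilibrium(S, H):
    """No per-world speaker deviation or per-form hearer deviation pays.
    (Payoffs are additive per world/form, so these deviations suffice.)"""
    v = eu(S, H)
    for w in P:                            # speaker deviations
        for F in COST:
            if eu({**S, w: F}, H) > v:
                return False
    for F in COST:                         # hearer deviations
        for M in ({'w1'}, {'w2'}, {'w1', 'w2'}):
            if eu(S, {**H, F: M}) > v:
                return False
    return True

# Both conventions are stable, but Horn yields the higher shared payoff
```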