Computational Models for Argumentation in MAS

Computational Models for Argumentation in MAS Leila Amgoud IRIT – CNRS France amgoud@irit.fr Utrecht University

Outline Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions

The notion of agent (Wooldridge 2000) An agent is a computer system that is capable of autonomous (i.e. independent) action on behalf of its user or owner (figuring out what needs to be done to satisfy design objectives, rather than constantly being told) Rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit

The notion of agent An agent needs the ability to make internal reasoning: Reasoning about beliefs, desires, … Handling inconsistencies Making decisions Generating, revising, and selecting goals ...

Multi-agent systems (Wooldridge 2000) A multi-agent system is one that consists of a number of agents which interact with one another Generally, agents will be acting on behalf of users with different goals and motivations To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other

Multi-agent systems Agents need to: exchange information and explanations resolve conflicts of opinions resolve conflicts of interests make joint decisions ⇒ they need to engage in dialogues

Dialogue types (Walton & Krabbe 1995): persuasion, negotiation, deliberation, information seeking, inquiry, and eristic dialogues

The role of argumentation Argumentation plays a key role in achieving the goals of the above dialogue types Argument = reason for some conclusion (belief, action, goal, etc.) Argumentation = reasoning about arguments → deciding on conclusions Dialectical argumentation = multi-party argumentation through dialogue

The role of argumentation Argumentation plays a key role for reaching agreements: Additional information can be exchanged The opinion of the agent is explicitly explained (e.g. arguments in favor of opinions or offers, arguments in favor of a rejection or an acceptance) Agents can modify/revise their beliefs / preferences / goals To influence the behavior of an agent (threats, rewards)

A persuasion dialogue P: The newspapers have no right to publish information I. C: Why? P: Because it is about X's private life and X does not agree (P1) C: The information I is not private because X is a minister and all information concerning ministers is public (C1) P: But X is not a minister since he resigned last month (P2) P2 attacks C1, which attacks P1

A negotiation dialogue Buyer: Can't you give me this 806 a bit cheaper? Seller: Sorry, that's the best I can do. Why don't you go for a Polo instead? Buyer: I have a big family and I need a big car (B1) Seller: Modern Polos are becoming very spacious and would easily fit a big family (S1) Buyer: I didn't know that, let's also look at the Polo then.

Why study argumentation in agent technology? For internal reasoning of single agents: Reasoning about beliefs, goals, ... Making decisions Generating, revising, and selecting goals For interaction between multiple agents: Exchanging information and explanations Resolving conflicts of opinions Resolving conflicts of interests Making joint decisions

Outline Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions

Defeasible reasoning Reasoning is generally defeasible Assumptions, exceptions, uncertainty, ... AI formalises such reasoning with non-monotonic logics (default logic, etc.) New premises can invalidate old conclusions Argumentation logics formalise defeasible reasoning as the construction and comparison of arguments

Argumentation process Constructing arguments Defining the interactions between arguments Evaluating the strengths of arguments Defining the status of arguments Then either: drawing conclusions using a consequence relation (inference problem), or comparing decisions using a given principle (decision-making problem)

Main challenges Q1: What are the different types of arguments? How do we construct arguments? Q2: How can an argument interact with another argument? Q3: How do we compute the strength of an argument? Q4: How do we determine the status of arguments? Q5: How do we conclude? How are decisions compared on the basis of their arguments? Q6: What are the properties that an argumentation system should satisfy?

Q1: Building arguments Types of arguments: (Kraus et al. 98, Amgoud & Prade 05) Explanations (involve only beliefs) Tweety flies because it is a bird Threats (involve beliefs + goals) You should do α, otherwise I will do β You should not do α, otherwise I will do β Rewards (involve beliefs + goals) If you do α, I will do β If you don't do α, I will do β …

Q1: Building arguments Forms of arguments: An inference tree grounded in premises A deduction sequence A pair (Premises, Conclusion), leaving unspecified the particular proof that leads from the Premises to the Conclusion

Q1: Building arguments Example 1. (Inference problem) Let S be a propositional knowledge base An argument is a pair A = (H, h) such that: 1. H ⊆ S 2. H is consistent 3. H ⊢ h 4. H is minimal (for set inclusion) satisfying 1, 2 and 3

Q1: Building arguments A: ({p, p → b, b → f}, f) B: ({p, p → ¬f}, ¬f) where S contains p: penguin, b: bird, f: flies
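The definition above can be checked by brute force on this small example. The sketch below is illustrative (the encoding of formulas as Python predicates over truth assignments is my own, not from the slides): an argument for h is a consistent, minimal subset of S that entails h.

```python
from itertools import product, combinations

ATOMS = ["p", "b", "f"]  # p: penguin, b: bird, f: flies

def implies(a, b):            # material implication
    return (not a) or b

# Knowledge base S = {p, p -> b, b -> f, p -> ~f}, each formula as a
# predicate over a truth assignment m.
S = {
    "p":       lambda m: m["p"],
    "p -> b":  lambda m: implies(m["p"], m["b"]),
    "b -> f":  lambda m: implies(m["b"], m["f"]),
    "p -> ~f": lambda m: implies(m["p"], not m["f"]),
}

def models(formulas):
    """All truth assignments satisfying every formula in `formulas`."""
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if all(f(m) for f in formulas):
            out.append(m)
    return out

def arguments_for(goal, kb):
    """All arguments (H, h): H consistent, H entails the goal, H minimal."""
    found = []
    names = list(kb)
    for r in range(1, len(names) + 1):
        for H in combinations(names, r):
            fs = [kb[n] for n in H]
            if not models(fs):                        # H is inconsistent
                continue
            if any(not goal(m) for m in models(fs)):  # H does not entail h
                continue
            if any(set(prev) <= set(H) for prev in found):  # not minimal
                continue
            found.append(H)
    return found

print(arguments_for(lambda m: m["f"], S))       # support of argument A
print(arguments_for(lambda m: not m["f"], S))   # support of argument B
```

Running it recovers exactly the two arguments of the slide: A's support {p, p → b, b → f} for f, and B's support {p, p → ¬f} for ¬f.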

Q1: Building arguments Example 2. (Decision problem) K = a propositional knowledge base G = a base of goals D = a set of decision options An argument in favor of a decision d is a triple A = <S, g, d> s.t. 1. d ∈ D 2. g ∈ G 3. S ⊆ K 4. S ∪ {d} is consistent 5. S ∪ {d} ⊢ g 6. S is minimal (for set inclusion) satisfying the above conditions

Q1: Building arguments r: rain, w: wet, c: cloudy, u: umbrella, l: overloaded K = {c, u → l, ¬u → ¬l, u → ¬w, r ∧ ¬u → w, ¬r → ¬w, c → r} G = {¬w, ¬l} D = {u, ¬u} A = <{u → ¬w}, {¬w}, u> B = <{¬u → ¬l}, {¬l}, ¬u>

Q2: Interactions between arguments Three conflict relations: Rebutting attacks: two arguments with contradictory conclusions Assumption attacks: an argument attacks an assumption of another argument Undercutting attacks: an argument undermines some intermediate step (inference rule) of another argument

Rebutting attacks Tweety flies because it is a bird versus Tweety does not fly because it is a penguin Tweety flies ¬Tweety flies

Assumption attacks Tweety flies because it is a bird, and it is not provable that Tweety is a penguin versus Tweety is a penguin Tweety flies Penguin Tweety Not(Penguin Tweety)

Undercutting attack An argument challenges the connection between the premises and the conclusion Tweety flies because all the birds I've seen fly: [a, b, c / d] I've seen Opus, it is a bird and it does not fly: ¬[a, b, c / d]

Q3: Strengths of arguments Why do we need to compute the strengths of arguments? To compare arguments To refine the status of arguments by removing some attacks To define decision principles

Q3: Strengths of arguments The strength of an argument depends on the quality of the information used to build it Examples: Weakest link principle (Benferhat et al. 95, Amgoud 96) Last link principle (Prakken & Sartor 97) Specificity principle (Simari & Loui 92) ... A preference relation between data induces the strength of an argument, which induces a preference relation between arguments

Q3: Strengths of arguments Example 1. (Weakest link principle) Priority levels: 1: p; 2: p → b, p → ¬f; 3: b → f A: ({p, p → b, b → f}, f) B: ({p, p → ¬f}, ¬f) Strength(A) = 3 Strength(B) = 2 Then B is preferred to (stronger than) A
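A minimal sketch of the weakest-link computation on this example: each formula carries a priority level (1 = most certain), an argument's strength is the level of its least certain formula, and lower values are preferred.

```python
# Priority levels for the formulas of the example (1 = most certain).
LEVEL = {"p": 1, "p -> b": 2, "p -> ~f": 2, "b -> f": 3}

def weakest_link(support):
    """Strength = level of the least certain formula in the support."""
    return max(LEVEL[f] for f in support)

A = ("p", "p -> b", "b -> f")   # concludes f
B = ("p", "p -> ~f")            # concludes ~f

print(weakest_link(A))  # 3
print(weakest_link(B))  # 2
# Lower is better, so B is preferred to (stronger than) A.
```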

Q3: Strengths of arguments Example 2. A = <{u → ¬w}, {¬w}, u> B = <{¬u → ¬l}, {¬l}, ¬u> Strength(A) = (1, 1) Strength(B) = (1, 2) Different preference relations between such arguments are defined (Amgoud & Prade 05)

Q4: Status of arguments Some attacks can be removed Defeat = attack + preference relation between arguments A defeats B iff A attacks B and A is not weaker than B A strictly defeats B iff A attacks B and A is stronger than B
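Deriving defeat and strict defeat from an attack relation and a preference is a one-line filter each. A sketch, where `stronger` is a hypothetical strict preference between arguments:

```python
def defeats(attacks, stronger):
    """Keep attacks by arguments that are not weaker than their target."""
    return {(a, b) for (a, b) in attacks if not stronger(b, a)}

def strictly_defeats(attacks, stronger):
    """Keep attacks by arguments that are strictly stronger than their target."""
    return {(a, b) for (a, b) in attacks if stronger(a, b)}

# A and B attack each other; B is strictly stronger than A.
attacks = {("A", "B"), ("B", "A")}
stronger = lambda x, y: x == "B" and y == "A"

print(defeats(attacks, stronger))           # only B defeats A
print(strictly_defeats(attacks, stronger))  # B strictly defeats A
```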

Q4: Status of arguments Given <Args, Defeat>, what is the status of a given argument A ∈ Args? Three classes of arguments: Arguments with which a dispute can be won (justified) Arguments with which a dispute can be lost (rejected) Arguments that leave the dispute undecided

Q4: Status of arguments Two ways of computing the status of arguments: The declarative form usually requires fixed-point definitions and establishes certain sets of arguments as acceptable (acceptability semantics) The procedural form amounts to defining a procedure for testing whether a given argument is a member of a set of acceptable arguments (proof theory)

Acceptability semantics Semantics = specifies conditions for labelling the argument graph The labelling should: accept undefeated arguments capture the notion of reinstatement (if A defeats B and B defeats C, then A reinstates C)

Acceptability semantics Example of labelling: L: Args → {in, out, und} An argument is in if all its defeaters are out An argument is out if it has a defeater that is in An argument is und otherwise

Acceptability semantics Example 1: A defeats B, B defeats C Only one possible labelling: A is in, B is out, C is in
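The three labelling rules above can be run as a simple fixpoint computation. A sketch: start with every argument und, repeatedly apply the rules until nothing changes (on graphs with odd cycles this leaves arguments und, matching the grounded labelling).

```python
def labelling(args, defeat):
    """Iteratively apply the in/out rules; unresolved arguments stay und."""
    lab = {a: "und" for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if lab[a] != "und":
                continue
            defeaters = [b for (b, c) in defeat if c == a]
            if all(lab[b] == "out" for b in defeaters):
                lab[a], changed = "in", True      # all defeaters are out
            elif any(lab[b] == "in" for b in defeaters):
                lab[a], changed = "out", True     # some defeater is in
    return lab

# Example 1 of the slide: A defeats B, B defeats C.
print(labelling(["A", "B", "C"], {("A", "B"), ("B", "C")}))
```

A has no defeaters, so it is labelled in; that makes B out, which reinstates C as in.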

Acceptability semantics Example 2: A and B defeat each other Two possible labellings: A in with B out, and A out with B in

Acceptability semantics Two approaches: A unique-status approach An argument is justified iff it is in An argument is rejected iff it is out An argument is undecided iff it is und A multiple-status approach An argument is justified iff it is in in every labelling An argument is rejected iff it is out in every labelling An argument is undecided iff it is in in some labellings and out in others

Acceptability semantics Unique status: grounded semantics (Dung 95) E1 = all undefeated arguments E2 = E1 + all arguments reinstated by E1 … The grounded extension is nonempty only if there are undefeated arguments

Acceptability semantics Problem with grounded semantics: floating arguments A and B defeat each other, both defeat C, and C defeats D There are no undefeated arguments, yet we want D to be justified

Acceptability semantics With multiple labellings on the same graph: D is justified and C is rejected
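The floating-arguments claim can be checked by brute force. A sketch: enumerate the labellings of the graph that satisfy the in/out rules and use no und label (the stable labellings), for A ⇄ B, both defeating C, and C defeating D.

```python
from itertools import product

args = ["A", "B", "C", "D"]
defeat = {("A", "B"), ("B", "A"), ("A", "C"), ("B", "C"), ("C", "D")}

def legal(lab):
    """Check the labelling rules: in iff all defeaters out, out iff some defeater in."""
    for a in args:
        defeaters = [b for (b, c) in defeat if c == a]
        if lab[a] == "in" and not all(lab[b] == "out" for b in defeaters):
            return False
        if lab[a] == "out" and not any(lab[b] == "in" for b in defeaters):
            return False
    return True

labellings = [dict(zip(args, vs))
              for vs in product(["in", "out"], repeat=len(args))
              if legal(dict(zip(args, vs)))]
print(labellings)
```

Exactly two labellings survive (A in/B out and A out/B in); in both of them C is out and D is in, so D is justified and C is rejected under the multiple-status approach.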

Proof theories Let <Args, Defeat> be an AS and S1, …, Sn its extensions under a given semantics. Problem: Let a ∈ Args Is a in one extension? Is a in every extension?

Proof theories Let a ∈ Args. Problem: is a in the grounded extension? Example: an attack graph over arguments A0, …, A6 (shown on the slide)

Proof theories (Amgoud & Cayrol 00) A dialogue is a non-empty sequence of moves s.t.: Move_i = (Player_i, Arg_i) (i ≥ 0), where: Player_i = P iff i is even, Player_i = C iff i is odd Player_0 = P and Arg_0 = a If Player_i = Player_j = P and i ≠ j then Arg_i ≠ Arg_j If Player_i = P (i > 1) then Arg_i strictly defeats Arg_{i-1} If Player_i = C then Arg_i defeats Arg_{i-1}

Proof theories (Amgoud & Cayrol 00) A dialogue tree is a finite tree where each branch is a dialogue

Proof theories A player wins a dialogue iff it makes the last move of the dialogue

Proof theories A candidate sub-tree is a sub-tree of the dialogue tree containing all the edges of each even (P) move and exactly one edge of each odd (C) move A solution sub-tree is a candidate sub-tree whose branches are all won by P P wins a dialogue tree iff the dialogue tree has a solution sub-tree Complete construction: a ∈ the grounded extension iff there exists a dialogue tree whose root is a and which is won by P

Proof theories In the example there are two candidate sub-trees, S1 and S2 Each branch of S2 is won by P ⇒ S2 is a solution sub-tree ⇒ A0 is in the grounded extension

Q5: Consequence relations : a knowledge base built from a logical language L, x: a formula of L <Args, Defeat>: an argumentation system S1, …, Sn: the extensions under a given semantics.  |~ x iff  an argument A for x s.t. A  Si,  Si, i = 1, …, n  |~ x iff  Si,  an argument A for x, and A  Si  |~ x iff  Si st  an argument A for x and A  Si, and  Sj st  an argument A for x and A  Si  |~ x iff  Si st  an argument A for x and A  Si

Q5: Making decisions D = a set of decision options Problem = define a preordering on D <Args, Defeat> = an argumentation system Each d ∈ D is associated with a tuple <P1, …, Pn, C1, …, Cm>: the arguments P1, …, Pn PRO d and the arguments C1, …, Cm CON d

Q5: Making decisions Let E ⊆ Args be the set of acceptable arguments ArgP(d) = the arguments in E which are PRO d ArgC(d) = the arguments in E which are CON d

Q5: Making decisions Decision principles: 3 categories of principles (Amgoud & Prade 2004-2006): Unipolar principles = only one kind of argument (PRO or CON) is involved Bipolar principles = both arguments PRO and CON are involved Non-polar principles

Q5: Making decisions Unipolar principles: Let d, d' ∈ D. Counting arguments PRO: d ≻ d' iff |ArgP(d)| > |ArgP(d')| Counting arguments CON: d ≻ d' iff |ArgC(d)| < |ArgC(d')| Promotion focus: d ≻ d' iff ∃ P ∈ ArgP(d) s.t. ∀ P' ∈ ArgP(d'), P is stronger than P'. Prevention focus: d ≻ d' iff ∃ C' ∈ ArgC(d') s.t. ∀ C ∈ ArgC(d), C' is stronger than C.
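The two counting principles can be sketched in a few lines; the PRO/CON sets below are illustrative (loosely based on the umbrella example), not data from the slides.

```python
# d is preferred to d2 when it has more acceptable arguments PRO it,
# or fewer acceptable arguments CON it.
def prefer_by_pro(arg_pro, d, d2):
    return len(arg_pro[d]) > len(arg_pro[d2])

def prefer_by_con(arg_con, d, d2):
    return len(arg_con[d]) < len(arg_con[d2])

# Hypothetical acceptable arguments for the options u ("take umbrella")
# and ~u ("don't take umbrella").
arg_pro = {"u": {"A"}, "~u": set()}
arg_con = {"u": set(), "~u": {"A"}}

print(prefer_by_pro(arg_pro, "u", "~u"))  # True: u has more PRO arguments
print(prefer_by_con(arg_con, "u", "~u"))  # True: u has fewer CON arguments
```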

Q5: Making decisions Bipolar principles: Let d, d' ∈ D. d ≻ d' iff ∃ P ∈ ArgP(d) s.t. ∀ P' ∈ ArgP(d'), P is stronger than P', and ∃ C' ∈ ArgC(d') s.t. ∀ C ∈ ArgC(d), C' is stronger than C

Q5: Making decisions Non-polar principles: Let d, d' ∈ D The tuples <P1, …, Pn, C1, …, Cm> for d and <P'1, …, P'k, C'1, …, C'l> for d' are each aggregated into a single value; d ≻ d' iff the aggregated value for d is stronger than the one for d'

Q5: Rationality postulates Idea: what are the properties/rationality postulates that any AS should satisfy? (Amgoud & Caminada 05) Consistency = the AS should ensure safe conclusions the set {x | Σ |~ x} should be consistent the set of conclusions of each extension should be consistent Closedness = the AS should not forget safe conclusions the set {x | Σ |~ x} should be closed the set of conclusions of each extension should be closed

Outline Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions

Dialogue systems Agents exchange moves (Claim p, Argue(S, p), …), which are recorded in commitment stores CS1, …, CSn An argumentation system evaluates the outcome of the dialogue

Components of a dialogue system Communication language + domain language Protocol = the set of rules for generating coherent dialogues Agent strategies = the set of tactics used by the agents to choose a move to play Outcome = one of a set of possible deals, or conflict Protocol + strategies determine the outcome

Communication language A syntax = a set of locutions, utterances or speech acts (Propose, Argue, Accept, Reject, etc.) A semantics = a unique meaning for each utterance: mentalistic approaches, social approaches, or protocol-based approaches

Dialogue Protocol The protocol is public and independent of the mental states of the agents Main parameters: the set of allowed moves (e.g. Claim, Argue, ...) the possible replies to each move the number of moves per turn the turn-taking whether backtracking is allowed the computation of the outcome These parameters identify more or less rich dialogues
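One of these parameters, the possible replies to each move, can be sketched as a lookup table. The locution names and reply structure below are hypothetical, chosen only to illustrate the idea of a public, mental-state-independent protocol:

```python
# Allowed replies to each move type; None marks the opening of the dialogue.
REPLIES = {
    None:    {"Claim"},
    "Claim": {"Why", "Concede"},
    "Why":   {"Argue", "Retract"},
    "Argue": {"Why", "Argue", "Concede"},
}

def legal_move(previous, move):
    """A move is legal iff it is an allowed reply to the previous move."""
    return move in REPLIES.get(previous, set())

print(legal_move(None, "Claim"))     # True: dialogues open with a claim
print(legal_move("Claim", "Argue"))  # False: an argument must answer a Why
```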

Dialogue Protocol Computing the outcome: two approaches The protocol is equipped with an argumentation system that evaluates the content of CS1 ∪ … ∪ CSn: <Args(CS1 ∪ … ∪ CSn), Defeat> The rules of the proof theory are encoded in the protocol

Dialogue Protocol For persuasion dialogues, the two approaches return the same result if: the other parameters are fixed in the same way the acceptability semantics used is the same

Dialogue Strategies A BDI agent receives from the protocol the set of allowed replies, given the commitment stores CS1, …, CSn An argumentation-based decision model then selects the next move to play Move = locution + content (an argument, an offer, …)

Dialogue Strategies Different arguments are exchanged: about beliefs about goals E.g. I have a big family and I need a big car referring to plans (instrumental arguments) E.g. Modern Polos are becoming very spacious and would easily fit a big family Which argument to present, and when? This requires a formal model of practical reasoning

Dialogue Strategies Research on dialogue strategies is at an early stage It is thus not yet possible to characterize the outcome of a dialogue, e.g. when the outcome is optimal

Open issues How are goals generated? How / when are they revised? Do we always privilege new goals? The answer is no: Threats → the goal can be adopted Rewards → the goal can be ignored AGM postulates for revising goals?

Thank you