An argument-based framework to model an agent's beliefs in a dynamic environment. Marcela Capobianco, Carlos I. Chesñevar, Guillermo R. Simari. Dept. of Computer Science and Engineering, Universidad Nacional del Sur, Argentina.

An argument-based framework to model an agent's beliefs in a dynamic environment
Marcela Capobianco, Carlos I. Chesñevar, Guillermo R. Simari
Dept. of Computer Science and Engineering, Universidad Nacional del Sur, Argentina

ArgMAS, New York

Outline
- Motivation
- The Argumentation Framework
- Potential Arguments
- Conclusions

Introduction
- In this presentation we show how a logic programming approach to argumentation can be suitable for applications in MAS.
- We present ODeLP, an argument-based formalism for knowledge representation and reasoning in dynamic environments.
- ODeLP uses defeasible argumentation to decide between conflicting goals.
- We begin by presenting the general framework of DeLP, of which ODeLP is a restriction.

Defeasible Logic Programming: DeLP
A defeasible logic program (dlp) is a set of facts, strict rules and defeasible rules, denoted P = (Π, Δ).

Facts (in Π):
  chicken(tina).    penguin(opus).    scared(tina).

Strict Rules (in Π):
  bird(X) ← chicken(X).
  bird(X) ← penguin(X).
  ¬flies(X) ← penguin(X).

Defeasible Rules (in Δ):
  flies(X) –< bird(X).
  ¬flies(X) –< chicken(X).
  flies(X) –< chicken(X), scared(X).
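The derivation machinery behind this example can be sketched in a few lines of Python. This is our own toy illustration, not the authors' implementation: the program is grounded over its two constants, and strict and defeasible rules are chained alike, as they are in a defeasible derivation.

```python
# Toy grounding of the example dlp; "not_flies" stands for ~flies.
facts = {("chicken", "tina"), ("penguin", "opus"), ("scared", "tina")}

# (head, body) pairs over a single variable X.
rules = [
    (("bird", "X"), [("chicken", "X")]),                     # strict
    (("bird", "X"), [("penguin", "X")]),                     # strict
    (("not_flies", "X"), [("penguin", "X")]),                # strict
    (("flies", "X"), [("bird", "X")]),                       # defeasible
    (("not_flies", "X"), [("chicken", "X")]),                # defeasible
    (("flies", "X"), [("chicken", "X"), ("scared", "X")]),   # defeasible
]

def derive(facts, rules, constants=("tina", "opus")):
    """All ground literals with a (defeasible) derivation."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (hpred, _), body in rules:
            for c in constants:
                if all((bpred, c) in known for bpred, _ in body) \
                        and (hpred, c) not in known:
                    known.add((hpred, c))
                    changed = True
    return known

derivable = derive(facts, rules)
```

Note that both flies(tina) and ¬flies(tina) come out derivable: deciding between them is the job of the dialectical analysis on the following slides, not of derivation.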

Argument
Def: Let L be a literal and P = (Π, Δ) a program. ⟨A, L⟩ is an argument for L, where A is a set of defeasible rules in Δ, such that:
1) there exists a defeasible derivation of L from Π ∪ A;
2) the set Π ∪ A is non-contradictory; and
3) A is minimal, that is, there is no proper subset A′ of A such that A′ satisfies 1) and 2).
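For ground rule sets the three conditions can be checked mechanically. The sketch below is our own brute-force illustration; the `derives` closure is a toy forward chainer standing in for DeLP's defeasible derivation, and rule names R1–R3 are ours.

```python
from itertools import combinations

# Toy strict part (observations only, as in ODeLP later on):
PI = {"poor_perf(john)", "sick(john)"}

# Ground defeasible rules as (head, body) pairs; "~" is strong negation.
R1 = ("responsible(john)", ("poor_perf(john)", "sick(john)"))
R2 = ("~suspend(john)", ("responsible(john)",))
R3 = ("responsible(john)", ("good_perf(john)",))   # inapplicable here

def derives(rule_set):
    """Toy forward chainer from PI plus the given defeasible rules."""
    known = set(PI)
    changed = True
    while changed:
        changed = False
        for head, body in rule_set:
            if all(b in known for b in body) and head not in known:
                known.add(head)
                changed = True
    return known

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def non_contradictory(lits):
    return not any(complement(l) in lits for l in lits)

def is_argument(A, L, derives):
    if L not in derives(A):                      # 1) defeasible derivation
        return False
    if not non_contradictory(derives(A)):        # 2) consistency
        return False
    for k in range(len(A)):                      # 3) minimality
        for sub in combinations(A, k):
            S = frozenset(sub)
            if L in derives(S) and non_contradictory(derives(S)):
                return False
    return True
```

For example, {R1, R2} is an argument for ~suspend(john), while {R1, R2, R3} fails minimality because the subset {R1, R2} already derives it.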

An example
poor_perf(john).     sick(john).
poor_perf(peter).    unruly(peter).

suspend(X) –< ¬responsible(X).
suspend(X) –< unruly(X).
¬suspend(X) –< responsible(X).
¬responsible(X) –< poor_perf(X).
responsible(X) –< good_perf(X).
responsible(X) –< poor_perf(X), sick(X).

?- suspend(john).

An argument for ¬suspend(john) built from the program:

poor_perf(john).     sick(john).
good_perf(peter).    unruly(peter).

suspend(X) –< ¬responsible(X).
suspend(X) –< unruly(X).
¬suspend(X) –< responsible(X).
¬responsible(X) –< poor_perf(X).
responsible(X) –< good_perf(X).
responsible(X) –< poor_perf(X), sick(X).

A = { ¬suspend(john) –< responsible(john).,
      responsible(john) –< poor_perf(john), sick(john). }

⟨A, ¬suspend(john)⟩

[Figure: the argument as a tree, with root ¬suspend(john) supported by responsible(john), which rests on the facts poor_perf(john) and sick(john).]

⟨B, Q⟩ is a subargument of ⟨A, L⟩ if B is an argument for Q and B ⊆ A.

B = { responsible(john) –< poor_perf(john), sick(john). }
A = { ¬suspend(john) –< responsible(john).,
      responsible(john) –< poor_perf(john), sick(john). }

[Figure: the subargument for responsible(john) inside the tree for ¬suspend(john).]

Counter-arguments

⟨B, suspend(john)⟩ is a counter-argument for ⟨A, ¬suspend(john)⟩, since Π ∪ {suspend(john), ¬suspend(john)} is contradictory:

B = { suspend(john) –< ¬responsible(john).,
      ¬responsible(john) –< poor_perf(john). }

⟨B, suspend(john)⟩ also attacks the subargument for responsible(john), since Π ∪ {responsible(john), ¬responsible(john)} is contradictory.

Program:
poor_perf(john).     sick(john).
good_perf(peter).    unruly(peter).

suspend(X) –< ¬responsible(X).
suspend(X) –< unruly(X).
¬suspend(X) –< responsible(X).
¬responsible(X) –< poor_perf(X).
responsible(X) –< good_perf(X).
responsible(X) –< poor_perf(X), sick(X).

Proper Defeater
An argument ⟨B, P⟩ is a proper defeater for ⟨A, L⟩ if ⟨B, P⟩ is a counter-argument for ⟨A, L⟩ that attacks a subargument ⟨C, Q⟩ of ⟨A, L⟩, and ⟨B, P⟩ is better than ⟨C, Q⟩ (by some comparison criterion).

[Figure: the counter-argument for suspend(john) attacking the subargument for responsible(john).]

Blocking Defeater
An argument ⟨B, P⟩ is a blocking defeater for ⟨A, L⟩ if ⟨B, P⟩ is a counter-argument for ⟨A, L⟩ that attacks a subargument ⟨C, Q⟩ of ⟨A, L⟩, and ⟨B, P⟩ is not comparable to ⟨C, Q⟩ (by the comparison criterion).

[Figure: the argument for suspend(john) from unruly(john), and the argument for ¬suspend(peter) from responsible(peter) and good_perf(peter).]
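The two defeat notions on these slides differ only in the verdict of the comparison criterion. A minimal sketch of that case analysis (our own naming, with the criterion's outcome passed in as two booleans):

```python
# Classify a defeat, given the verdict of some comparison criterion
# between the attacking argument and the attacked subargument.
def defeat_type(attacker_better, attacked_better):
    if attacker_better:
        return "proper"       # attacker strictly preferred
    if not attacked_better:
        return "blocking"     # incomparable (or equally preferred)
    return None               # attacked subargument preferred: no defeat
```

This makes explicit that a counter-argument which is strictly worse than the subargument it attacks defeats nothing at all.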

00 11 22 33 22 33 44 33 44 55 11 22 Dialectical Tree Given a program  = ( ,  ), a literal L will be warranted if there is an argument  , L  built from , and that argument has a dialectical tree whose root node is marked U. That is, argument  , L  is an argument for which all the possible defeaters have been defeated. We will say that  is a warrant for L.   , L 

 *  , L  Marking of a Dialectical Tree  U U D U U U U U D D D D

Answers in DeLP
Given a program P = (Π, Δ) and a query for L, the possible answers are:
- YES, if L is warranted.
- NO, if ¬L is warranted.
- UNDECIDED, if neither L nor ¬L is warranted.
- UNKNOWN, if L is not in the language of the program.
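The four answers follow directly once a warrant test is available. A small sketch (our own wrapper, with the warrant check and the program's language passed in):

```python
def complement(lit):
    """Strong negation, written "~" here."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def answer(L, warranted, language):
    """warranted: predicate on literals; language: literals of the program."""
    if L not in language and complement(L) not in language:
        return "UNKNOWN"
    if warranted(L):
        return "YES"
    if warranted(complement(L)):
        return "NO"
    return "UNDECIDED"
```

For instance, if ~suspend(john) is warranted, a query for suspend(john) is answered NO, while a literal outside the program's language is answered UNKNOWN.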

Observation based DeLP
- In ODeLP we restrict the program that represents the agent's knowledge base to a set Ψ of facts (observations) and a set Δ of defeasible rules.
- We denote the knowledge base P = (Ψ, Δ).
- Restricting the non-defeasible part of P to facts, eliminating strict rules, has no effect on the capabilities of the system but makes belief revision prompted by new observations easier.

Beliefs and Perception
- The set of the agent's beliefs is formed by the warranted literals, i.e., those supported by an undefeated argument.
- As the agent receives new perceptions, its beliefs may change.
- Our view of perception is simple and relies on the assumption that observations are correct.
- If new perceptions conflict with old ones, the new perceptions are always preferred.

Beliefs and Perception
- If new perceptions conflict with old ones, the new perceptions are always preferred.
- If {O1, …, On} is the set of new perceptions, the set of facts Ψ is revised by first removing every literal whose complement is among the new perceptions, and then adding the new perceptions:

  Ψ′ = (Ψ − {¬O1, …, ¬On}) ∪ {O1, …, On}
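The revision step above is set manipulation and fits in two lines. A minimal sketch of our own, with literals as strings and "~" as strong negation:

```python
# Drop any old observation contradicted by a new perception, then add
# the new perceptions: new perceptions are always preferred.
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def revise(psi, observations):
    kept = {l for l in psi if complement(l) not in observations}
    return kept | set(observations)
```

For example, revising {poor_perf(john), sick(john)} with the new perception ~sick(john) drops sick(john) and adds ~sick(john), leaving poor_perf(john) untouched.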

Change in Beliefs
- New observations lead to changes in what the agent should believe.
- Because computing the new warrants is computationally hard, we have developed a way to integrate precompiled knowledge into ODeLP to address real-time constraints.
- Our goal is to avoid recomputing arguments.
- A condition is that the precompiled knowledge must be independent of the observations.

Dialectical Database
- The dialectical database of a defeasible logic program is a graph from which every dialectical tree can be obtained.
- Potential arguments, to be defined next, use schematic rules and are the nodes in this structure.
- The arcs in the graph represent the defeat relation among them.
- We have developed algorithms for the construction and use of dialectical databases.

Potential Arguments
Def: Let Δ be a set of defeasible rules. A subset A of Δ is a potential argument for a literal Q, noted ⟨⟨A, Q⟩⟩, if there is a non-contradictory set of literals Φ and an instance A′ of the rules in A such that ⟨A′, Q⟩ is an argument with respect to the program (Φ, Δ).

An example
poor_perf(john).     sick(john).
poor_perf(peter).    unruly(peter).

suspend(X) –< ¬responsible(X).
suspend(X) –< unruly(X).
¬suspend(X) –< responsible(X).
¬responsible(X) –< poor_perf(X).
responsible(X) –< good_perf(X).
responsible(X) –< poor_perf(X), sick(X).

Some Potential Arguments
B1 = { suspend(X) –< ¬responsible(X). }
B2 = { suspend(X) –< ¬responsible(X)., ¬responsible(X) –< poor_perf(X). }
B3 = { ¬suspend(X) –< responsible(X). }
B4 = { ¬suspend(X) –< responsible(X)., responsible(X) –< good_perf(X). }
B5 = { ¬suspend(X) –< responsible(X)., responsible(X) –< poor_perf(X), sick(X). }
C1 = { responsible(X) –< good_perf(X). }
C2 = { ¬responsible(X) –< poor_perf(X). }
C3 = { ¬responsible(X) –< poor_perf(X), sick(X). }

Graph for the Dialectical Database
[Figure: the graph over the potential arguments B1–B5 and C1–C3.]
The defeat relation among potential arguments (proper and blocking) is also recorded.
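Part of this graph can be read off mechanically. The sketch below (our own simplification) computes only the top-level conflict pairs, i.e. potential arguments with complementary conclusions; the real dialectical database additionally records attacks on subarguments and applies the comparison criterion to classify defeats.

```python
# Conclusions of the potential arguments from the example program.
conclusion = {
    "B1": "suspend(X)",      "B2": "suspend(X)",
    "B3": "~suspend(X)",     "B4": "~suspend(X)",   "B5": "~suspend(X)",
    "C1": "responsible(X)",  "C2": "~responsible(X)", "C3": "~responsible(X)",
}

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

# Unordered pairs of potential arguments with complementary conclusions.
conflicts = sorted(
    (a, b) for a in conclusion for b in conclusion
    if a < b and conclusion[a] == complement(conclusion[b])
)
```

Each Bi concluding suspend(X) conflicts with each Bj concluding ~suspend(X), and C1 conflicts with C2 and C3, giving eight top-level conflict pairs.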

ODeLP-based Agent Architecture
[Figure: perceptions enter through an updating mechanism that maintains the observations; queries are answered by the ODeLP inference engine, which draws on the observations, the defeasible rules, and the dialectical base.]

Conclusions
- Solid theoretical foundations for agent design should rest on proper formalisms for KR&R.
- Real-time issues are critical when modeling agent interaction in a MAS setting.
- Dialectical databases can help deal with these constraints.
- Defeasible Logic Programming: An Argumentative Approach, A. J. García and G. R. Simari, Theory and Practice of Logic Programming, Vol. 4(1), 2004.

Work in Progress
- Extending the analysis of ODeLP's properties.
- Complexity analysis of the ODeLP system.
- Implementing applications which use ODeLP as their knowledge representation and reasoning formalism.