Actions Planning and Defeasible Reasoning
Guillermo R. Simari, Alejandro J. García, Marcela Capobianco
Dept. of Computer Science and Engineering, Universidad Nacional del Sur, Argentina


[NMR 2004]

Outline
- Motivation
- The Argumentation Framework
- Actions and Defeasible Reasoning
- Examples
- Conclusions

Defeasible Logic Programming: DeLP

A Defeasible Logic Program (dlp) is a set of facts, strict rules, and defeasible rules, denoted P = (Π, Δ). Strict rules are written Head <- Body and defeasible rules Head -< Body. For example:

Facts: chicken(tina). penguin(opus). scared(tina).
Strict rules: bird(X) <- chicken(X). bird(X) <- penguin(X). ~flies(X) <- penguin(X).
Defeasible rules: flies(X) -< bird(X). ~flies(X) -< chicken(X). flies(X) -< chicken(X), scared(X).

Defeasible Argumentation

Def: Let L be a literal and P = (Π, Δ) a program. ⟨A, L⟩ is an argument for L if A is a set of defeasible rules in Δ such that:
1) there exists a defeasible derivation of L from Π ∪ A;
2) the set Π ∪ A is non-contradictory; and
3) A is minimal, that is, there is no proper subset A' of A such that A' satisfies 1) and 2).

Example program:

buy_shares(X) -< good_price(X)
~buy_shares(X) -< good_price(X), risky(X)
risky(X) -< in_fusion(X, Y)
risky(X) -< in_debt(X)
~risky(X) -< in_fusion(X, Y), strong(Y)

Facts: good_price(acme). in_fusion(acme, estron). strong(estron).

An argument for ~buy_shares(acme):
⟨{ ~buy_shares(acme) -< good_price(acme), risky(acme);
   risky(acme) -< in_fusion(acme, estron) }, ~buy_shares(acme)⟩

Subarguments

⟨B, Q⟩ is a subargument of ⟨A, L⟩ if B is an argument for Q and B ⊆ A. For the argument above:

A = { ~buy_shares(acme) -< good_price(acme), risky(acme);
      risky(acme) -< in_fusion(acme, estron) }
B = { risky(acme) -< in_fusion(acme, estron) }

Counter-argument

⟨{ ~risky(acme) -< in_fusion(acme, estron), strong(estron) }, ~risky(acme)⟩ is a counter-argument for the argument for ~buy_shares(acme), because { risky(acme), ~risky(acme) } is a contradictory set.

Defeaters

An argument ⟨B, P⟩ is a defeater for ⟨A, L⟩ if ⟨B, P⟩ is a counter-argument of ⟨A, L⟩ that attacks a subargument ⟨C, Q⟩ of ⟨A, L⟩, and one of the following conditions holds:
(a) ⟨B, P⟩ is better than ⟨C, Q⟩ (proper defeater), or
(b) ⟨B, P⟩ is not comparable to ⟨C, Q⟩ (blocking defeater).

00 11 22 33 22 33 44 33 44 55 11 22 Dialectical Tree Given a program  = ( ,  ), a literal L will be warranted if there is an argument  , L  built from , and that argument has a dialectical tree whose root node is marked U. That is, argument  , L  is an argument for which all the possible defeaters have been defeated. We will say that  is a warrant for L.   , L 

Marking of a Dialectical Tree

(Figure: a dialectical tree for ⟨A, L⟩ with every node marked U, undefeated, or D, defeated.) A leaf is marked U; an inner node is marked U if every one of its children is marked D, and D otherwise.
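The marking rule can be sketched as a short recursion. The `Node` structure below is a hypothetical representation of my own, not from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a dialectical tree: an argument together with the
    subtrees rooted at its defeaters."""
    label: str
    children: list["Node"] = field(default_factory=list)

def mark(node: Node) -> str:
    """Return 'U' (undefeated) or 'D' (defeated): a leaf is 'U';
    an inner node is 'U' iff every one of its children is 'D'."""
    if not node.children:
        return "U"
    return "U" if all(mark(child) == "D" for child in node.children) else "D"
```

For instance, an argument with a single undefeated defeater is marked D, but if that defeater is itself defeated, the root is reinstated and marked U.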

Answers in DeLP

Given a program P = (Π, Δ) and a query for L, the possible answers are:
- YES, if L is warranted.
- NO, if ~L is warranted.
- UNDECIDED, if neither L nor ~L is warranted.
- UNKNOWN, if L is not in the language of the program.
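The four answers amount to a small dispatch over warrant status. In this sketch, `warranted` and `lang` are assumed inputs that the full DeLP machinery would compute; the names are mine:

```python
def complement(lit: str) -> str:
    """Complement of a literal, with '~' marking strong negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def delp_answer(query: str, warranted: set, lang: set) -> str:
    """Map DeLP warrant status to YES / NO / UNDECIDED / UNKNOWN."""
    if query not in lang and complement(query) not in lang:
        return "UNKNOWN"
    if query in warranted:
        return "YES"
    if complement(query) in warranted:
        return "NO"
    return "UNDECIDED"
```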

Actions

Restricting DeLP

In this work we restrict the program that represents the knowledge base to a set Ψ of facts and a set Δ of defeasible rules, and denote the knowledge base K = (Ψ, Δ). Restricting the non-defeasible part of K to facts, eliminating strict rules, is motivated by the desire to simplify, at this stage of the research, the handling of changes in Ψ.

Actions

An action A is an ordered triple ⟨X, P, C⟩, where X is a consistent set of literals representing the consequences of executing A, P is a set of literals representing preconditions for A, and C is a set of constraints of the form not L, where L is a literal. Actions are denoted:

A: {X1, …, Xn} <-- {P1, …, Pm}, not {C1, …, Ck}

where not {C1, …, Ck} means {not C1, …, not Ck}, and not Ci means that Ci is not warranted.

Actions

Given A = ⟨X, P, C⟩, the condition that must be satisfied before A can be executed has two parts: P, which mentions the literals that must be warranted, and C, which mentions the literals that must not be warranted. Notice that there are three ways of satisfying a constraint in C = not {C1, …, Ck}:
- ~Ci is warranted, or
- Ci is undecided, or
- Ci is unknown,
leading to a more expressive representation.

Actions

For example, using this form of specification it is possible to express conditions such as: "If it did not rain today and it is unknown when it might rain, then water the garden," represented as:

water_garden: { water_garden(today) } <-- { ~rain(today) }, not { rain(X) }
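The triple ⟨X, P, C⟩ maps naturally onto a small record type. The following sketch encodes the watering action above; the field names are mine, not from the talk:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """An action <X, P, C>: effects, preconditions, constraints."""
    name: str
    effects: frozenset      # X: consequences of executing the action
    preconds: frozenset     # P: must be warranted before execution
    constraints: frozenset  # C: must NOT be warranted before execution

water_garden = Action(
    name="water_garden",
    effects=frozenset({"water_garden(today)"}),
    preconds=frozenset({"~rain(today)"}),
    constraints=frozenset({"rain(X)"}),
)
```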

Actions

Formally, let K = (Ψ, Δ) be an agent's knowledge base and Γ the set of actions available to this agent. An action A = ⟨X, P, C⟩ in Γ is applicable if every precondition Pi in P has a warrant built from K and every constraint Cj in C fails to be warranted. The effect of an applicable action A = ⟨X, P, C⟩ is the revision of Ψ by X, that is to say:

Ψ*X = (Ψ − X̄) ∪ X

where X̄ is the set of complements of the literals in X. Therefore, revision consists in removing from Ψ any literal that is complementary to a literal in X and then adding X to the resulting set.

Actions

For example, let Ψ = {a, b, c, d} and Δ = {(p -< b), (q -< r), (r -< d), (~r -< s), (s -< v), (~s -< a, b), (w -< b), (~w -< b, c)}, and let Γ contain only the action

A: {~a, d, x} <-- {a, p, q}, not {t, ~t, w}

If the action is executed, the set of facts becomes Ψ = {b, c, ~a, d, x}.

Regression Planning

The following would be a naive approach (w(G) denotes the subset of G that is warranted from the current knowledge base):

Repeat
- Select an action A = ⟨X, P, C⟩ such that X ∩ (G − w(G)) ≠ ∅ and X̄ ∩ G = ∅.
- Recompute G as (G − X) ∪ P.
Until G ⊆ w(G)
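The loop above can be sketched as follows. Here `w` is passed in as a function over goal sets, and the action format and all names are mine; this is a toy under those assumptions, not the talk's implementation:

```python
def complement(lit: str) -> str:
    """Complement of a literal, with '~' marking strong negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def naive_regression_plan(goal, actions, w):
    """Naive regression planner: `actions` maps a name to (X, P);
    w(G) returns the subset of G currently warranted.
    Returns a plan in execution order, or None if selection gets stuck."""
    goal, steps = set(goal), []
    while not goal <= w(goal):
        for name, (x, p) in actions.items():
            xbar = {complement(lit) for lit in x}
            if x & (goal - w(goal)) and not (xbar & goal):
                goal = (goal - x) | set(p)   # regress the goal through the action
                steps.append(name)
                break
        else:
            return None                      # no selectable action
    return list(reversed(steps))             # selection order reversed for execution
```

For instance, with A1 achieving {a} from {b, c}, A3 achieving {c} from {e}, and only b and e warranted, the planner selects A1 then A3 and returns the execution order [A3, A1].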

Some interesting problems

Argument Clipping

G = {a}
A1: {a} <-- {b, c}, not {}
A2: {~x, b} <-- {e}, not {}
Ψ = {e, x}, Δ = {(c -< x)}

Action A1 achieves a and needs b and c warranted. Argument A = {c -< x} is a warrant for c. Action A2 achieves b and ~x, and G becomes {e, c}. Both literals are warranted from (Ψ, Δ), but the sequence [A2, A1] does not do the job because A2 erases x.

Enabling a Defeater

G = {a}
A1: {a} <-- {b, c}, not {}
A2: {~x, b} <-- {e}, not {}
Ψ = {e, x, d}, Δ = {(c -< d), (~c -< ~x)}

Action A1 achieves a and needs b and c warranted. Argument A = {c -< d} is a warrant for c. Action A2 achieves b and ~x, and G becomes {e, c}. Both literals are warranted from (Ψ, Δ), but the sequence [A2, A1] does not work because A2 creates a defeater for A.

Disabling a Defeater

G = {a}
A1: {a} <-- {b, c}, not {}
A2: {~x, b} <-- {e}, not {}
Ψ = {e, x, g}, Δ = {(c -< d), (d -< e), (~d -< e, f), (f -< g), (~f -< x)}

Action A1 achieves a and needs b and c warranted. Argument A = {(c -< d), (d -< e)} is a warrant for c: even though B = {(~d -< e, f), (f -< g)} is a defeater for A, argument C = {(~f -< x)} is a defeater for B and reinstates A. To obtain b we select action A2. The effect of A2 is {~x, b} and G becomes {e, c}; both literals are warranted from (Ψ, Δ), but the sequence [A2, A1] does not do the job: since A2 removes x, C can no longer be built, B becomes undefeated, A becomes defeated, and c is no longer warranted.

Argumentation Line

Given P = (Π, Δ) and an argument ⟨A0, L0⟩ obtained from P, an argumentation line for ⟨A0, L0⟩ is a sequence of arguments obtained from P, denoted Λ = [⟨A0, L0⟩, ⟨A1, L1⟩, …], where each element ⟨Ai, Li⟩, i > 0, is a defeater for ⟨Ai−1, Li−1⟩.

Argumentation Line

Given an argumentation line Λ = [⟨A0, L0⟩, ⟨A1, L1⟩, …], the subsequence ΛS = [⟨A0, L0⟩, ⟨A2, L2⟩, …] contains the supporting arguments, and ΛI = [⟨A1, L1⟩, ⟨A3, L3⟩, …] the interfering arguments.


Two Problems

1) An action deletes one of the literals used in the supporting arguments of a line that warrants L0.
2) An action could add literals that aid in the construction of new defeaters for the supporting arguments.

Solution: protect all the literals used in the supporting part of the argumentation line, and ensure that no new defeaters for the supporting arguments can be built.

Protecting Warrants

Let K = (Ψ, Δ) be the agent's knowledge base, G the agent's goal, and [A1, A2, …, An] the actions selected by the regression planner. Let {⟨B1, L1⟩, …, ⟨Bk, Lk⟩} be the set of warrants Bi for Li that are assumed to hold for the selection of the actions [A1, A2, …, An]. We define:

Protect = ∪ i=1..k Weak(SuppArg(⟨Bi, Li⟩))
PossAttack = ∪ i=1..k Facts(SuppArg(⟨Bi, Li⟩))

Protecting Warrants

Repeat
- Select an action A = ⟨X, P, C⟩ such that
  1. X ∩ (G − w(G)) ≠ ∅
  2. X̄ ∩ Protect = ∅
  3. there is no new undefeated defeater, built from Ψ*X, for a warrant of a member of PossAttack.
- Recompute G as (G − X) ∪ P.
- Update Protect and PossAttack accordingly.
Until G ⊆ w(G)

Action Selection

G = {a}
A1: {a} <-- {b, c}, not {}
A2: {~x, b} <-- {e}, not {}
A3: {c} <-- {e}, not {}
Ψ = {e, x}, Δ = {(c -< x)}

Action A1 achieves a and needs b and c warranted. Argument A = {(c -< x)} is a warrant for c, so Protect = {x}. To obtain b, action A2 is considered but discarded because X̄ ∩ Protect = {x, ~b} ∩ {x} = {x} ≠ ∅. Therefore no plan is found, although a plan exists: [A2, A3, A1].

If the planner discards an action because it deletes a protected literal necessary for the warrant of a literal c, the planner could search for another way of warranting c and insert a subsidiary plan for that.

Conclusions

- We have introduced a way in which argumentation can be used in the definition of actions and in the combination of those actions to form a plan.
- We have explored how this new approach can be integrated into a simple planning algorithm.
- The use of defeasible argumentation in progression planning is almost straightforward; however, regression planning quickly becomes more difficult.
- We are working on the implementation of a planner based on the framework described.

Questions?