Slide 1: Approved Models for Normal Logic Programs
Luís Moniz Pereira and Alexandre Miguel Pinto
Centre for Artificial Intelligence, Universidade Nova de Lisboa
October 19th, 2007
Slide 2: Approved Models for Normal Logic Programs (Outline)
- Motivation
- Notation
- The Argumentation Perspective
- Our Argumentation
- Program Layering
- Collaborative Argumentation
- Properties
- Conclusions and Future Work
Slide 3: Motivation
- Generalize the Argumentation Perspective to all Normal Logic Programs (NLPs) by permitting inconsistency removal
- Allow revising arguments by Reductio ad Absurdum (RAA)
- Identify the 2-valued complete, consistent, and most skeptical models of any NLP
- Identify those models respecting the layered stratification of a program
Slide 4: Notation and Example
- An NLP is a set of rules of the form h ← b1, ..., bn, not c1, ..., not cm, where 'not' denotes default negation
- Example:
    intend_to_invade ← iran_will_have_WMD
    iran_will_have_WMD ← not intend_to_invade
- An argument A is a set of negative hypotheses (default literals). Above, the argument {not intend_to_invade} attacks itself, i.e., it leads to the conclusion intend_to_invade, and so cannot be accepted
- This program has no Stable Models
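The self-attack can be made concrete with a small Python sketch, assuming a simple (head, positive body, negative body) encoding of the rules; the function name consequences is ours and purely illustrative. It derives the conclusions supported by the argument {not intend_to_invade} and shows that they include the very atom assumed false:

    # A minimal sketch (rule encoding and function names are ours, illustrative only):
    # each rule is (head, positive body, negative body).
    rules = [
        ("intend_to_invade", ["iran_will_have_WMD"], []),
        ("iran_will_have_WMD", [], ["intend_to_invade"]),
    ]

    def consequences(rules, hypotheses):
        """Conclusions obtained by assuming the atoms in `hypotheses` to be false
        (the default literals of the argument) and firing rules to a fixed point."""
        derived, changed = set(), True
        while changed:
            changed = False
            for head, pos, neg in rules:
                if (head not in derived
                        and all(p in derived for p in pos)
                        and all(n in hypotheses for n in neg)):
                    derived.add(head)
                    changed = True
        return derived

    argument = {"intend_to_invade"}       # the argument {not intend_to_invade}
    print(consequences(rules, argument))  # {'iran_will_have_WMD', 'intend_to_invade'}
    print(argument & consequences(rules, argument))  # non-empty: the argument attacks itself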
Slide 5: The Argumentation Perspective
- Though {not intend_to_invade} cannot be accepted, by applying RAA in a 2-valued setting its contrary, intend_to_invade, must be true
- For 2-valued completeness and consistency, iran_will_have_WMD is then false
- In general, using an RAA-inclusive Argumentation Perspective, how do we specify and find the 2-valued complete, consistent, and most skeptical models?
Slide 6: Received Wisdom
- Classically, an Admissible Argument is one that:
  - does not attack itself
  - counter-attacks all arguments attacking it
- Dung's Preferred Extensions are the set-inclusion maximal Admissible Arguments
- In general, Preferred Extensions are 3-valued: in the example above, the only Preferred Extension is the empty argument {}, yielding a 3-valued model whose literals are all undefined
- 2-valued Classical Arguments do not exist for all NLPs!
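For illustration, the classical admissibility condition just stated can be checked over an abstract attack relation; the hedged Python sketch below uses our own representation and function name, not anything from the paper:

    # A minimal sketch of Dung-style admissibility over an abstract attack relation
    # (illustrative only; not the authors' implementation).
    def is_admissible(S, attacks):
        """S: a set of arguments; attacks: a set of (attacker, target) pairs."""
        # conflict-free: no member of S attacks a member of S
        if any((a, b) in attacks for a in S for b in S):
            return False
        # every attacker of S must be counter-attacked by some member of S
        attackers = {a for (a, b) in attacks if b in S}
        return all(any((d, a) in attacks for d in S) for a in attackers)

    # The argument A = {not intend_to_invade} attacks itself:
    attacks = {("A", "A")}
    print(is_admissible({"A"}, attacks))   # False: {A} is not conflict-free
    print(is_admissible(set(), attacks))   # True: only the empty argument is admissible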
Slide 7: Our Argumentation
- We wish to provide an Argumentation Perspective in which every NLP has a 2-valued semantics based on 2-valued Arguments
- Dung's 2-valued Arguments for NLPs correspond exactly to their Stable Models (SMs)
- By completing Dung's Arguments via RAA, we obtain conservative 2-valued extensions for the SMs of any NLP
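The correspondence with Stable Models can be illustrated with a brute-force SM check based on the Gelfond-Lifschitz reduct; the sketch below is our own illustrative code (not the authors') and merely confirms that the example program from the Notation slide has no Stable Models:

    # A hedged sketch: Stable Model check via the Gelfond-Lifschitz reduct.
    from itertools import chain, combinations

    rules = [
        ("intend_to_invade", ["iran_will_have_WMD"], []),
        ("iran_will_have_WMD", [], ["intend_to_invade"]),
    ]
    atoms = sorted({"intend_to_invade", "iran_will_have_WMD"})

    def least_model(definite_rules):
        m, changed = set(), True
        while changed:
            changed = False
            for head, pos in definite_rules:
                if head not in m and all(p in m for p in pos):
                    m.add(head)
                    changed = True
        return m

    def is_stable(interp):
        # GL reduct: drop rules defeated by interp, then drop the remaining default literals
        reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & interp)]
        return least_model(reduct) == interp

    candidates = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    print([set(c) for c in candidates if is_stable(set(c))])   # []  -- no Stable Models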
Slide 8: Approved Models (AMs)
- Our approach allows adding positive literals as argument hypotheses, but only insofar as needed to settle RAA application
- Positive hypotheses resolve the Odd Loops Over Negation (OLONs) addressed by RAA; similarly, they also resolve Infinite Chains Over Negation (ICONs)
- Intuitively, AMs are 2-valued, maximize default literals, and minimally add positive literals so as to be complete
- AMs without positive literals are the SMs
Slide 9: Top-down Querying
- When querying top-down we can detect OLONs "on-the-fly" and resolve them with RAA
- SM cannot employ top-down query procedures because that semantics is not Relevant; our extension of SM permits them because it is
- A query literal is supported by the arguments found in its top-down derivation
- Relevance of AM guarantees that any supporting arguments are extendable to a complete model
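As a rough illustration only, the sketch below statically approximates OLON detection by a parity-tracking reachability check on the atom dependency graph; the paper's procedure detects OLONs during top-down query evaluation, and the encoding and function name here are purely ours:

    # A hedged approximation (not the authors' procedure): an atom is flagged when it
    # can reach itself in the dependency graph through an odd number of default negations.
    from collections import deque

    def olon_atoms(rules):
        edges = ([(h, b, 0) for h, pos, _ in rules for b in pos]
                 + [(h, b, 1) for h, _, neg in rules for b in neg])
        atoms = {h for h, _, _ in rules} | {b for _, b, _ in edges}
        flagged = set()
        for a in atoms:
            seen, queue = {(a, 0)}, deque([(a, 0)])
            while queue:
                atom, parity = queue.popleft()
                for h, b, neg in edges:
                    if h != atom:
                        continue
                    nxt = (b, parity ^ neg)
                    if nxt == (a, 1):
                        flagged.add(a)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return flagged

    # The layering example used on a later slide: d ← not c, c ← not b, b ← not a, a ← not a
    P = [("d", [], ["c"]), ("c", [], ["b"]), ("b", [], ["a"]), ("a", [], ["a"])]
    print(olon_atoms(P))   # {'a'}: only 'a' lies on an odd loop over negation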
Slide 10: ICONs
- An ICON:
    p(X) ← p(s(X))
    p(X) ← not p(s(X))
- Ground version:
    p(0) ← p(s(0))
    p(0) ← not p(s(0))
    p(s(0)) ← p(s(s(0)))
    p(s(0)) ← not p(s(s(0)))
    ...
- Approved Models (AMs): {p(X)}
- Ground Approved Model: {p(0), p(s(0)), p(s(s(0))), ...}
- This program has no Stable Models!
Slide 11: Program Layering
- Example:
    d ← not c
    c ← not b
    b ← not a
    a ← not a
- Approved Models (the first is an RSM): {a,c} and {a,b,d}
  - Given a, b is false in the WFM
- There are no Stable Models
- The Approved Models do not necessarily respect the Layering (which differs from stratification)
- Respect of Layering is an optional further requirement
- The complying Approved Models are the Revised Stable Models
Slide 12: Program Layering (cont.)
- WFM = Well-Founded Model
- Program division P // I of P by interpretation I:
  - remove from P the rules with not a in the body, where a ∈ I
  - remove from the bodies of the remaining rules the positive literals a ∈ I
- M respects the Layering of P iff, given some a ∈ M, let L = {b ∈ M : b is in the call-graph of a but not vice-versa}; then a is True or Undefined in the WFM of P // L
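The division can be written down directly from the two steps above; the following Python sketch (with our own rule encoding, purely illustrative) applies it to the layering example of the previous slide:

    # A minimal sketch of the program division P // I as stated above:
    # drop rules defeated by I, then drop from the remaining bodies the
    # positive literals that are in I.
    def divide(rules, I):
        kept = [(h, pos, neg) for h, pos, neg in rules if not (set(neg) & I)]
        return [(h, [p for p in pos if p not in I], neg) for h, pos, neg in kept]

    # The layering example: d ← not c, c ← not b, b ← not a, a ← not a
    P = [("d", [], ["c"]), ("c", [], ["b"]), ("b", [], ["a"]), ("a", [], ["a"])]
    print(divide(P, {"a"}))
    # [('d', [], ['c']), ('c', [], ['b'])]  -- the rules for b and a are removed,
    # so b is false in the WFM of P // {a}, as noted on the previous slide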
Slide 13: Collaborative Argumentation
- Collaborative Argumentation caters for consensus arguments wrt an NLP
- Our approach enables it, e.g.:
  - merge arguments into one (possibly self-attacking) argument
  - build AMs from it by non-deterministically revising (to positive) the negative hypotheses leading to self-attacks
- An AM is found when a negative-maximal and positive-minimal argument is reached
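A hedged sketch of the merge-then-revise idea, reusing the WMD example and our earlier rule encoding; this only illustrates the intuition and is not the authors' procedure:

    # Merge two arguments, test for self-attack, and test again after revising
    # one offending negative hypothesis to a positive one (illustrative only).
    rules = [
        ("intend_to_invade", ["iran_will_have_WMD"], []),
        ("iran_will_have_WMD", [], ["intend_to_invade"]),
    ]

    def attacks_itself(rules, neg_hyps, pos_hyps=frozenset()):
        """True iff the conclusions supported by the hypotheses contradict one
        of the negative hypotheses (i.e., the argument attacks itself)."""
        derived, changed = set(pos_hyps), True
        while changed:
            changed = False
            for head, pos, neg in rules:
                if (head not in derived
                        and all(p in derived for p in pos)
                        and all(n in neg_hyps for n in neg)):
                    derived.add(head)
                    changed = True
        return bool(derived & set(neg_hyps))

    # Merge the arguments {not intend_to_invade} and {not iran_will_have_WMD}:
    merged = {"intend_to_invade", "iran_will_have_WMD"}
    print(attacks_itself(rules, merged))                                        # True
    # Revise the offending hypothesis 'not intend_to_invade' to positive:
    print(attacks_itself(rules, {"iran_will_have_WMD"}, {"intend_to_invade"}))  # False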
Slide 14: Properties
- AMs are consistent 2-valued completions of Preferred Extensions
- AM existence is guaranteed for NLPs
- AM is Relevant (bonus: and Cumulative)
- Layer-respecting AMs are the Revised SMs
- AMs, RSMs and SMs coincide on programs with neither OLONs nor ICONs
Slide 15: Conclusions and Future Work
- The Argumentation approach provides a general, flexible framework
- The framework can seamlessly accommodate other cases of inconsistency, namely those arising from Integrity Constraints and Explicit Negation, and thus encompass (collaborative) Belief Revision
- The results could be generalized to Argumentation settings not specific to Logic Programs, keeping to the Occam precept, i.e., skepticism: maximizing negative assumptions with the help of minimal positive ones