Reductio ad Absurdum Argumentation in Normal Logic Programs
Luís Moniz Pereira and Alexandre Miguel Pinto
CENTRIA – Centro de Inteligência Artificial, UNL, Lisbon, Portugal
(lmp|amp)@di.fct.unl.pt
ArgNMR'07, May 14th, 2007, Tempe, Arizona
Outline
Background and Motivation
Revision Complete Scenarios
Stable Models and Revision Complete Scenarios
Collaborative Argumentation
Conclusions and Future Work
Background
A (ground) Normal Logic Program P is a set of rules of the form
  h ← b1, b2, ..., bn, not c1, not c2, ..., not cm   (n, m ≥ 0)
Motivation
Under the Stable Models (SM) semantics, a Normal Logic Program (NLP) does not always have a semantics (at least one model).
If several NLPs are put together (joining KBs), the resulting NLP may have no SM. Example:
  travel ← not mountain
  mountain ← not beach
  beach ← not travel
How can we ensure that every NLP has at least one 2-valued model?
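The absence of a stable model for the joined program can be checked mechanically. Below is a minimal Python sketch (the brute-force approach and all names are mine, not from the slides): it enumerates every interpretation and tests it against the Gelfond-Lifschitz reduct. Since every rule body here is purely negative, the reduct's least model is simply the heads of the surviving rules.

```python
from itertools import combinations

# The joined program from the slide; each rule is (head, negated_body),
# since every body here consists only of default-negated atoms.
rules = [("travel", ["mountain"]),
         ("mountain", ["beach"]),
         ("beach", ["travel"])]
atoms = ["beach", "mountain", "travel"]

def gl_least_model(candidate):
    # Gelfond-Lifschitz reduct: delete every rule with a negated atom
    # that is true in the candidate; the surviving rules have empty
    # bodies, so the least model is simply their heads.
    return {h for h, neg in rules if not any(n in candidate for n in neg)}

candidates = [set(c) for r in range(len(atoms) + 1)
              for c in combinations(atoms, r)]
stable_models = [m for m in candidates if gl_least_model(m) == m]
print(stable_models)  # [] -- the joined program has no stable model
```

No candidate interpretation reproduces itself under the reduct, which is exactly the situation the Revision Complete Scenarios construction is meant to repair.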
Revision Complete Scenarios
Classically, a scenario is a Horn theory P ∪ H, where H is a set of negative (default-negated) hypotheses.
We consider NLPs as argumentation systems:
Take one set H⁻ of negative hypotheses and draw all possible conclusions from P ∪ H⁻, i.e., compute the least model least(P ∪ H⁻).
If contradictions, i.e., pairs {not_L, L}, arise in least(P ∪ H⁻):
  Revise the initial set H⁻ of negative hypotheses by removing one negative hypothesis not L such that {not_L, L} ⊆ least(P ∪ H⁻).
  Repeat until there are no contradictions in least(P ∪ H⁻).
Add to H⁺ the positive hypotheses needed to ensure 2-valued completeness of least(P ∪ H), where H = H⁻ ∪ H⁺.
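The revision loop on this slide can be sketched in a few lines of Python (a hypothetical rendering of mine: default literals are encoded as plain atoms named `not_x`, and the choice of which contradicted hypothesis to drop is made deterministic for reproducibility):

```python
def least(rules, hyps):
    # Least model of the Horn theory P ∪ H, computed by forward
    # chaining; default literals are encoded as plain atoms 'not_x'.
    model = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def contradictions(model):
    # Atoms L with both L and not_L in the least model.
    return {a for a in model if "not_" + a in model}

# P: travel <- not mountain.  mountain <- not beach.  beach <- not travel.
P = [("travel", ["not_mountain"]),
     ("mountain", ["not_beach"]),
     ("beach", ["not_travel"])]

H_neg = {"not_mountain", "not_beach", "not_travel"}
while contradictions(least(P, H_neg)):
    # Revise: drop one negative hypothesis not_L whose atom L is
    # contradicted (alphabetically first, just to fix the outcome).
    L = sorted(contradictions(least(P, H_neg)))[0]
    H_neg.discard("not_" + L)

print(sorted(least(P, H_neg)))  # ['not_mountain', 'travel']
```

The loop stops at a consistent least model; the remaining completion step (adding positive hypotheses to H⁺ until every atom has a truth value) then finishes the scenario.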
Revision Complete Scenarios
A Revision Complete Scenario is a Horn theory P ∪ H, where H = H⁺ ∪ H⁻ is a set of hypotheses, positive and negative:
H⁻ is a Weakly Admissible set of negative hypotheses, i.e., every evidence E = {not L1, not L2, ..., not Ln} attacking H⁻ is counter-attacked by P ∪ H⁻ ∪ E.
H⁺ contains the non-redundant and unavoidable positive hypotheses needed to ensure 2-valued completeness and consistency of the model for P ∪ H.
H⁺ is non-redundant iff there is no h⁺ in H⁺ already derived by the remaining hypotheses, i.e., P ∪ H \ {h⁺} ⊬ h⁺.
H⁺ is unavoidable iff every h⁺ in H⁺ is indispensable to guarantee that P ∪ H is consistent, i.e., least(P ∪ H \ {h⁺} ∪ {not h⁺}) is inconsistent (contains a pair {not_L, L}).
An example:
P: travel ← not mountain
   mountain ← not beach
   beach ← not travel
H⁻ = {not mountain, not beach, not travel}, H⁺ = ∅
least(P ∪ H) = {not mountain, not beach, not travel, mountain, beach, travel}
Select one L such that least(P ∪ H) ⊇ {L, not L}: L = mountain. Remove not mountain from H⁻.
H⁻ = {not beach, not travel}
least(P ∪ H) = {not beach, not travel, mountain, beach}
Select one L such that least(P ∪ H) ⊇ {L, not L}: L = beach. Remove not beach from H⁻.
H⁻ = {not travel}
least(P ∪ H) = {not travel, beach}, which is consistent but not 2-valued complete.
We complete the scenario by adding the positive hypothesis 'mountain' to H⁺:
H = H⁻ ∪ H⁺ = {not travel} ∪ {mountain} = {not travel, mountain}
H⁺ = {mountain}
least(P ∪ H) = {not travel, beach, mountain} is consistent and 2-valued complete.
The other two alternative scenarios ({not beach, mountain, travel} and {not mountain, beach, travel}) are symmetrical to this one.
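The completion step of this example can be reproduced with a small Python sketch (helper names and the `not_x` encoding of default literals are my own, not the authors'):

```python
def least(rules, hyps):
    # Forward chaining over a Horn theory; 'not_x' encodes not x.
    model = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

P = [("travel", ["not_mountain"]),
     ("mountain", ["not_beach"]),
     ("beach", ["not_travel"])]
atoms = ["travel", "mountain", "beach"]

# After the two revisions in the example, H- = {not travel}:
H = {"not_travel"}
m = least(P, H)
undefined = [a for a in atoms if a not in m and "not_" + a not in m]
print(undefined)  # ['mountain'] -- consistent but not 2-valued complete

# Complete the scenario with the positive hypothesis 'mountain':
H |= set(undefined)
print(sorted(least(P, H)))  # ['beach', 'mountain', 'not_travel']
```

The final least model assigns a truth value to every atom (travel false, mountain and beach true), matching the revision complete scenario derived on the slide.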
Stable Models and Revision Complete Scenarios
Every Stable Model of an NLP P is the Least Model of some Revision Complete Scenario P ∪ H, where H = H⁺ ∪ H⁻ and H⁺ = ∅.
Stable Models do not exist for every NLP, but Revision Complete Scenarios do.
The least models of Revision Complete Scenarios are the Revised Stable Models of the NLP.
Collaborative Argumentation
Classically, argumentation is viewed as a battle between opponents in which each one's hypotheses attack the others'.
Our approach facilitates collaborative argumentation in the sense that it provides a method for finding a consensus solution of two (or more) opposing arguments. This is done by:
Merging the different arguments H1, H2, ..., Hn into a single H.
Revising the negative hypotheses needed to eliminate inconsistencies in P ∪ H.
Adding the unavoidable and non-redundant positive hypotheses needed to ensure 2-valued completeness.
Conclusions
Revision Complete Scenarios extend the Stable Models semantics, guaranteeing the existence of a 2-valued complete and consistent model.
Stable Models can be viewed as the result of an iterative process of belief revision (revising hypotheses from negative to positive).
Revision Complete Scenarios provide a framework for Collaborative Argumentation.
Future Work
Extend this approach to Generalized Logic Programs.
Extend this argumentation approach to rWFS.
Integrate with other Belief Revision methods.
Further examples
P = { a ← not a }: the positive hypothesis a is unavoidable.
P = { b ← a,  a ← not a }: the hypothesis b is redundant; a is non-redundant.
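Reading the first program as { a ← not a } and the second as { b ← a, a ← not a }, both properties can be demonstrated with a short Python sketch (the forward-chaining helper and the `not_x` atom encoding are my own, not from the slides):

```python
def least(rules, hyps):
    # Forward chaining over a Horn theory; 'not_x' encodes not x.
    model = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# P1 = { a <- not a }.  Dropping the positive hypothesis 'a' forces the
# assumption not_a, whose consequences are inconsistent, so 'a' is
# unavoidable.
P1 = [("a", ["not_a"])]
m1 = least(P1, {"not_a"})
print({"a", "not_a"} <= m1)  # True -> inconsistent, so 'a' is unavoidable

# P2 = { b <- a,  a <- not a }.  With the positive hypothesis 'a',
# 'b' is already derived, so also taking 'b' as a hypothesis would be
# redundant; 'a' itself is non-redundant.
P2 = [("b", ["a"]), ("a", ["not_a"])]
m2 = least(P2, {"a"})
print("b" in m2)  # True -> 'b' would be a redundant hypothesis
```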