LP and Non-Monotonicity
LP includes a non-monotonic form of default negation: not L is true if L cannot (now) be proven. This feature is used for representing incomplete knowledge: with incomplete knowledge, assume hypotheses and jump to conclusions; if (later) the conclusions are proven false, withdraw some hypotheses to regain consistency.

Typical example
All birds fly. Penguins are an exception:
  flies(X) ← bird(X), not ab(X).
  bird(a).
  ab(X) ← penguin(X).
This program concludes flies(a), by assuming not ab(a).
If later we learn penguin(a):
– Add: penguin(a).
– Goes back on the assumption not ab(a).
– No longer concludes flies(a).
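For illustration only, this default behaviour can be reproduced in standard Prolog, where default negation is written \+ (a sketch using Prolog's negation as failure, not the exact semantics discussed in these slides):

    % Illustrative sketch (not from the slides).
    :- dynamic penguin/1.

    % flies(X) <- bird(X), not ab(X).
    flies(X) :- bird(X), \+ ab(X).
    % ab(X) <- penguin(X).
    ab(X) :- penguin(X).
    bird(a).

    % ?- flies(a).
    % true.      (ab(a) cannot be proven, so not ab(a) is assumed)
    % ?- assertz(penguin(a)), flies(a).
    % false.     (the assumption not ab(a) is withdrawn, flies(a) is no longer concluded)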

LP representing a static world
The work on LP allows the (non-monotonic) addition of new knowledge. But:
– What we have seen so far does not consider this evolution of knowledge.
– LPs represent static knowledge of a given world in a given situation.
– The issue of how to add new information to a logic program has not yet been addressed.

Knowledge Evolution
Up to now we have not considered the evolution of knowledge. In real situations knowledge evolves by:
– completing it with new information;
– changing it according to changes in the world itself.
Simply adding the new knowledge may lead to contradiction. In many cases a process for restoring consistency is desired.

Revision and Updates
In real situations knowledge evolves by:
– completing it with new information (Revision);
– changing it according to changes in the world itself (Updates).
These forms of evolution require a differentiated treatment. Example:
– I know that I have a flight booked for London (either for Heathrow or for Gatwick).
– Revision: I learn that it is not for Heathrow. I conclude my flight is for Gatwick.
– Update: I learn that flights for Heathrow were cancelled. Either I have a flight for Gatwick or no flight at all.

AGM Postulates for Revision
For revising a logical theory T with a formula F, first modify T so that it does not derive ¬F, and then add F. The contraction of T by a formula F, T⁻(F), should obey:
1. T⁻(F) has the same language as T
2. Th(T⁻(F)) ⊆ Th(T)
3. If T |≠ F then T⁻(F) = T
4. If |≠ F then T⁻(F) |≠ F
5. Th(T) ⊆ Th(T⁻(F) ∪ {F})
6. If |= F ↔ G then Th(T⁻(F)) = Th(T⁻(G))
7. T⁻(F) ∩ T⁻(G) ⊆ T⁻(F ∧ G)
8. If T⁻(F ∧ G) |≠ F then T⁻(F ∧ G) ⊆ T⁻(F)

Epistemic Entrenchment
The question in general theory revision is how to change a theory so that it obeys the postulates: which formulas to remove and which to keep? In general this is done by defining preferences among formulas: some can and some cannot be removed. Epistemic entrenchment: some formulas are "more believed" than others. This is quite complex in general theories; in LP, there is a natural notion of "more believed".

Logic Program Revision
The problem:
– A LP represents consistent incomplete knowledge;
– new factual information comes;
– how to incorporate the new information?
The solution:
– Add the new facts to the program.
– If the union is consistent, this is the result.
– Otherwise, restore consistency of the union.
The new problem:
– How to restore consistency of an inconsistent program?

Simple revision example (1)
P:  flies(X) ← bird(X), not ab(X).
    bird(a).
    ab(X) ← penguin(X).
We learn penguin(a). P ∪ {penguin(a)} is consistent. Nothing more to be done.
We learn instead ¬flies(a). P ∪ {¬flies(a)} is inconsistent. What to do?
If an assumption supports a contradiction, then go back on that assumption.
Since the inconsistency rests on the assumption not ab(a), remove that assumption (e.g. by adding the fact ab(a), or forcing it undefined with ab(a) ← u), obtaining a new program P'.
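A small Prolog sketch of how such a contradiction can be detected and then removed, representing explicit negation with a made-up neg/1 wrapper (an illustration under those assumptions, not the revision procedure itself):

    % Illustrative sketch (not from the slides); neg/1 stands for explicit negation.
    :- dynamic penguin/1, ab/1, neg/1.

    flies(X) :- bird(X), \+ ab(X).
    ab(X)    :- penguin(X).
    bird(a).

    % The program is contradictory if some literal and its explicit negation both hold.
    contradiction :- flies(X), neg(flies(X)).

    % ?- contradiction.                            false: P is consistent
    % ?- assertz(neg(flies(a))), contradiction.    true:  P U {¬flies(a)} is contradictory
    % ?- assertz(ab(a)), contradiction.            false: going back on not ab(a) restores consistency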

Simple revision example (2)
P:  flies(X) ← bird(X), not ab(X).
    bird(a).
    ab(X) ← penguin(X).
If later we learn flies(a): P' ∪ {flies(a)} is inconsistent, and the contradiction does not depend on assumptions. Cannot remove contradiction! Some programs are non-revisable.

What to remove?
Which assumptions should be removed?
  normalWheel ← not flatTyre, not brokenSpokes.
  ¬normalWheel ← wobblyWheel.
  flatTyre ← leakyValve.
  flatTyre ← puncturedTube.
  wobblyWheel.
– Contradiction can be removed by either dropping not flatTyre or not brokenSpokes.
– We'd like to delve deeper in the model and (instead of not flatTyre) drop either not leakyValve or not puncturedTube.

Revisables
Solution: define a set of revisables.
  normalWheel ← not flatTyre, not brokenSpokes.
  ¬normalWheel ← wobblyWheel.
  flatTyre ← leakyValve.
  flatTyre ← puncturedTube.
  wobblyWheel.
Revisables = not {leakyValve, puncturedTube, brokenSpokes}
The revisions in this case are {not lv}, {not pt}, and {not bs}.

Integrity Constraints
For convenience, instead of:
  ¬normalWheel ← wobblyWheel
we may use the denial:
  ⊥ ← normalWheel, wobblyWheel
ICs can be further generalized into:
  L1 ∨ … ∨ Ln ← Ln+1 ∧ … ∧ Lm
where the Li are literals (possibly of the form not L).

ICs and Contradiction
In an ELP with ICs, add for every atom A:
  ⊥ ← A, ¬A
A program P is contradictory iff P ⊢ ⊥, where ⊢ is the paraconsistent derivation relation of SLX.

Algorithm for 3-valued revision
– Find all derivations of ⊥, collecting for each one the set of revisables supporting it. Each such set is a support set.
– Compute the minimal hitting sets of the support sets. Each is a removal set.
– A revision of P is obtained by adding {A ← u : not A ∈ R}, where R is a removal set of P.

(Minimal Hitting Sets)
H is a hitting set of S = {S1, …, Sn} iff H ∩ S1 ≠ {} and … and H ∩ Sn ≠ {}.
H is a minimal hitting set of S iff it is a hitting set of S and there is no other hitting set H' of S such that H' ⊂ H.
Example:
– Let S = {{a,b},{b,c}}.
– Hitting sets are {a,b}, {a,c}, {b}, {b,c}, {a,b,c}.
– Minimal hitting sets are {b} and {a,c}.
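A possible Prolog sketch of this computation (the predicate names hitting_set/2 and minimal_hitting_set/2 are invented here for illustration):

    % Illustrative sketch (not from the slides).
    :- use_module(library(lists)).
    :- use_module(library(apply)).

    % hits(H, Sets): H intersects every set in Sets.
    hits(_, []).
    hits(H, [S|Ss]) :- intersection(H, S, I), I \== [], hits(H, Ss).

    % subset_of(Universe, Sub): Sub is a subset of Universe (kept in order).
    subset_of([], []).
    subset_of([X|T], [X|S]) :- subset_of(T, S).
    subset_of([_|T], S)     :- subset_of(T, S).

    hitting_set(Sets, H) :-
        foldl(union, Sets, [], U),       % U = union of all the sets
        subset_of(U, H),
        hits(H, Sets).

    minimal_hitting_set(Sets, H) :-
        hitting_set(Sets, H),
        \+ ( hitting_set(Sets, H2), H2 \== H, subset(H2, H) ).

    % ?- minimal_hitting_set([[a,b],[b,c]], H).
    % H = [c,a] ;  H = [b] ;  false.     (i.e. the minimal hitting sets {a,c} and {b})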

Example
Rev = not {a, b, c}
  ⊥ ← p, q.
  p ← not a.
  q ← not b, r.
  r ← not b.
  r ← not c.
[Derivation tree for ⊥: p is supported by not a; q is supported by not b and r, where r is supported by not b or by not c.]
Support sets are: {not a, not b} and {not a, not b, not c}.
Removal sets are: {not a} and {not b}.

Simple diagnosis example
  inv(G,I,0) ← node(I,1), not ab(G).
  inv(G,I,1) ← node(I,0), not ab(G).
  node(b,V) ← inv(g1,a,V).
  node(a,1).
  ¬node(b,0).
  % Fault model
  inv(G,I,0) ← node(I,0), ab(G).
  inv(G,I,1) ← node(I,1), ab(G).
[Figure: inverter g1 with input a = 1 and observed output b ≠ 0.]
The only revision is P ∪ {ab(g1) ← u}. It does not conclude node(b,1). In diagnosis applications (when fault models are considered), 3-valued revision is not enough.

2-valued Revision
In diagnosis one often wants the IC:
  ab(X) ∨ not ab(X) ←
– With these ICs (which are not denials), 3-valued revision is not enough.
A 2-valued revision is obtained by adding facts for revisables, in order to remove contradiction. For 2-valued revision the previous algorithm no longer works…

Example
In 2-valued revision:
– some removals must be deleted;
– the process must be iterated.
  ⊥ ← p.
  ⊥ ← a.
  ⊥ ← b, not c.
  p ← not a, not b.
The only support is {not a, not b}. Removals are {not a} and {not b}. But:
– P ∪ {a} is contradictory (and unrevisable);
– P ∪ {b} is contradictory (though revisable).

Algorithm for 2-valued revision
1. Let Revs = {{}}.
2. For every element R of Revs:
   – Add it to the program and compute the removal sets.
   – Remove R from Revs.
   – For each removal set RS: add R ∪ not RS to Revs (not RS being the set of complements of the default literals in RS).
3. Remove non-minimal sets from Revs.
4. Repeat 2 and 3 until reaching a fixed point of Revs.
The revisions are the elements of the final Revs. A sketch of this loop is shown below.
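A minimal Prolog sketch of the iteration, assuming a hypothetical, program-specific helper removal_sets(Assumed, RSs) that returns the removal sets of P ∪ Assumed (returning [] when P ∪ Assumed is unrevisable and [[]] when it is already consistent):

    % Illustrative sketch (not from the slides); removal_sets/2 is an assumed helper.
    :- use_module(library(lists)).
    :- use_module(library(apply)).

    revise_2valued(Revisions) :-
        iterate([[]], Revisions).

    iterate(Revs0, Revs) :-
        step(Revs0, Revs1),
        exclude(non_minimal(Revs1), Revs1, Revs2),     % drop non-minimal candidates
        ( Revs2 == Revs0 -> Revs = Revs0 ; iterate(Revs2, Revs) ).

    % For each R in Revs and each removal set RS of P U R,
    % keep R together with the complements of the literals in RS.
    step(Revs0, Revs) :-
        findall(NewR,
                ( member(R, Revs0),
                  removal_sets(R, RSs),
                  member(RS, RSs),
                  maplist(complement, RS, Facts),
                  append(R, Facts, NewR0),
                  sort(NewR0, NewR) ),
                Revs1),
        sort(Revs1, Revs).

    complement(not(A), A).

    non_minimal(Revs, R) :-
        member(R2, Revs), R2 \== R, subset(R2, R).

On the example above, with removal_sets([], [[not(a)],[not(b)]]), removal_sets([a], []), removal_sets([b], [[not(c)]]) and removal_sets([b,c], [[]]), the loop reproduces the run shown on the next slide and returns [[b,c]].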

Example of 2-valued revision
  ⊥ ← p.
  ⊥ ← a.
  ⊥ ← b, not c.
  p ← not a, not b.
Rev0 = {{}}
Choose {}. The removal sets of P ∪ {} are {not a} and {not b}. Add {a} and {b} to Revs.
Rev1 = {{a}, {b}}
Choose {a}. P ∪ {a} has no removal sets.
Rev2 = {{b}}
Choose {b}. The removal set of P ∪ {b} is {not c}. Add {b, c} to Revs.
Rev3 = {{b, c}}
Choose {b, c}. The removal set of P ∪ {b, c} is {}. Add {b, c} to Revs.
Rev4 = {{b, c}} = Rev3
The fixed point has been reached. P ∪ {b, c} is the only revision.

Revision and Diagnosis
In model-based diagnosis one has:
– a program P with the model of a system (the correct and, possibly, incorrect behaviours);
– a set of observations O inconsistent with P (or not explained by P).
The diagnoses of the system are the revisions of P ∪ O. This allows mixing consistency-based and explanation-based (abductive) diagnosis.

Diagnosis Example
[Circuit figure: NAND gates g10, g11, g16, g19, g22, g23; inputs c1 = c2 = c3 = c6 = c7 = 0; observed outputs 0 (at g22) and 1 (at g23).]

Diagnosis Program
Observables:
  obs(out(inpt0, c1), 0). obs(out(inpt0, c2), 0). obs(out(inpt0, c3), 0).
  obs(out(inpt0, c6), 0). obs(out(inpt0, c7), 0).
  obs(out(nand, g22), 0). obs(out(nand, g23), 1).
Predicted and observed values cannot be different:
  ⊥ ← obs(out(G, N), V1), val(out(G, N), V2), V1 ≠ V2.
Connections:
  conn(in(nand, g10, 1), out(inpt0, c1)).
  conn(in(nand, g10, 2), out(inpt0, c3)).
  …
  conn(in(nand, g23, 1), out(nand, g16)).
  conn(in(nand, g23, 2), out(nand, g19)).
Value propagation:
  val( in(T,N,Nr), V ) ← conn( in(T,N,Nr), out(T2,N2) ), val( out(T2,N2), V ).
  val( out(inpt0, N), V ) ← obs( out(inpt0, N), V ).
Normal behaviour:
  val( out(nand,N), V ) ← not ab(N), val( in(nand,N,1), W1), val( in(nand,N,2), W2), nand_table(W1,W2,V).
Abnormal behaviour:
  val( out(nand,N), V ) ← ab(N), val( in(nand,N,1), W1), val( in(nand,N,2), W2), and_table(W1,W2,V).

Diagnosis Example (cont.)
[Same circuit figure as above.]
The revisions are: {ab(g23)}, {ab(g19)}, and {ab(g16), ab(g22)}.

Revision and Debugging
Declarative debugging can be seen as diagnosis of a program. The components are:
– rule instances (that may be incorrect);
– predicate instances (that may be uncovered).
The (partial) intended meaning can be added as ICs. If the program with the ICs is contradictory, the revisions are the possible bugs.

Debugging Transformation
– Add to the body of each possibly incorrect rule r(X) the literal not incorrect(r(X)).
– For each possibly uncovered predicate p(X), add the rule: p(X) ← uncovered(p(X)).
– For each goal G that you don't want to prove, add: ⊥ ← G.
– For each goal G that you want to prove, add: ⊥ ← not G.

Debugging example
Program:
  a ← not b.
  b ← not c.
WFM = {not a, b, not c}, but b should be false.
Transformed program (the revisables are incorrect/1 and uncovered/1):
  a ← not b, not incorrect(a ← not b).
  b ← not c, not incorrect(b ← not c).
  a ← uncovered(a).
  b ← uncovered(b).
  c ← uncovered(c).
  ⊥ ← b.
The revisions are: {incorrect(b ← not c)} and {uncovered(c)}.
BUT a should be false! Add ⊥ ← a. The revisions now are:
  {inc(b ← not c), inc(a ← not b)} and {unc(c), inc(a ← not b)}.
BUT c should be true! Add ⊥ ← not c. The only revision is:
  {unc(c), inc(a ← not b)}.

Deduction, Abduction and Induction
In deductive reasoning one derives conclusions from rules and facts:
– From the fact that Socrates is a man and the rule that all men are mortal, conclude that Socrates is mortal.
In abductive reasoning, given an observation and a set of rules, one assumes (abduces) a justification explaining the observation:
– From the rule that all men are mortal and the observation that Socrates is mortal, assume that Socrates being a man is a possible justification.
In inductive reasoning, given facts and observations, one induces rules that synthesize the observations:
– From the facts that Socrates (and many others) are men, and the observation that all of them are mortal, induce that all men are mortal.

Deduction, Abduction and Induction
Deduction: an analytic process based on the application of general rules to particular cases, with inference of a result.
Induction: synthetic reasoning which infers the rule from the case and the result.
Abduction: synthetic reasoning which infers the (most likely) case given the rule and the result.

Abduction in logic
Given a theory T, a set of assumptions Ab (the abducibles), and an observation G (the abductive query), Δ is an abductive explanation (or solution) for G iff:
1. Δ ⊆ Ab
2. T ∪ Δ |= G
3. T ∪ Δ is consistent
Usually minimal abductive solutions are of special interest. For the notion of consistency, integrity constraints are in general also used (as in revision).

Abduction example
It has been observed that wobblyWheel. What are the abductive solutions for that, assuming that the abducibles are brokenSpokes, leakyValve and puncturedTube?
  wobblyWheel ← flatTyre.
  wobblyWheel ← brokenSpokes.
  flatTyre ← leakyValve.
  flatTyre ← puncturedTube.
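One way to compute such solutions is with a simple abductive meta-interpreter; the sketch below assumes the theory is stored as rule(Head, Body) facts, and the explain/3 predicate name is made up for this illustration:

    % Illustrative sketch (not from the slides).
    abducible(brokenSpokes).
    abducible(leakyValve).
    abducible(puncturedTube).

    % The wheel theory, represented as rule(Head, Body).
    rule(wobblyWheel, [flatTyre]).
    rule(wobblyWheel, [brokenSpokes]).
    rule(flatTyre, [leakyValve]).
    rule(flatTyre, [puncturedTube]).

    % explain(Goal, Delta0, Delta): prove Goal, extending the set of
    % assumed abducibles Delta0 to Delta.
    explain(G, Delta, Delta)     :- abducible(G), member(G, Delta).
    explain(G, Delta, [G|Delta]) :- abducible(G), \+ member(G, Delta).
    explain(G, Delta0, Delta)    :- rule(G, Body), explain_all(Body, Delta0, Delta).

    explain_all([], Delta, Delta).
    explain_all([G|Gs], Delta0, Delta) :-
        explain(G, Delta0, Delta1),
        explain_all(Gs, Delta1, Delta).

    % ?- explain(wobblyWheel, [], Delta).
    % Delta = [leakyValve] ;  Delta = [puncturedTube] ;  Delta = [brokenSpokes].

The three answers correspond to the minimal abductive solutions {leakyValve}, {puncturedTube} and {brokenSpokes} for the observation wobblyWheel.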

Applications
In diagnosis:
– Find explanations for the observed behaviour.
– The abducibles are the normality (or abnormality) of components, and also fault modes.
In view updates:
– Find extensional data changes that justify the intensional data change in the view.
– This can be further generalized to knowledge assimilation.

Abduction as Nonmonotonic Reasoning
If abductive explanations are understood as conclusions, the process of abduction is nonmonotonic. In fact, abduction may be used to encode various other forms of nonmonotonic logic; vice-versa, other nonmonotonic logics may be used to perform abductive reasoning.

Negation by Default as Abduction
Replace every not A by a new atom A*, and make the A* abducible. Add, for every A, the integrity constraints:
  A ∨ A* ←
  ⊥ ← A, A*
L is true in a stable model iff there is an abductive solution for the query L. Negation by default is thus viewed as hypotheses that can be assumed consistently.

Defaults as abduction
For each default rule d = A : B / C add the rule:
  C ← d(B), A
and the ICs:
  ¬d(B) ← ¬B
  ¬d(B) ← ¬C
Make all d(B) abducible.

Abduction and Stable Models
Abduction can be "simulated" with stable models. For each abducible A, add to the program:
  A ← not ¬A
  ¬A ← not A
To get abductive solutions for G, collect the abducibles that belong to stable models containing G: i.e. compute the stable models after also adding ← not G, and then collect the abducibles of each stable model.

Abduction and Stable Models (cont.)
The method suggested lacks means for capturing the relevance of the abductions made for really proving the query. Literals in the abductive solution may be there because they "help" prove the abductive query, or simply because they are needed for consistency, independently of the query. Using a combination of WFS and stable models may help in this matter.

Abduction as Revision
For abductive queries:
– Declare all the abducibles as revisables.
– If the abductive query is Q, add the IC: ⊥ ← not Q.
– The revisions of the program are the abductive solutions for Q.