Contradiction-tolerant TMS

Contradiction-tolerant TMS

The contradiction-tolerant TMS (CTMS) extends the original truth maintenance approach as follows:
- It makes it possible to represent and reason about knowledge that is both incomplete and uncertain.
- It makes it possible to explicitly maintain uncertainty during the reasoning process, similar to systems for handling uncertain knowledge.
- It makes it possible to handle different types of inconsistencies without trying to resolve them before the reasoning process is completed.

Contradiction-tolerant TMS: motivation

If we assume that human beliefs can be divided into two groups, one containing beliefs that will never change ("firm beliefs") and the other containing tentative beliefs, then to represent both types of beliefs we need two types of propositions:
- Certain propositions, which represent firm beliefs. They can be true or false, but we can eliminate false propositions by treating their negations as true propositions. We say that a proposition is true if the evidence associated with it confirms it.
- Uncertain (plausible) propositions, which represent tentative beliefs. Their plausibility (or "degree of belief") depends on the evidence associated with them. We can view unknown propositions as a special case of uncertain propositions that have no evidence whatsoever.

Contradiction-tolerant TMS: motivation (cont.)

Note that we cannot use uncertain propositions as premises of either monotonic or non-monotonic inference rules, where each premise is assumed to be either believed or disbelieved. One possible way to explicitly account for the uncertainty of the premises is to "relax" inference rules by dividing their premises into two groups:
- The first group consists of premises which, if true, constitute the minimal evidence for considering the rule's conclusion a plausible belief.
- The second group consists of premises which, together with the premises of the first group, form the complete evidence for the conclusion.

CTMS: motivation (cont.)

Thereby we get inference rules of the form

    (A1, ..., Ai)(Ai+1, ..., An) → B

which are interpreted as follows: "If A1, ..., Ai are true beliefs, then, in spite of the fact that Ai+1, ..., An are not known to be true, B is a plausible belief." Note that if Ai+1, ..., An are consistent assumptions, we get a semi-normal default rule:

    A1, ..., Ai : B, ¬Ai+1, ..., ¬An
    --------------------------------
                  B

However, contrary to default rules, CTMS-rules do not require any sort of consistency check, because the minimal evidence A1, ..., Ai is assumed to be enough to support B as a plausible belief, even if its negation is also a plausible, or even a true, belief.

CTMS contradictions

Note: a rule conclusion, B, and its negation, ¬B, may be supported by different sets of arguments, and in most cases there is no objective criterion by which these sets of arguments can be compared. However, if both B and ¬B happen to be true at the same time, we get a contradiction. This can happen, for example, if the system receives its knowledge (data and rules) from independent sources and there is no way to resolve the conflict at the meta-level before the contradictory rules or facts enter the system. Such contradictions are called objective contradictions. Their resolution is not trivial, because the system has no reason to prefer one of the contradictory beliefs over the other (as is the case with plausibility theory or JTMS). The only way for the system to cope with objective contradictions is to revise its knowledge appropriately, so as to make it compatible with the detected contradictions. But this should happen only after the reasoning process is completed and all the evidence relevant to the conflicting beliefs has been collected.

Logical specification of CTMS

To formally describe both certain and plausible beliefs, we need two different but closely related languages, L and Le. L consists of propositional constants A, B, C, ... (possibly subscripted) and a special negation operator, ¬, such that if A is a wff, then ¬A is a wff too. We do not assign any logical properties to this operator. The only function of L is to provide a set of wffs that might become elements of the system's set of beliefs.

Logical specification of CTMS (cont.)

The language Le consists of special objects called endorsed formulas, which we denote by α, β, γ, ... Endorsed formulas have the following form:

    α = A^LV : (T1,...,Tn)(P1,...,Pm)

where:
- A ∈ L is one of the system's beliefs; we refer to it as the head of the endorsed formula α.
- LV is the label, which shows the type of A. It can be: T, if A is a logically true belief; T*, if A is an evidentially true belief (i.e., true with respect to a given context); or P, if A is a plausible belief.
- T1,...,Tn ∈ L is a set of true arguments for A. We call it the T-set for A. The only restriction on the T-set is that ¬A must not belong to it.
- P1,...,Pm ∈ L is a set of beliefs upon which A depends, but which are not known to be true. We call them the P-set for A. If some of these beliefs later turn out to be true, they become additional evidence for A and are shifted to the corresponding T-set.
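The structure above can be sketched as a small data type. This is a minimal illustration, not part of the original formulation: the class name, field names, and the "~" prefix standing in for ¬ are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EndorsedFormula:
    head: str                       # A ∈ L, the belief itself
    label: str                      # "T", "T*", or "P"
    t_set: frozenset = frozenset()  # true arguments for the head
    p_set: frozenset = frozenset()  # supporting beliefs not known to be true

    def __post_init__(self):
        # the only restriction on the T-set: ¬A must not belong to it
        assert ("~" + self.head) not in self.t_set

# Tweety's flying ability as a plausible belief (a P-formula):
flies = EndorsedFormula("FliesTweety", "P",
                        frozenset({"BirdTweety"}),
                        frozenset({"~PenguinTweety", "~OstrichTweety"}))
```

A formula with an empty P-set and label T or T* would then model a T-formula, matching the distinction drawn on the next slide.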

Knowledge representation in CTMS

Depending on the label and the content of the T- and P-sets, we distinguish between:
- T-formulas. These have the label T or T* and empty P-sets, and represent logically or evidentially true beliefs. T-formulas with both T- and P-sets empty represent input data.
- P-formulas. These have the label P and, in general, non-empty P-sets, and represent plausible beliefs. P-formulas with both T- and P-sets empty represent beliefs that the CTMS knows nothing about.

Definition. Every finite set of endorsed formulas is called a set of beliefs (SB).

CTMS rules

CTMS inference rules are expressions of the following two types:
- (T1,...,Tn)() → A^T, called a T-rule. Here T1,...,Tn are called T-premises, and A is the head of the T-formula A^T : (T1,...,Tn)(), which is the conclusion of the T-rule.
- (T1,...,Tn)(P1,...,Pm) → A^P, called a P-rule. Here P1,...,Pm are called P-premises, and A is the head of the P-formula A^P : (T1,...,Tn)(P1,...,Pm), which is the conclusion of the P-rule.

T-rules represent deductive knowledge about the domain, while P-rules sanction a new belief once the minimal evidence for that belief has been established.
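A rule's applicability can be sketched as follows. This is an illustrative encoding, assuming a rule is a (t_premises, p_premises, head, label) tuple and SB is a set of (head, label) pairs; the check implements only the T-premise condition ("T-premises must be heads of T-formulas").

```python
def fires(rule, sb):
    """A rule may fire once every T-premise is the head of a
    T-formula (label T or T*) in the current set of beliefs."""
    t_premises, p_premises, head, label = rule
    t_heads = {h for (h, lab) in sb if lab in ("T", "T*")}
    return t_premises <= t_heads

# the T-rule (PenguinTweety)() → ¬FliesTweety^T fires once PenguinTweety is true
t_rule = (frozenset({"PenguinTweety"}), frozenset(), "~FliesTweety", "T")
sb = {("PenguinTweety", "T"), ("BirdTweety", "T")}
print(fires(t_rule, sb))  # prints True
```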

Examples of P-rules

Example 1. Consider the following rule:

    (BirdTweety)(¬PenguinTweety, ¬OstrichTweety) → FliesTweety^P

If BirdTweety is true, we can derive FliesTweety, even though we do not know whether Tweety is a penguin or an ostrich.

Example 2. Consider the rule:

    (PositiveResultOfTheBodyScanner)(Headache, Neurosis, MentalDisturbances) → DiagnoseBrainTumor^P

This rule yields DiagnoseBrainTumor as a possible diagnosis if PositiveResultOfTheBodyScanner is true, even though additional symptoms such as headache, neurosis and mental disturbances, which usually accompany the disease, are not observed in the particular case.

If any (or several, or in the extreme case all) of the P-premises of a P-rule become true, the degree of belief in the rule's conclusion should increase. In such cases, so-called duplicate rules are used instead of the original P-rules.

Definition. Let (T1,...,Tn)(P1,...,Pm) → A^P be a P-rule. Then:
- (T1,...,Tn, P1,...,Pm)() → A^T* is also an inference rule, called the T-duplicate.
- For any {i1,...,ik} ⊆ {1,...,m}, (T1,...,Tn, Pi1,...,Pik)({P1,...,Pm} \ {Pi1,...,Pik}) → A^P is also an inference rule. Such rules are called P-duplicates.

T- and P-duplicates form a set of plausible reasoning patterns that provide different degrees of plausibility for the conclusion of the original P-rule, depending on the current evidence about it. The T-duplicate captures the case in which all elements of the P-set are heads of T-formulas from SB. Clearly, the conclusion of such a rule should also be a true formula according to the evidence, which was supposed to be the only evidence relevant to it.

Definition. A pair <SB, R>, where SB is a set of beliefs and R is a set of inference rules, is called a CTMS-theory.
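The duplicates of a P-rule can be enumerated mechanically. The sketch below assumes the tuple encoding (t_set, p_set, head, label) and generates the T-duplicate plus the P-duplicates for every proper non-empty subset of the P-set (the full-subset case being the T-duplicate itself); this reading of the definition is an assumption.

```python
from itertools import combinations

def duplicates(t_set, p_set, head):
    """All T- and P-duplicates of the P-rule (t_set)(p_set) → head^P."""
    rules = []
    # T-duplicate: the whole P-set is shifted to the T-set, label T*
    rules.append((t_set | p_set, frozenset(), head, "T*"))
    # P-duplicates: shift each proper non-empty subset of the P-set
    for k in range(1, len(p_set)):
        for moved in combinations(sorted(p_set), k):
            rules.append((t_set | set(moved), p_set - set(moved), head, "P"))
    return rules

ds = duplicates(frozenset({"BirdTweety"}),
                frozenset({"~PenguinTweety", "~OstrichTweety"}),
                "FliesTweety")
# a two-element P-set yields one T-duplicate and two P-duplicates
```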

Consider the rule

    (BirdTweety)(¬PenguinTweety, ¬OstrichTweety) → FliesTweety^P

If we learn that Tweety is not a penguin, our belief in Tweety's flying abilities should increase. If we also learn that Tweety is not an ostrich, then according to this rule we must conclude that Tweety does fly. But note that Tweety might also be an emu, a non-flying bird we did not know about, which is therefore not included in the rule's P-set. To minimize the negative effect of such ignorance, we must distinguish between:
- T-formulas inferred by T-rules, which are logically true. They are marked with the label T.
- T-formulas inferred by T-duplicates, which are evidentially true. They are marked with the label T*.

Representing contradictions in CTMS

Definition. Let <SB, R> be a CTMS-theory, α = A^T : (T1,...,Tn)() ∈ SB and β = ¬A^T : (Tl,...,Tm)() ∈ SB. The set {α, β} is called a conflicting set, and it identifies a (logical) contradiction between the two beliefs A and ¬A. Each contradiction is described as an endorsed formula of the form C_X^P : (X, ¬X)(C_X), where:
- the T-set consists of the heads of the conflicting endorsed formulas,
- the P-set contains the contradiction itself.

Definition. The set of endorsed formulas describing contradictions is called the contradiction-set and is denoted by CS.

Note: in the same way we can describe contradictions between semantically incompatible beliefs. That is, if A and B are incompatible beliefs and we want to declare that they can never hold together, we enter the following endorsed formula: C_AB^P : (A, B)(C_AB). CTMS maintains contradictions in exactly the same way as all other data.
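Detecting conflicting sets amounts to scanning SB for a literal and its negation that are both heads of T-formulas. A minimal sketch, again assuming "~" encodes ¬, (head, label) pairs for beliefs, and (head, label, t_set, p_set) tuples for the generated contradiction formulas:

```python
def contradiction_set(sb):
    """Build C_A^P : (A, ~A)(C_A) for every conflicting pair in SB."""
    t_heads = {h for (h, lab) in sb if lab in ("T", "T*")}
    cs = set()
    for a in t_heads:
        # record each conflict once, keyed by the positive literal
        if not a.startswith("~") and ("~" + a) in t_heads:
            cs.add(("C_" + a, "P",
                    frozenset({a, "~" + a}),    # T-set: the conflicting heads
                    frozenset({"C_" + a})))     # P-set: the contradiction itself
    return cs

sb = {("FliesTweety", "T"), ("~FliesTweety", "T"), ("BirdTweety", "T")}
# exactly one conflicting set: {FliesTweety^T, ~FliesTweety^T}
```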

Rule revision

To maintain consistency of the belief set in the presence of a contradiction, CTMS revises the inference rules whose T-premises depend on the contradictory belief as follows:

- If ((T1,...,Tn)() → A^T) ∈ R, and for some j (j = 1,...,n) both Tj and ¬Tj are heads of T-formulas from SB, then

    rev((T1,...,Tn)() → A^T) = (T1,...,Tn)(C_Tj) → A^P

  where C_Tj^P : (Tj, ¬Tj)(C_Tj). That is, the original T-rule is transformed into a P-rule which, in turn, changes the status of its conclusion from T-formula to P-formula. Also, adding the head of the contradiction C_Tj^P : (Tj, ¬Tj)(C_Tj) to the P-set of the revised rule points out the culprit for that contradiction.

- If ((T1,...,Tn)(P1,...,Pm) → A^P) ∈ R, and for some j (j = 1,...,n) both Tj and ¬Tj are heads of T-formulas from SB, then

    rev((T1,...,Tn)(P1,...,Pm) → A^P) = (T1,...,Tn)(P1,...,Pm, C_Tj) → A^P

  where C_Tj^P : (Tj, ¬Tj)(C_Tj).
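Both cases of rev() can be sketched in one function: a rule whose T-premise is contradicted becomes a P-rule with the contradiction head C_Tj added to its P-set. Tuple layout and naming are illustrative assumptions carried over from the earlier sketches.

```python
def revise(rule, contradicted):
    """rev(): demote a rule whose T-premises include a contradicted belief.

    `contradicted` is the set of heads Tj for which both Tj and ~Tj are
    heads of T-formulas in SB."""
    t_set, p_set, head, label = rule
    culprits = {"C_" + t for t in t_set if t in contradicted}
    if not culprits:
        return rule  # no affected T-premise: the rule is unchanged
    # T-rule → P-rule (first case); a P-rule stays a P-rule (second case);
    # either way the culprit heads C_Tj join the P-set
    return (t_set, p_set | culprits, head, "P")

t_rule = (frozenset({"PenguinTweety"}), frozenset(), "~FliesTweety", "T")
revised = revise(t_rule, {"PenguinTweety"})
# the T-rule is demoted to a P-rule whose P-set names the culprit C_PenguinTweety
```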

Example

Consider the P-rule

    (BirdTweety)(¬PenguinTweety, ¬OstrichTweety) → FliesTweety^P

and assume PenguinTweety. Since penguins do not fly, the following T-rule should belong to the set of rules:

    (PenguinTweety)() → ¬FliesTweety^T

According to this rule, CTMS will derive ¬FliesTweety^T : (PenguinTweety)(). On the other hand, according to the P-rule above, CTMS will derive FliesTweety^P : (BirdTweety)(¬PenguinTweety, ¬OstrichTweety).

Note: there is no contradiction between the two derived formulas, because a contradiction occurs only when both a formula and its negation are heads of T-formulas from SB. Also, resolving the conflict between FliesTweety^P and ¬FliesTweety^T is easy, because CTMS always prefers a T-formula over a P-formula.

Stable extensions of CTMS-theories

Given a CTMS-theory <SB, R>, CTMS is intended to construct an extended theory <SB+, rev(R)> containing all beliefs from SB, all beliefs that follow from <SB, R>, and all contradictions detected during the inference process.

Definition. Let <SB, R> be a CTMS-theory and α = A^LV : (T1,...,Tn)(P1,...,Pm) be an endorsed formula. We say that α follows from <SB, R> if and only if there exists a rule (T1,...,Tn)(P1,...,Pm) → A^LV ∈ R such that T1,...,Tn are heads of T-formulas from SB and P1,...,Pm are heads of P-formulas from SB. We denote the set of all endorsed formulas that follow from <SB, R> by Cons(SB, R), and the new (extended) set of beliefs by SB+.

Definition. Let <SB, R> be a CTMS-theory. The set SB+ is called the stable extension of <SB, R> if the following two conditions are satisfied:
- SB+ does not contain new conflicting sets, i.e. all contradictions in SB+ have been inherited from SB.
- SB+ is closed with respect to the rules of R, i.e. for every rule (T1,...,Tn)(P1,...,Pm) → A^LV ∈ R such that T1,...,Tn are heads of T-formulas from SB and P1,...,Pm are heads of P-formulas from SB, α = A^LV : (T1,...,Tn)(P1,...,Pm) ∈ SB+.
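Cons(SB, R) under the definition above can be sketched directly: a formula follows when the rule's T-premises are heads of T-formulas in SB and its P-premises are heads of P-formulas in SB. Data layouts are illustrative; note that the unknown exceptions appear in SB as P-formulas with empty T- and P-sets, as the knowledge-representation slide allows.

```python
def cons(sb, rules):
    """Cons(SB, R): all endorsed formulas that follow from <SB, R>."""
    t_heads = {h for (h, lab, *_rest) in sb if lab in ("T", "T*")}
    p_heads = {h for (h, lab, *_rest) in sb if lab == "P"}
    out = set()
    for t_set, p_set, head, lab in rules:
        if t_set <= t_heads and p_set <= p_heads:
            out.add((head, lab, t_set, p_set))
    return out

# BirdTweety is true; the penguin/ostrich exceptions are unknown P-formulas
sb = {("BirdTweety", "T", frozenset(), frozenset()),
      ("~PenguinTweety", "P", frozenset(), frozenset()),
      ("~OstrichTweety", "P", frozenset(), frozenset())}
rules = [(frozenset({"BirdTweety"}),
          frozenset({"~PenguinTweety", "~OstrichTweety"}),
          "FliesTweety", "P")]
# cons(sb, rules) yields the single P-formula for FliesTweety
```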

CTMS algorithm

Let SBi, CSi and Ri be the current set of beliefs, contradiction-set, and set of inference rules, respectively. The stable extension of the CTMS-theory <SB, R> is constructed by the following inductive procedure.

Basic step:
- Actualization of the set of beliefs: SB0 = SB.
- Actualization of the contradiction-set: for every j such that A_j^T : (T1,...,Tn)() ∈ SB0 and ¬A_j^T : (Tl,...,Tm)() ∈ SB0, add C_Aj^P : (Aj, ¬Aj)(C_Aj) to CS0.
- Revision of the affected rules: if CS0 = { }, then R0 = R; if CS0 ≠ { }, then R0 = rev(R).

Inductive step:
- Actualization of the set of beliefs: SBi = SBi-1 ∪ Cons(SBi-1, Ri-1) ∪ CSi-1.

- Actualization of the contradiction-set: for every j such that A_j^T : (T1,...,Tn)() ∈ SBi and ¬A_j^T : (Tl,...,Tm)() ∈ SBi, but C_Aj^P : (Aj, ¬Aj)(C_Aj) ∉ CSi-1, add C_Aj^P : (Aj, ¬Aj)(C_Aj) to CSi.
- Revision of the affected rules: if CSi = { }, then Ri = Ri-1; if CSi ≠ { }, then Ri = rev(Ri-1).
- Revision of the affected beliefs: if there is a formula A^LV : (T1,...,Tn)(P1,...,Pm) ∈ SBi such that for some j (j = 1,...,n) C_Tj^P : (Tj, ¬Tj)(C_Tj) ∈ CSi, then:
  - if it is a T-formula, rev(A^T : (T1,...,Tn)()) = A^P : (T1,...,Tn)(C_Tj);
  - if it is a P-formula, rev(A^P : (T1,...,Tn)(P1,...,Pm)) = A^P : (T1,...,Tn)(P1,...,Pm, C_Tj).
- Check whether the stable extension has been reached. If yes, stop.

Every CTMS-theory <SB, R> has a unique stable extension if the initial sets of beliefs and rules are finite.
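The core of the procedure, growing SB until it is closed under the rules, can be sketched as a fixed-point loop. For brevity this sketch omits the contradiction-detection and revision steps, so it covers only the contradiction-free case; layouts and names are illustrative as before.

```python
def stable_extension(sb, rules):
    """Iterate SBi = SBi-1 ∪ Cons(SBi-1, R) until SB stops growing."""
    sb = set(sb)
    while True:
        t_heads = {h for (h, lab, *_rest) in sb if lab in ("T", "T*")}
        p_heads = {h for (h, lab, *_rest) in sb if lab == "P"}
        new = {(head, lab, t, p) for (t, p, head, lab) in rules
               if t <= t_heads and p <= p_heads}
        if new <= sb:
            return sb  # closed under R: the stable extension is reached
        sb |= new

sb0 = {("BirdTweety", "T", frozenset(), frozenset()),
       ("~PenguinTweety", "P", frozenset(), frozenset()),
       ("~OstrichTweety", "P", frozenset(), frozenset())}
rules = [(frozenset({"BirdTweety"}),
          frozenset({"~PenguinTweety", "~OstrichTweety"}),
          "FliesTweety", "P")]
ext = stable_extension(sb0, rules)
# ext extends sb0 with the plausible belief FliesTweety^P
```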

More plausible vs. less plausible beliefs

The stable extension of a CTMS-theory may contain endorsed formulas with the same heads but different T- and P-sets, inferred by different T- and P-duplicates of P-rules, as well as formulas whose heads are contradictory literals. We call such formulas competitive. To determine which of the competitive endorsed formulas are the most plausible conclusions of a given CTMS-theory, we introduce an order among competitive endorsed formulas by means of the following rules:
- If α = A^T* : (T1,...,Tn)(), β = A^T : (T1',...,Ti')() and {T1,...,Tn} ⊂ {T1',...,Ti'}, then α < β. If the T-sets of α and β are incomparable, then α = β.
- If α = A^P : (T1,...,Tn)(P1,...,Pm), β = A^P : (T1',...,Ti')(P1',...,Pj') and {T1,...,Tn} ⊂ {T1',...,Ti'}, then α < β. If the T-sets of α and β are incomparable, then α = β.
- If α = A^P : (T1,...,Tn)(P1,...,Pm) and β = A^T : (T1,...,Ti)(), then α < β.
- If α = ¬A^P : (T1,...,Tn)(P1,...,Pm), β = A^T : (T1,...,Ti)() and some element of the P-set of α is also an element of the T-set of β, then α < β.
- If α = ¬A^T : (T1,...,Tn)() and β = A^T : (Tl,...,Ts)(), then α = β.

Each set of competitive endorsed formulas has one or more maximal elements with respect to this order. These maximal elements are the most plausible conclusions of the CTMS-theory.
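The second rule, comparing two competitive P-formulas with the same head by strict T-set inclusion, can be sketched as follows. This covers only that one rule of the order; the formula layout and the None return for equally plausible formulas are assumptions.

```python
def more_plausible(alpha, beta):
    """Compare two competitive P-formulas (head, 'P', t_set, p_set).

    Returns the more plausible formula, or None when their T-sets are
    incomparable (equally plausible)."""
    if alpha[2] < beta[2]:   # strict subset: beta rests on more true evidence
        return beta
    if beta[2] < alpha[2]:
        return alpha
    return None

# two competitive conclusions of the Tweety P-rule and its P-duplicate
weak = ("FliesTweety", "P",
        frozenset({"BirdTweety"}),
        frozenset({"~PenguinTweety", "~OstrichTweety"}))
strong = ("FliesTweety", "P",
          frozenset({"BirdTweety", "~PenguinTweety"}),
          frozenset({"~OstrichTweety"}))
# the duplicate's conclusion, backed by the larger T-set, is preferred
```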