Cooperating Intelligent Systems Logical agents Chapter 7, AIMA This presentation owes some to V. Pavlovic (Rutgers) and D. Byron (OSU)

Motivation for knowledge representation The search programs so far have been ”special purpose” – we had to code everything about the problem into them. We need something more general, where it suffices to tell the agent the rules of the game.

The Wumpus World

Start position = (1,1) Always safe

The Wumpus World Goal: Get the gold

The Wumpus World Problem 1: Big, hairy, smelly, dangerous Wumpus. Will eat you if you run into it, but you can smell it a block away. You have one (1) arrow for shooting it.

The Wumpus World Problem 2: Big, bottomless pits where you fall down. You can feel the breeze when you are near them.

The Wumpus World – PEAS description
Performance measure: +1000 for the gold, -1000 for being eaten or falling down a pit, -1 for each action, -10 for using the arrow
Environment: 4×4 grid of ”rooms”; each ”room” can be empty, contain gold, be occupied by the Wumpus, or contain a pit
Actuators: move forward, turn left 90°, turn right 90°, grab, shoot
Sensors: olfactory – stench from the Wumpus; touch – breeze (pits) & hardness (walls); vision – see gold; auditory – hear the Wumpus scream when killed

Exploring the Wumpus world [grid figure: agent A in the start square, neighboring squares marked ok] Agent senses nothing (no breeze, no smell, ...). Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure: agent A has moved; squares marked B (breeze) and P? (possible pit)] Agent feels a breeze. Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure: squares marked S (stench) and W? (possible Wumpus)] Agent feels a foul smell. Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure: more squares marked ok] Wumpus can’t be here since there was no smell there... Pit can’t be there since there was no breeze here... Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure: the pit (P) and the Wumpus (W) have now been located; agent A in a new square marked ok] Agent senses nothing (no breeze, no smell, ...). Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure: agent A in a square marked B S G] Agent senses breeze, smell, and sees gold! Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure] Grab the gold and get out! Slide adapted from V. Pavlovic

Exploring the Wumpus world [grid figure, as before] Wumpus can’t be here since there was no smell there... Pit can’t be there since there was no breeze here... How do we automate this kind of reasoning? (How can we make these inferences automatically?) Slide adapted from V. Pavlovic

Logic Logic is a formal language for representing information such that conclusions can be drawn. A logic has
– Syntax, which specifies the symbols in the language and how they can be combined to form sentences
– Semantics, which specifies what facts in the world a sentence refers to; it assigns truth values to sentences based on their meaning in the world
– An inference procedure, a mechanical method for computing (deriving) new (true) sentences from existing sentences

Entailment A ⊨ B: the sentence A entails the sentence B. If A is true, then B must also be true; B is a ”logical consequence” of A. Let’s explore this concept a bit...

Example: Wumpus entailment Agent’s knowledge base (KB) after having visited (1,1) and (1,2): 1) The rules of the game (PEAS) 2) Nothing in (1,1) 3) Breeze in (1,2) Which models (states of the world) match these observations?

Example: Wumpus entailment We only care about the neighboring rooms, i.e. {(2,1),(2,2),(1,3)}; we can’t know anything about the other rooms. We care about pits because we have detected a breeze, and we don’t want to fall down a pit. There are 2³ = 8 possible arrangements of {pit, no pit} in the three neighboring rooms. Possible conclusions: α1: There is no pit in (2,1). α2: There is no pit in (2,2). α3: There is no pit in (1,3).

The eight possible situations...

The eight possible situations... let’s explore this conclusion: α1: There is no pit in (2,1).

α1: There is no pit in (2,1). KB = the set of models that agree with the knowledge base (the observed facts) [the KB is true in these models]. α1 = the set of models that agree with conclusion α1 [conclusion α1 is true in these models]. If KB is true, then α1 is also true: KB entails α1. KB ⊨ α1

α2: There is no pit in (2,2). KB = the set of models that agree with the knowledge base (the observed facts). α2 = the set of models that agree with conclusion α2. KB is true in some models where α2 is false, so KB does not entail α2. KB ⊭ α2

α3: There is no pit in (1,3). KB = the set of models that agree with the knowledge base (the observed facts). α3 = the set of models that agree with conclusion α3. Does KB entail α3?

α3: There is no pit in (1,3). KB = the set of models that agree with the knowledge base (the observed facts). α3 = the set of models that agree with conclusion α3. KB is true in some models where α3 is false, so KB does not entail α3. KB ⊭ α3
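
As an aside (not from the original slides): this check can be run mechanically. A minimal Python sketch, where the KB encoding and all names are my own, that enumerates the 2³ = 8 pit arrangements:

```python
from itertools import product

# Pits in the three unvisited neighbor squares: (2,1), (2,2), (1,3).
def kb_holds(p21, p22, p13):
    # No breeze in (1,1): no pit in (2,1).
    # Breeze in (1,2): a pit in (2,2) or (1,3) (square (1,1) is known empty).
    return (not p21) and (p22 or p13)

conclusions = {
    "alpha1: no pit in (2,1)": lambda p21, p22, p13: not p21,
    "alpha2: no pit in (2,2)": lambda p21, p22, p13: not p22,
    "alpha3: no pit in (1,3)": lambda p21, p22, p13: not p13,
}

models = [m for m in product([False, True], repeat=3) if kb_holds(*m)]
print(f"{len(models)} of 8 arrangements agree with the KB")
for name, alpha in conclusions.items():
    verdict = "entailed" if all(alpha(*m) for m in models) else "NOT entailed"
    print(name, "->", verdict)
```

Running it reports 3 of the 8 models agreeing with the KB, with only α1 entailed, matching the discussion above.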

Inference engine We need an algorithm (a method) that automatically produces the entailed conclusions. We will call this an ”inference engine”

Inference engine KB ⊢i A: ”A is derived from KB by inference engine i”. Truth-preserving (sound): i only derives entailed sentences. Complete: i derives all entailed sentences. We want inference engines that are both truth-preserving and complete.

Propositional (Boolean) logic: Syntax Atomic sentence = a single propositional symbol, e.g. P, Q, P13, W31, G32, T, F (P13 = pit in room (1,3), W31 = Wumpus in room (3,1)). Complex sentence = combination of simple sentences using connectives: ¬ (not) negation, ∧ (and) conjunction, ∨ (or) disjunction, ⇒ (implies) implication, ⇔ (iff = if and only if) biconditional. Examples: P13 ∧ W31, W31 ⇒ S32, W31 ∨ ¬W31. Precedence: ¬, ∧, ∨, ⇒, ⇔.

Propositional (Boolean) logic: Semantics The rules for determining whether a sentence is true or false. Atomic sentences: T (true) is true in every model and F (false) is false in every model; the truth values for the other proposition symbols are specified in the model. Truth values for complex sentences are specified in a truth table.

Boolean truth table
P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q
False  False
False  True
True   False
True   True
Please complete this table...

Boolean truth table
P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q
False  False  True   False  False  True   True
False  True   True   False  True   True   False
True   False  False  False  True   False  False
True   True   False  True   True   True   True
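
For reference, a small Python sketch (mine, not the slides’) that computes exactly this table; the ASCII names ~, &, |, =>, <=> stand in for ¬, ∧, ∨, ⇒, ⇔:

```python
from itertools import product

# The five connectives as Python functions of two booleans.
ops = [
    ("~P",    lambda p, q: not p),
    ("P&Q",   lambda p, q: p and q),
    ("P|Q",   lambda p, q: p or q),
    ("P=>Q",  lambda p, q: (not p) or q),  # false only when P is true and Q is false
    ("P<=>Q", lambda p, q: p == q),        # true when the truth values match
]

header = ["P", "Q"] + [name for name, _ in ops]
print("".join(h.ljust(7) for h in header))
for p, q in product([False, True], repeat=2):
    row = [p, q] + [f(p, q) for _, f in ops]
    print("".join(str(v).ljust(7) for v in row))
```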

Boolean truth table ¬P is the opposite of P (see the ¬P column above).

Boolean truth table P ∧ Q is true only when both P and Q are true.

Boolean truth table P ∨ Q is true when either P or Q (or both) is true.

Boolean truth table P ⇒ Q: if P is true then we claim that Q is true; otherwise we make no claim.

Boolean truth table P ⇔ Q is true when the truth values of P and Q are identical.

Boolean truth table
P      Q      P⊕Q
False  False  False
False  True   True
True   False  True
True   True   False
The exclusive or (XOR) is different from the (inclusive) OR.

Graphical illustration of truth tables [figure: each connective P∧Q, P∨Q, P⇒Q, P⇔Q, P⊕Q drawn as a 2×2 grid over the four (P,Q) combinations, e.g. (1,0) = P is true, Q is false; black means true, white means false]. Note that P ⊕ Q ≡ ¬(P ⇔ Q).

Example: Wumpus KB Knowledge base: 1. Nothing in (1,1) 2. Breeze in (1,2). R1: ¬P11; R2: ¬B11; R3: ¬W11; R4: ¬S11; R5: ¬G11; R6: B12; R7: ¬P12; R8: ¬S12; R9: ¬W12; R10: ¬G12. KB = R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 ∧ R6 ∧ R7 ∧ R8 ∧ R9 ∧ R10, plus the rules of the game. Interesting sentences: those that tell us what is in the neighboring squares.

Example: Wumpus KB Knowledge base: 1. Nothing in (1,1) 2. Breeze in (1,2). R1: ¬P11; R2: ¬B11 ⇔ ¬(P21 ∨ P12); R3: ¬W11; R4: ¬S11 ⇔ ¬(W21 ∨ W12); R5: ¬G11; R6: B12 ⇔ (P11 ∨ P22 ∨ P13); R7: ¬P12; R8: ¬S12 ⇔ ¬(W11 ∨ W21 ∨ W13); R9: ¬W12 (already in R4); R10: ¬G12. KB = R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 ∧ R6 ∧ R7 ∧ R8 ∧ R9 ∧ R10. The biconditionals we infer from the rules of the game.

Inference by enumerating models What is in squares (1,3), (2,1), and (2,2)? We have 6 interesting sentences: W21, W22, W13, P21, P22, P13, giving 2⁶ = 64 combinations. [table: one row per truth assignment to the interesting sentences, with columns marking whether R2, R4, R6, R8 hold; the rows where the whole KB is true are highlighted]

Inference by enumerating models What is in squares (1,3), (2,1), and (2,2)? [same enumeration table, KB-true rows highlighted] What do we deduce from this?

Inference by enumerating models What is in squares (1,3), (2,1), and (2,2)? [same enumeration table] In every row where the KB is true, W21, W22, W13 and P21 are all false: KB ⊨ ¬W21 ∧ ¬W22 ∧ ¬W13 ∧ ¬P21

Inference by enumerating models Implement as a depth-first search on a constraint graph (backtracking). Time complexity ~ O(2ⁿ), where n is the number of relevant proposition symbols. Space complexity ~ O(n). Not very efficient...
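
A compact Python sketch of this depth-first enumeration (AIMA’s TT-Entails; representing sentences as functions from a model dict to a boolean is my own choice, not the slides’):

```python
def tt_entails(kb, alpha, symbols, model=None):
    """Depth-first enumeration of all models: O(2^n) time, O(n) space.
    kb and alpha are functions from a model dict to bool."""
    if model is None:
        model = {}
    if not symbols:
        # Model is complete: alpha must hold wherever the KB holds.
        return alpha(model) if kb(model) else True
    first, rest = symbols[0], symbols[1:]
    return (tt_entails(kb, alpha, rest, {**model, first: True}) and
            tt_entails(kb, alpha, rest, {**model, first: False}))

# Tiny usage example: (P and Q) entails P.
print(tt_entails(lambda m: m["P"] and m["Q"], lambda m: m["P"], ["P", "Q"]))  # True
```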

Inference methods 1. Model checking (enumerating models) – just seen. 2. Using rules of inference – coming next.

Some definitions Equivalence: A ≡ B iff A ⊨ B and B ⊨ A. Validity: a valid sentence is true in all models (a tautology); A ⊨ B iff (A ⇒ B) is valid. Satisfiability: a sentence is satisfiable if it is true in some model; A ⊨ B iff (A ∧ ¬B) is unsatisfiable. Let’s explore satisfiability first...

KB = the set of models that agree with the knowledge base (the observed facts) [the KB is true in these models]. α1 = the set of models that agree with conclusion α1 [conclusion α1 is true in these models]. If KB is true, then α1 is also true: KB entails α1, KB ⊨ α1. Equivalently, KB ⊆ α1, i.e. KB ∧ ¬α1 is never true.

Some definitions Equivalence: A ≡ B iff A ⊨ B and B ⊨ A. Validity: a valid sentence is true in all models (a tautology); A ⊨ B iff (A ⇒ B) is valid. Satisfiability: a sentence is satisfiable if it is true in some model; A ⊨ B iff (A ∧ ¬B) is unsatisfiable.
A      B      A⇒B    A∧¬B
False  False  True   False
False  True   True   False
True   False  False  True
True   True   True   False

Some definitions Equivalence, A ≡ B iff A ⊨ B and B ⊨ A: A ⊨ B means that the set of models where A is true is a subset of the models where B is true, A ⊆ B; B ⊨ A means that the set of models where B is true is a subset of the models where A is true, B ⊆ A. Therefore, the set of models where A is true must be equal to the set of models where B is true: A ≡ B. [Venn diagram]

Some definitions Validity, A ⊨ B iff (A ⇒ B) is valid:
A      B      A⇒B
False  False  True
False  True   True
True   False  False
True   True   True
This is the sense in which KB ⊨ α1.

Logical equivalences
(A ∧ B) ≡ (B ∧ A) – ∧ is commutative
(A ∨ B) ≡ (B ∨ A) – ∨ is commutative
((A ∧ B) ∧ C) ≡ (A ∧ (B ∧ C)) – ∧ is associative
((A ∨ B) ∨ C) ≡ (A ∨ (B ∨ C)) – ∨ is associative
¬(¬A) ≡ A – double-negation elimination
(A ⇒ B) ≡ (¬B ⇒ ¬A) – contraposition
(A ⇒ B) ≡ (¬A ∨ B) – implication elimination
(A ⇔ B) ≡ ((A ⇒ B) ∧ (B ⇒ A)) – biconditional elimination
¬(A ∧ B) ≡ (¬A ∨ ¬B) – ”De Morgan”
¬(A ∨ B) ≡ (¬A ∧ ¬B) – ”De Morgan”
(A ∧ (B ∨ C)) ≡ ((A ∧ B) ∨ (A ∧ C)) – distributivity of ∧ over ∨
(A ∨ (B ∧ C)) ≡ ((A ∨ B) ∧ (A ∨ C)) – distributivity of ∨ over ∧
Work out these on paper for yourself, before we move on...
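
They can also be verified mechanically by exhaustive truth-table checking; a short Python sketch (the helper names are mine, for illustration):

```python
from itertools import product

def equivalent(f, g, n):
    """Two formulas (as boolean functions) are equivalent iff they
    agree on all 2^n assignments to their variables."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

implies = lambda a, b: (not a) or b

# Contraposition: (A => B) == (~B => ~A)
print(equivalent(lambda a, b: implies(a, b),
                 lambda a, b: implies(not b, not a), 2))       # True
# De Morgan: ~(A & B) == (~A | ~B)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))          # True
# Distributivity: A & (B | C) == (A & B) | (A & C)
print(equivalent(lambda a, b, c: a and (b or c),
                 lambda a, b, c: (a and b) or (a and c), 3))   # True
```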

Inference rules Inference rules are written as a fraction, antecedent over consequent: if the KB contains the antecedent, you can add the consequent (the KB entails the consequent). Slide adapted from D. Byron

Commonly used inference rules
Modus Ponens: from A ⇒ B and A, infer B
Modus Tollens: from A ⇒ B and ¬B, infer ¬A
Unit resolution: from A ∨ B and ¬B, infer A
And elimination: from A ∧ B, infer A
Or introduction: from A, infer A ∨ B
And introduction: from A and B, infer A ∧ B
Work out these on paper for yourself too. Slide adapted from D. Byron

Example: Proof in Wumpus KB Knowledge base: 1. Nothing in (1,1). R1: ¬P11; R2: ¬B11; R3: ¬W11; R4: ¬S11; R5: ¬G11

Proof in Wumpus KB
B11 ⇔ (P12 ∨ P21) – rule of the game
(B11 ⇒ (P12 ∨ P21)) ∧ ((P12 ∨ P21) ⇒ B11) – biconditional elimination: (A ⇔ B) ≡ ((A ⇒ B) ∧ (B ⇒ A))
(P12 ∨ P21) ⇒ B11 – and elimination
¬B11 ⇒ ¬(P12 ∨ P21) – contraposition: (A ⇒ B) ≡ (¬B ⇒ ¬A)
¬B11 ⇒ (¬P12 ∧ ¬P21) – ”De Morgan”: ¬(A ∨ B) ≡ (¬A ∧ ¬B)

Proof in Wumpus KB Thus we have proved, in four steps, that no breeze in (1,1) means there can be no pit in either (1,2) or (2,1). The machine can come to this conclusion all by itself if we give it the rules of the game. More efficient than enumerating models.
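
To see a machine confirm the same conclusion, here is one possible check (a sketch using the sympy library, assuming it is installed; this is my illustration, not part of the slides):

```python
from sympy import symbols
from sympy.logic.boolalg import And, Equivalent, Implies, Not
from sympy.logic.inference import satisfiable

B11, P12, P21 = symbols("B11 P12 P21")

rule = Equivalent(B11, P12 | P21)                      # rule of the game
goal = Implies(Not(B11), And(Not(P12), Not(P21)))      # what we proved by hand

# rule entails goal iff (rule & ~goal) is unsatisfiable.
print(satisfiable(And(rule, Not(goal))))               # False -> goal is entailed
```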

The Resolution rule An inference algorithm is guaranteed to be complete if it uses the resolution rule.
Unit resolution: from (A ∨ B) and ¬B, infer A.
Full resolution: from (A ∨ B) and (¬B ∨ C), infer (A ∨ C).
A clause = a disjunction (∨) of literals; a literal is a proposition symbol or its negation.
Note: the resulting clause should only contain one copy of each literal.

Resolution truth table – proof for the resolution rule ((A ∨ B) ∧ (¬B ∨ C)) ⇒ (A ∨ C):
A      B      C      ¬B     A∨B    ¬B∨C   A∨C
False  False  False  True   False  True   False
False  False  True   True   False  True   True
False  True   False  False  True   False  False
False  True   True   False  True   True   True
True   False  False  True   True   True   True
True   False  True   True   True   True   True
True   True   False  False  True   False  True
True   True   True   False  True   True   True
In every row where both premises A∨B and ¬B∨C are true, the conclusion A∨C is also true, so the implication is valid.

Conjunctive normal form (CNF) Every sentence of propositional logic is equivalent to a conjunction of disjunctions of literals. Sentences expressed in this way are in conjunctive normal form (CNF). A sentence with exactly k literals per clause is expressed in k-CNF. This is good: it means we can get far with the resolution inference rule.

Wumpus CNF example
B11 ⇔ (P12 ∨ P21) – rule of the game
(B11 ⇒ (P12 ∨ P21)) ∧ ((P12 ∨ P21) ⇒ B11) – biconditional elimination: (A ⇔ B) ≡ ((A ⇒ B) ∧ (B ⇒ A))
(¬B11 ∨ P12 ∨ P21) ∧ (¬(P12 ∨ P21) ∨ B11) – implication elimination: (A ⇒ B) ≡ (¬A ∨ B)
(¬B11 ∨ P12 ∨ P21) ∧ ((¬P12 ∧ ¬P21) ∨ B11) – ”De Morgan”: ¬(A ∨ B) ≡ (¬A ∧ ¬B)
(¬B11 ∨ P12 ∨ P21) ∧ ((¬P12 ∨ B11) ∧ (B11 ∨ ¬P21)) – distributivity: (A ∨ (B ∧ C)) ≡ ((A ∨ B) ∧ (A ∨ C))
(¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (B11 ∨ ¬P21) – voilà, CNF
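
The same conversion can be reproduced mechanically; a sympy sketch (again my illustration, assuming sympy is available):

```python
from sympy import symbols
from sympy.logic.boolalg import Equivalent, to_cnf

B11, P12, P21 = symbols("B11 P12 P21")
# Convert the biconditional game rule to conjunctive normal form.
print(to_cnf(Equivalent(B11, P12 | P21)))
# e.g. (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)
```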

The resolution refutation algorithm Proves by the principle of contradiction: show that KB ⊨ α by proving that (KB ∧ ¬α) is unsatisfiable. 1. Convert (KB ∧ ¬α) to CNF. 2. Apply the resolution inference rule repeatedly to the resulting clauses. 3. Continue until: (a) no more clauses can be added – KB ⊭ α; (b) the empty clause (∅) is produced – KB ⊨ α.

Recall: KB ⊨ α1 iff KB ⊆ α1 iff KB ∧ ¬α1 is never true (unsatisfiable).

Wumpus resolution example
B11 ⇔ (P12 ∨ P21) – rule of the game
(¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (B11 ∨ ¬P21) – CNF
¬B11 – observation
(¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (B11 ∨ ¬P21) ∧ ¬B11 – KB in CNF
¬P21 – hypothesis (α)
KB ∧ ¬α = (¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (B11 ∨ ¬P21) ∧ ¬B11 ∧ P21

Wumpus resolution example KB ∧ ¬α = (¬B11 ∨ P12 ∨ P21) ∧ (¬P12 ∨ B11) ∧ (B11 ∨ ¬P21) ∧ ¬B11 ∧ P21. Resolving (B11 ∨ ¬P21) with ¬B11 gives ¬P21; resolving ¬P21 with P21 gives the empty clause ∅. KB ∧ ¬α is not satisfiable, so we conclude that KB ⊨ α.
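
A minimal resolution-refutation sketch in Python, run on exactly these clauses (the encoding of a clause as a frozenset of literal strings, with '~' marking negation, is my own):

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literal strings)."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def resolution_entails(kb_clauses, neg_alpha_clauses):
    """KB |= alpha iff KB & ~alpha resolves to the empty clause."""
    clauses = set(map(frozenset, kb_clauses + neg_alpha_clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:             # fixpoint: no contradiction derivable
            return False
        clauses |= new

kb = [{"~B11", "P12", "P21"}, {"~P12", "B11"}, {"B11", "~P21"}, {"~B11"}]
print(resolution_entails(kb, [{"P21"}]))   # True: KB |= ~P21
```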

Completeness of resolution S = a set of clauses. RC(S) = the resolution closure of S: the set of all clauses that can be derived from S by the resolution inference rule. RC(S) has finite cardinality (there are finitely many symbols P1, P2, ..., Pk), so resolution refutation must terminate.

Completeness of resolution The ground resolution theorem: if a set of clauses S is unsatisfiable, then RC(S) contains the empty clause ∅. Stated without proof.

Horn clauses and forward-backward chaining Restricted set of clauses: Horn clauses, disjunctions of literals where at most one is positive, e.g. (¬A1 ∨ ¬A2 ∨ ⋯ ∨ ¬Ak ∨ B) or (¬A1 ∨ ¬A2 ∨ ⋯ ∨ ¬Ak). Why Horn clauses? Every Horn clause can be written as an implication: (¬A1 ∨ ¬A2 ∨ ⋯ ∨ ¬Ak ∨ B) ≡ (A1 ∧ A2 ∧ ⋯ ∧ Ak) ⇒ B, and (¬A1 ∨ ¬A2 ∨ ⋯ ∨ ¬Ak) ≡ (A1 ∧ A2 ∧ ⋯ ∧ Ak) ⇒ False. Inference in Horn clauses can be done using forward-backward (F-B) chaining in linear time. Slide adapted from V. Pavlovic

Forward or Backward? Inference can be run forward or backward. Forward chaining: use the current facts in the KB to trigger all possible inferences. Backward chaining: work backward from the query proposition Q; if a rule has Q as a conclusion, see if its antecedents can be shown to be true. Slide adapted from D. Byron

Example of forward chaining [AND-OR graph figure over the proposition symbols A, B, L, M, P, Q, with an agenda of facts still to be processed] Every step is Modus Ponens, e.g. from A, B and a rule A ∧ B ⇒ L, infer L. At the end we’ve proved that Q is true. Slide adapted from V. Pavlovic (who borrowed from Lee?)
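
A sketch of the forward-chaining algorithm (PL-FC-Entails). The rule set below is the standard AIMA example that this slide appears to use (P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L, with facts A and B); treat the exact rules as an assumption:

```python
from collections import deque

def forward_chain(rules, facts, query):
    """PL-FC-Entails sketch. rules: list of (premises, conclusion)
    Horn clauses; every agenda step is an application of Modus Ponens."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred, agenda = set(), deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:      # all premises proved: fire the rule
                    agenda.append(concl)
    return False

rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
         ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(forward_chain(rules, ["A", "B"], "Q"))   # True: Q is proved
```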

Example of backward chaining [AND-OR graph figure: work backward from the query, checking whether the antecedents of rules concluding it hold in the KB] Slide adapted from Lee

Wumpus world revisited Knowledge base (KB) in the initial position (ROG = rule of the game):
1-16: Bi,j ⇔ (Pi,j+1 ∨ Pi,j-1 ∨ Pi-1,j ∨ Pi+1,j) – ROG: pits cause breezes
17-32: Si,j ⇔ (Wi,j+1 ∨ Wi,j-1 ∨ Wi-1,j ∨ Wi+1,j) – ROG: the Wumpus’ odor
33: (W1,1 ∨ W1,2 ∨ W1,3 ∨ ⋯ ∨ W4,3 ∨ W4,4) – ROG: #W ≥ 1
34-...: ¬(Wi,j ∧ Wk,l) – ROG: #W ≤ 1
(G1,1 ∨ G1,2 ∨ G1,3 ∨ ⋯ ∨ G4,3 ∨ G4,4) – ROG: #G ≥ 1
¬(Gi,j ∧ Gk,l) – ROG: #G ≤ 1
(¬B11 ∧ ¬W11 ∧ ¬G11) – ROG: the start square is safe
There are 5 ”on-states” for every square, {W,P,S,B,G}. A 4×4 lattice has 16 × 5 = 80 distinct symbols, so enumerating models means going through 2⁸⁰ models! The physics rules (1-32) are very unsatisfying – no generalization.

Summary Knowledge is in the form of sentences in a knowledge representation language. The representation language has syntax and semantics. Propositional logic: proposition symbols and logical connectives. Inference: model checking, or inference rules (especially resolution). Horn clauses allow fast forward and backward chaining.