CS 416 Artificial Intelligence, Lecture 9: Logical Agents (Chapter 7)


Assignment due Monday
I'll release binaries that provide more output: an explanation for why your output doesn't exactly match the provided solution's.
We request "expand nodes in the following order":
–Could mean nodes are put on the fringe in the requested order
–Could mean nodes are pulled from the fringe in the requested order
We'll be flexible in the grading.

Where are we?
We've studied classes of search problems:
Find the best sequence of actions
–Uninformed search (BFS, iterative deepening)
–Informed search (A*)
Find the best value of something (possibly a sequence)
–Simulated annealing, genetic algorithms, hill climbing
Find the best action in an adversarial setting

Logical Agents
What are we talking about, "logical?"
Aren't search-based chess programs logical?
–Yes, but knowledge is used in a very specific way:
 Win the game
 Not useful for extracting strategies or understanding other aspects of chess
We want to develop more general-purpose knowledge systems that support a variety of logical analyses.

Why study knowledge-based agents?
Partially observable environments
–Combine available information (percepts) with general knowledge to select actions
Natural language
–Language is too complex and ambiguous; problem-solving agents are impeded by a high branching factor
Flexibility
–Knowledge can be reused for novel tasks; new knowledge can be added to improve future performance

Components of a knowledge-based agent
Knowledge base (KB)
–Store information in a knowledge representation language
–Add information (TELL)
–Retrieve information (ASK)
–Perform inference: derive new sentences (knowledge) from existing sentences
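The TELL/ASK interface above can be sketched as a tiny class. This is a minimal illustration, not the lecture's implementation: the class and function names (`KnowledgeBase`, `lookup`) are my own, sentences are stored as plain strings, and the inference routine is pluggable so a real algorithm can replace the trivial lookup.

```python
# Minimal sketch of a knowledge base with TELL and ASK.
class KnowledgeBase:
    def __init__(self):
        self.sentences = []          # sentences in the representation language

    def tell(self, sentence):
        """TELL: add a new fact to the KB."""
        self.sentences.append(sentence)

    def ask(self, query, infer):
        """ASK: return True if `infer` can derive `query` from the KB."""
        return infer(self.sentences, query)

# Trivial inference routine: a query follows only if it was told directly.
def lookup(sentences, query):
    return query in sentences

kb = KnowledgeBase()
kb.tell("B21")                       # TELL: a breeze was felt in [2,1]
print(kb.ask("B21", lookup))         # ASK -> True
```

A real agent would swap `lookup` for a routine that derives new sentences, such as the model-checking procedure discussed later.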

The wumpus world
A scary world, indeed:
–A maze in a cave
–A wumpus who will eat you
–One arrow that can kill the wumpus
–Pits that can entrap you (but not the wumpus, for it is too large to fall in)
–A heap of gold somewhere

But you have sensing and action
Sensing (each sense is either on or off – a single bit):
–The wumpus emits a stench in adjacent squares
–Pits cause a breeze in adjacent squares
–Gold causes a glitter you see when in its square
–Walking into a wall causes a bump
–The death of the wumpus can be heard everywhere in the world

But you have sensing and action
Action:
–You can turn left or right 90 degrees
–You can move forward
–You can shoot an arrow in your facing direction
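Since each sense is a single bit, a percept is naturally a 5-tuple of booleans. A hypothetical encoding (the `Percept` name and field order are my own, matching the five senses listed above):

```python
from collections import namedtuple

# One bit per sense: stench, breeze, glitter, bump, scream.
Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

# In square [2,1] of the running example, the agent feels only a breeze.
p = Percept(stench=False, breeze=True, glitter=False, bump=False, scream=False)
print(p.breeze and not p.stench)   # True
```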

An example

Our agent played well
–Used inference to relate two different percepts observed from different locations
–The agent is guaranteed to draw correct conclusions if the percepts are correct

Knowledge Representation
Must be syntactically and semantically correct.
Syntax
–The formal specification of how information is stored
–a + 2 = c (typical mathematical syntax)
–a2y += (not legal syntax for infix (regular math) notation)
Semantics
–The meaning of the information
–a + 2 = c (c is a number whose value is 2 more than a)
–a + 2 = c (the symbol that comes two after 'a' in the alphabet is 'c')

Logical Reasoning
Entailment: one sentence follows logically from another.
–The sentence α entails the sentence β
–α entails β (α ⊨ β) if and only if, in every model in which α is true, β is also true
Model: a description of the world in which every relevant sentence has been assigned truth or falsehood.

An example
After one step in the wumpus world, the knowledge base (KB) is:
–The set of all game states that are possible given:
 the rules of the game
 the percept sequence
How does the KB represent the game?

Building the KB
Consider a KB intended to represent the presence of pits in a wumpus world where [1,1] is clear and [2,1] has a breeze.
–There are three candidate cells, each with two conditions (pit or no pit)
–2^3 = 8 possible models
–According to the percepts and rules, the KB is well defined

Model Checking
The agent wishes to check all models of the game in which a pit may be in the three candidate spots.
–Enumerate all models where the three candidate spots may have pits

Checking entailment
Can "α1: there is no pit in [1,2]" be true?
–Enumerate all models where α1 is true
–In every model where the KB is true, α1 is true also
–The KB entails α1

Checking entailment
Can "α2: there is no pit in [2,2]" be true?
–Enumerate all models where α2 is true
–In some models where the KB is true, α2 is false
–The KB does not entail α2
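The two entailment checks above can be run mechanically over the 8 models. A sketch under the slide's assumptions: the three booleans stand for pits in [1,2], [2,2], [3,1], and `kb_true` encodes what the first two percepts force (no breeze in [1,1] rules out a pit in [1,2]; the breeze in [2,1] requires a pit in [2,2] or [3,1]).

```python
from itertools import product

def kb_true(p12, p22, p31):
    # No breeze in [1,1] -> no pit in [1,2]; breeze in [2,1] -> pit in [2,2] or [3,1].
    return (not p12) and (p22 or p31)

# Keep only the models (out of 2^3 = 8) in which the KB is true.
models = [m for m in product([False, True], repeat=3) if kb_true(*m)]
print(len(models))                                  # 3

alpha1 = lambda p12, p22, p31: not p12              # "no pit in [1,2]"
alpha2 = lambda p12, p22, p31: not p22              # "no pit in [2,2]"

print(all(alpha1(*m) for m in models))              # True:  KB entails alpha1
print(all(alpha2(*m) for m in models))              # False: KB does not entail alpha2
```

Three models satisfy the KB; α1 holds in all of them, while α2 fails in the models that place the pit in [2,2].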

Logical inference
Entailment permitted logic:
–We inferred new knowledge from entailments
Inference algorithms:
–The method of logical inference we demonstrated is called model checking, because we enumerated all possibilities to find the inference

Inference algorithms
There are many inference algorithms.
–If an inference algorithm i can derive α from the KB, we write KB ⊢i α

Inference Algorithms
Sound
–The algorithm derives only entailed sentences
–An unsound algorithm derives falsehoods
Complete
–The algorithm can derive any sentence that is entailed
–Means the inference algorithm cannot become caught in an infinite loop

Propositional (Boolean) Logic
A very simple language:
–Propositions are TRUE, FALSE, or COMPLETELY UNKNOWN
–Syntax of allowable sentences:
 Atomic sentences
 Complex sentences
 Backus-Naur Form (BNF)
–Inference: derive new sentences from old ones
–Reasoning: patterns of sound inference that find proofs

Atomic sentences
Syntax:
–Indivisible syntactic elements (also called literals)
–Use uppercase letters to represent a proposition that can be true or false (W1,2 means there is a wumpus in [1,2])
–True and False are predefined propositions, where True means always true and False means always false

Complex sentences
Formed from atomic sentences using connectives:
–~ (not): the negation
–^ (and): the conjunction
–V (or): the disjunction
–=> (implies): the implication
–<=> (if and only if): the biconditional
(A and ~A are called literals)

Backus-Naur Form (BNF)
Order of precedence is: ~, ^, V, =>, <=>

Propositional (Boolean) Logic
Semantics: given a particular model (situation), what are the rules that determine the truth of a sentence?
–m1 = {P1,2 = false, P2,2 = false, P3,1 = true}
–What is ~P1,2 ^ (P2,2 V P3,1)?
–What are the rules of ~, ^, V?
Use a truth table to compute the value of any sentence with respect to a model by recursive evaluation.
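Recursive evaluation can be sketched directly: represent a sentence as a nested tuple (the operator names and tuple encoding are my own, not from the lecture) and recurse until atomic propositions are looked up in the model.

```python
def holds(sentence, model):
    """Recursively evaluate a sentence's truth in a model."""
    if isinstance(sentence, str):            # atomic proposition: look it up
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not holds(args[0], model)
    if op == "and":
        return holds(args[0], model) and holds(args[1], model)
    if op == "or":
        return holds(args[0], model) or holds(args[1], model)
    raise ValueError("unknown connective: " + op)

m1 = {"P12": False, "P22": False, "P31": True}
# The slide's sentence: ~P1,2 ^ (P2,2 V P3,1)
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(holds(s, m1))    # True: ~false ^ (false V true)
```

Evaluating every sentence this way, once per row of the truth table, is exactly the recursive procedure the slide describes.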

Truth table (our semantics)

Example from wumpus
A square is breezy if and only if a neighboring square has a pit:
–B1,1 <=> (P1,2 V P2,1)
A square is breezy if a neighboring square has a pit:
–(P1,2 V P2,1) => B1,1
The former is more powerful and truer to the wumpus rules.

A wumpus knowledge base
Initial conditions:
–R1: ~P1,1 (no pit in [1,1])
Rules of breezes (for a few example squares):
–R2: B1,1 <=> (P1,2 V P2,1)
–R3: B2,1 <=> (P1,1 V P2,2 V P3,1)
Percepts:
–R4: ~B1,1
–R5: B2,1
We know: R1 ^ R2 ^ R3 ^ R4 ^ R5

Inference
Does the KB entail α (KB ⊨ α)?
–Is there a pit in [1,2]: P1,2?
Consider only what we need:
–B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, P3,1
–2^7 = 128 models to check
For each model, see if the KB is true.
For all models where the KB is true, see if α is true.
This algorithm is called Model Checking.
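The full model-checking run over the seven symbols can be sketched as follows: enumerate all 2^7 assignments, keep the ones satisfying R1 through R5, and test whether the query holds in every one of them.

```python
from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    """R1..R5 of the wumpus KB, evaluated in model m."""
    r1 = not m["P11"]                                    # no pit in [1,1]
    r2 = m["B11"] == (m["P12"] or m["P21"])              # breeze rule for [1,1]
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # breeze rule for [2,1]
    r4 = not m["B11"]                                    # percept: no breeze in [1,1]
    r5 = m["B21"]                                        # percept: breeze in [2,1]
    return r1 and r2 and r3 and r4 and r5

def entails(query):
    """Model checking: query must hold in every model where the KB holds."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not query(m):
            return False
    return True        # all 2^7 = 128 models checked

print(entails(lambda m: not m["P12"]))   # True: KB entails ~P1,2
```

Note how ~B1,1 combined with R2 forces P1,2 false in every KB-satisfying model, which is why the entailment goes through.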

Inference
Truth table result: there is no pit in [1,2] (the KB entails ~P1,2).

Concepts related to entailment
Logical equivalence
–Two sentences a and b are logically equivalent if they are true in the same set of models: a <=> b
Validity (or tautology)
–A sentence that is valid is true in all models, e.g. P V ~P
–Deduction theorem: a entails b if and only if (a => b) is valid
Satisfiability
–A sentence that is true in some model
Unsatisfiability
–a entails b if and only if (a ^ ~b) is unsatisfiable

Logical Equivalences Know these equivalences (no need to memorize)

Reasoning w/ propositional logic
Inference rules
–Modus Ponens: whenever sentences of the form α => β and α are given, the sentence β can be inferred
 R1: Green => Martian
 R2: Green
 Inferred: Martian

Reasoning w/ propositional logic
Inference rules
–And-Elimination: any of the conjuncts can be inferred
 R1: Martian ^ Green
 Inferred: Martian
 Inferred: Green
Use truth tables if you want to confirm inference rules.
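The truth-table confirmation suggested above can be sketched in a few lines: a rule is sound if, in every row where the premises are true, the conclusion is true too.

```python
from itertools import product

# Modus Ponens: in every model where (a => b) and a both hold, b holds.
sound = True
for a, b in product([False, True], repeat=2):
    implies = (not a) or b          # truth-table definition of a => b
    if implies and a and not b:     # premises true but conclusion false?
        sound = False
print(sound)    # True

# And-Elimination: in every model where a ^ b holds, a holds.
print(all(a for a, b in product([False, True], repeat=2) if a and b))  # True
```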

Example of a proof of ~P1,2
[Grid diagram: ~P and ~B marked in [1,1], B in [2,1], P? in the candidate squares]

Example of a proof of ~P1,2
[Grid diagram: same as before, with ~P now derived for [1,2]]

Constructing a proof
Proving is like searching:
–Find a sequence of logical inference rules that leads to the desired result
–Note the explosion of propositions
–Good proof methods ignore the countless irrelevant propositions

Monotonicity of knowledge base
The knowledge base can only get larger:
–Adding new sentences to the knowledge base can only make it larger
–If (KB entails α), then ((KB ^ β) entails α)
This is important when constructing proofs:
–A logical conclusion drawn at one point cannot be invalidated by a subsequent entailment