Python logic


Python logic Tell me what you do with witches? Burn And what do you burn apart from witches? More witches! Shh! Wood! So, why do witches burn? [pause] B--... 'cause they're made of... wood? Good! Heh heh. Oh, yeah. Oh. So, how do we tell whether she is made of wood? []. Does wood sink in water? No. No, it floats! It floats! Throw her into the pond! The pond! Throw her into the pond! What also floats in water? Bread! Apples! Uh, very small rocks! ARTHUR: A duck! CROWD: Oooh. BEDEVERE: Exactly. So, logically... VILLAGER #1: If... she... weighs... the same as a duck,... she's made of wood. BEDEVERE: And therefore? VILLAGER #2: A witch! VILLAGER #1: A witch! 9/26

Model-checking by Stochastic Hill-climbing
Start with a model (a random t/f assignment to propositions)
For i = 1 to max_flips do
– If model satisfies clauses then return model
– Else clause := a randomly selected clause from clauses that is false in model
  – With probability p, flip the value in model of whichever symbol in clause maximizes the number of satisfied clauses /*greedy step*/
  – With probability (1-p), flip the value in model of a randomly selected symbol from clause /*random step*/
Return Failure
Remarkably good in practice!!
Clauses: 1. (p,s,u)  2. (~p,q)  3. (~q,r)  4. (q,~s,t)  5. (r,s)  6. (~s,t)  7. (~s,u)
Consider the assignment "all false": clauses 1 (p,s,u) and 5 (r,s) are violated.
Pick one, say 5 (r,s): if we flip r, clause 1 remains violated; if we flip s, clauses 4, 6 and 7 become violated. So the greedy step is to flip r, giving "all false except r"; otherwise, pick either symbol randomly.
But this can't be used for an entailment check, since hill-climbing cannot prove the lack of solutions.
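A minimal sketch of the stochastic hill-climbing (WalkSAT-style) procedure described above, run on the seven clauses from the slide. The function name and the string-literal clause representation are assumptions made for illustration, not part of the slides.

    import random

    def walksat(clauses, symbols, p=0.5, max_flips=100000):
        """Stochastic hill-climbing model search over CNF clauses.
        clauses: list of sets of literals; a literal is 'p' or '~p'.
        Returns a satisfying model {symbol: bool} or None on failure."""
        model = {s: random.choice([True, False]) for s in symbols}

        def satisfied(clause, m):
            return any(m[l.lstrip('~')] != l.startswith('~') for l in clause)

        def num_satisfied(m):
            return sum(satisfied(c, m) for c in clauses)

        for _ in range(max_flips):
            false_clauses = [c for c in clauses if not satisfied(c, model)]
            if not false_clauses:
                return model                          # model satisfies all clauses
            clause = random.choice(false_clauses)     # a clause false in the model
            if random.random() < p:
                # greedy step: flip the symbol that maximizes satisfied clauses
                sym = max((l.lstrip('~') for l in clause),
                          key=lambda s: num_satisfied({**model, s: not model[s]}))
            else:
                # random step: flip a randomly selected symbol from the clause
                sym = random.choice(sorted(clause)).lstrip('~')
            model[sym] = not model[sym]
        return None   # failure; note this does NOT prove unsatisfiability

    # The seven clauses from the slide
    clauses = [{'p','s','u'}, {'~p','q'}, {'~q','r'}, {'q','~s','t'},
               {'r','s'}, {'~s','t'}, {'~s','u'}]
    print(walksat(clauses, list('pqrstu')))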

Phase Transition in SAT
Theoretically, we only know lower and upper bounds on the phase-transition ratio (the lower bound is 3.26); experimentally, it seems to be close to 4.3. (We also have a proof that 3-SAT has a sharp threshold.)
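A small, hedged experiment illustrating the phase transition: generate random 3-SAT instances at several clause/variable ratios and measure the fraction that are satisfiable. The brute-force check is only feasible for tiny instances; the function names and sizes are assumptions for illustration.

    import itertools, random

    def random_3sat(n_vars, n_clauses):
        """A random 3-SAT instance: each clause picks 3 distinct variables,
        each negated with probability 1/2. A literal is (var_index, polarity)."""
        return [[(v, random.choice([True, False]))
                 for v in random.sample(range(n_vars), 3)]
                for _ in range(n_clauses)]

    def satisfiable(n_vars, clauses):
        """Brute-force satisfiability test (exponential; fine for tiny n)."""
        for bits in itertools.product([False, True], repeat=n_vars):
            if all(any(bits[v] == pol for v, pol in c) for c in clauses):
                return True
        return False

    n, samples = 10, 25
    for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
        m = int(ratio * n)
        frac = sum(satisfiable(n, random_3sat(n, m)) for _ in range(samples)) / samples
        print(f"clauses/variables = {ratio:.1f}: fraction satisfiable ~ {frac:.2f}")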

Progress in nailing the bound.. (just FYI) Not discussed in class

Representation Reasoning

How logics extend propositional logic (diagram):
– Prop logic + objects, relations → First-order predicate logic (FOPC)
– Prop logic + degree of belief → Prob. prop. logic
– Objects, relations + degree of belief → First-order prob. logic
– + degree of truth → Fuzzy logic
– FOPC + time → First-order temporal logic

Ontological and epistemological commitments (assertions are t/f):
– Prop logic: ontological commitment = facts; epistemological commitment = t/f/u
– Prob. prop. logic: facts; degree of belief
– FOPC: facts, objects, relations; t/f/u
– Prob. FOPC: facts, objects, relations; degree of belief

α is true in all worlds (rows) where KB is true… so it is entailed.

KB & ~α is unsatisfiable (entails False). So, to check whether KB entails α, negate α, add it to the KB, and try to show that the resulting (propositional) theory has no models (you must use systematic methods for this). Proof by model checking.
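A minimal truth-table sketch of the check just described: KB entails α iff KB & ~α has no model. Representing sentences as functions over a model dict is just an illustrative assumption; the example clauses anticipate the W => J ("WMD") slides later in the deck.

    import itertools

    def entails(kb, alpha, symbols):
        """KB |= alpha  iff  KB & ~alpha has no satisfying model.
        kb and alpha are predicates over a model dict {symbol: bool}."""
        for values in itertools.product([False, True], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if kb(model) and not alpha(model):
                return False   # found a model of KB & ~alpha: not entailed
        return True            # KB & ~alpha is unsatisfiable: entailed

    # (W => J) & (~W => J) entails J
    kb = lambda m: (not m['W'] or m['J']) and (m['W'] or m['J'])
    alpha = lambda m: m['J']
    print(entails(kb, alpha, ['W', 'J']))   # True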

Connection between Entailment and Satisfiability
The Boolean satisfiability problem is closely connected to propositional entailment. Specifically, propositional entailment is the "conjugate" problem of Boolean satisfiability, since we have to show that KB & ~f has no satisfying model in order to show that KB |= f. Of late, our ability to solve very large satisfiability problems has increased quite significantly.

Entailment & Satisfiability
The SAT (Boolean satisfiability) problem: given a set of propositions and a set of (CNF) clauses, find a model (an assignment of t/f values to the propositions) that satisfies all clauses.
– k-SAT is a SAT problem where all clauses have length at most k.
  » SAT is NP-complete; 1-SAT and 2-SAT are polynomial; k-SAT for k > 2 is NP-complete (so 3-SAT is the smallest NP-complete k-SAT).
– If we have a procedure for solving SAT problems, we can use it to compute entailment: the sentence S is entailed iff the negation of S, when added to the KB, gives a theory that is unsatisfiable (has no model). Entailment is thus co-NP-complete.
– SAT is useful for modeling many other "assignment" problems. We will see the use of SAT for planning; it can also be used for graph coloring, n-queens, scheduling, circuit verification, etc. (the last makes SAT very interesting for electrical-engineering folks).
– Our ability to solve very large SAT problems has increased quite phenomenally in recent years: we can solve SAT instances with millions of variables and clauses quite easily. To use this technology for inference, we will have to consider systematic SAT solvers.
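As a hedged illustration of "modeling other assignment problems as SAT", here is a sketch that encodes graph 3-coloring as CNF clauses. The encoding and the brute-force model finder (standing in for a real SAT solver) are assumptions for illustration only.

    from itertools import combinations, product

    def coloring_to_cnf(edges, n_nodes, k=3):
        """Encode graph k-coloring as CNF. Variable (node, color) means
        'node gets this color'; a literal is ((node, color), polarity)."""
        clauses = []
        for v in range(n_nodes):
            # each node gets at least one color...
            clauses.append([((v, c), True) for c in range(k)])
            # ...and at most one color
            for c1, c2 in combinations(range(k), 2):
                clauses.append([((v, c1), False), ((v, c2), False)])
        for u, v in edges:
            # adjacent nodes never share a color
            for c in range(k):
                clauses.append([((u, c), False), ((v, c), False)])
        return clauses

    def brute_force_sat(clauses):
        """Tiny model finder standing in for a real SAT solver (toy sizes only)."""
        variables = sorted({lit for clause in clauses for lit, _ in clause})
        for values in product([False, True], repeat=len(variables)):
            model = dict(zip(variables, values))
            if all(any(model[lit] == pol for lit, pol in clause) for clause in clauses):
                return model
        return None

    # a triangle plus one extra node: 3-colorable
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    model = brute_force_sat(coloring_to_cnf(edges, 4, k=3))
    print(sorted(var for var, val in model.items() if val))  # chosen (node, color) pairs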

Davis-Putnam-Logemann-Loveland (DPLL) Procedure [algorithm figure; annotation: detect failure]

DPLL Example
Clauses: 1. (p,s,u)  2. (~p,q)  3. (~q,r)  4. (q,~s,t)  5. (r,s)  6. (~s,t)  7. (~s,u)
– Pick p; set p = true.
– Unit propagation: (p,s,u) satisfied (remove); from p and (~p,q), q is derived; set q = true.
– (~p,q) satisfied (remove); (q,~s,t) satisfied (remove); from q and (~q,r), r is derived; set r = true.
– (~q,r) satisfied (remove); (r,s) satisfied (remove).
– Pure-literal elimination: in all the remaining clauses, s occurs only negatively, so set ~s = true (i.e. s = false). Note that s was not pure in all clauses, only in the remaining ones.
– At this point all clauses are satisfied. Return p = true, q = true, r = true, s = false.
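A compact DPLL sketch with unit propagation and pure-literal elimination, run on the clauses above. The (symbol, polarity) tuple representation is an assumption for illustration; because this version tries pure literals before splitting, it may return a different (but equally valid) model than the walkthrough above.

    def dpll(clauses, assignment=None):
        """DPLL sketch: unit propagation, pure-literal elimination, then split.
        clauses: iterable of sets of literals; a literal is (symbol, polarity).
        Returns a (possibly partial) satisfying assignment, or None."""
        assignment = dict(assignment or {})
        clauses = [set(c) for c in clauses]

        def assign(sym, val):
            """Set sym=val; drop satisfied clauses, shrink others. False on conflict."""
            assignment[sym] = val
            remaining = []
            for c in clauses:
                if (sym, val) in c:
                    continue                        # clause satisfied: remove it
                c = c - {(sym, not val)}            # falsified literal drops out
                if not c:
                    return False                    # empty clause: contradiction
                remaining.append(c)
            clauses[:] = remaining
            return True

        while True:
            unit = next((c for c in clauses if len(c) == 1), None)
            if unit:                                # unit propagation
                if not assign(*next(iter(unit))):
                    return None
                continue
            literals = {l for c in clauses for l in c}
            pure = next((l for l in literals if (l[0], not l[1]) not in literals), None)
            if pure:                                # pure-literal elimination
                assign(*pure)                       # cannot fail: no clause loses its last literal
                continue
            break

        if not clauses:
            return assignment                       # all clauses satisfied
        sym = next(iter(next(iter(clauses))))[0]    # split on some remaining symbol
        for val in (True, False):
            result = dpll(clauses + [{(sym, val)}], assignment)
            if result is not None:
                return result
        return None

    # The clauses from the example above
    cnf = [{('p',True),('s',True),('u',True)}, {('p',False),('q',True)},
           {('q',False),('r',True)}, {('q',True),('s',False),('t',True)},
           {('r',True),('s',True)}, {('s',False),('t',True)}, {('s',False),('u',True)}]
    print(dpll(cnf))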

Lots of work in SAT solvers
– DPLL was the first (early 60's)
– Circa 1994 came GSAT (hill-climbing search for SAT)
– Circa 1997 came SATZ
– Then came RelSAT
– ~2000 came CHAFF
– Current best can be found at –

Inference rules
Sound (but incomplete):
– Modus ponens: A=>B, A |= B
– Modus tollens: A=>B, ~B |= ~A
– Abduction (??): A=>B, ~A |= ~B
– Chaining: A=>B, B=>C |= A=>C
Complete (but unsound):
– "Python" logic
How about SOUND & COMPLETE? Resolution (needs normal forms).
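A tiny truth-table check (an illustrative addition, not from the slides) of why modus ponens and modus tollens are sound while the rule marked "(??)" is not: a rule is sound exactly when every assignment that satisfies the premises also satisfies the conclusion. The helper names are hypothetical.

    from itertools import product

    def sound(premises, conclusion):
        """A rule is sound iff every truth assignment making all premises true
        also makes the conclusion true (premises |= conclusion)."""
        for A, B in product([False, True], repeat=2):
            env = {'A': A, 'B': B}
            if all(p(env) for p in premises) and not conclusion(env):
                return False
        return True

    implies = lambda x, y: (not x) or y

    # Modus ponens: A=>B, A |= B       -- sound
    print(sound([lambda e: implies(e['A'], e['B']), lambda e: e['A']],
                lambda e: e['B']))                      # True
    # Modus tollens: A=>B, ~B |= ~A    -- sound
    print(sound([lambda e: implies(e['A'], e['B']), lambda e: not e['B']],
                lambda e: not e['A']))                  # True
    # The "(??)" rule: A=>B, ~A |= ~B  -- unsound (fails when A=false, B=true)
    print(sound([lambda e: implies(e['A'], e['B']), lambda e: not e['A']],
                lambda e: not e['B']))                  # False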

If WMDs are found, the war is justified: W => J
If WMDs are not found, the war is still justified: ~W => J
Is the war justified anyway? |= J?
Can modus ponens derive it? We need something that does case analysis.

Forward: apply resolution steps until the fact f you want to prove appears as a resolvent.
Backward (resolution refutation): add the negation of the fact f you want to derive to the KB, then apply resolution steps until you derive an empty clause.

Don't need to use other equivalences if we use resolution in refutation style.
If WMDs are found, the war is justified: ~W V J
If WMDs are not found, the war is still justified: W V J
Either WMDs are found or they are not found: W V ~W
Is the war justified anyway? |= J?
Refutation: add ~J; resolving ~J with ~W V J gives ~W; resolving ~J with W V J gives W; resolving W with ~W gives the empty clause.
(Directly: resolving W V J with ~W V J gives J V J = J.)
Resolution does case analysis.

Prolog without variables and without the cut operator is doing Horn-clause theorem proving. For any KB in Horn form, modus ponens is a sound and complete inference procedure. (Aside, from CSE/EEE 120: CNF is also known as the product-of-sums form; the sum-of-products form is DNF.)

Conversion to CNF form
A CNF clause is a disjunction of literals, where a literal is a proposition or a negated proposition.
Conversion steps:
– Remove implications
– Pull negation inward (De Morgan's laws)
– Distribute disjunction over conjunction
– Separate the conjunctions into clauses
ANY propositional logic sentence can be converted into CNF form.
Try: ~(P & Q) => ~(R V W)
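A sketch of the conversion steps just listed (remove implications, push negation inward with De Morgan's laws, distribute disjunction over conjunction), run on the slide's exercise. The tuple-based formula representation is an assumption for illustration.

    # Formulas: a proposition is a string; compound formulas are tuples like
    # ('not', f), ('and', f, g), ('or', f, g), ('=>', f, g).

    def remove_implication(f):
        if isinstance(f, str):
            return f
        if f[0] == '=>':
            return ('or', ('not', remove_implication(f[1])), remove_implication(f[2]))
        return (f[0],) + tuple(remove_implication(x) for x in f[1:])

    def push_negation(f):
        if isinstance(f, str):
            return f
        if f[0] == 'not':
            g = f[1]
            if isinstance(g, str):
                return f
            if g[0] == 'not':                      # double negation
                return push_negation(g[1])
            if g[0] in ('and', 'or'):              # De Morgan's laws
                dual = 'or' if g[0] == 'and' else 'and'
                return (dual, push_negation(('not', g[1])), push_negation(('not', g[2])))
        return (f[0],) + tuple(push_negation(x) for x in f[1:])

    def distribute(f):
        """Distribute disjunction over conjunction to reach CNF."""
        if isinstance(f, str) or f[0] == 'not':
            return f
        a, b = distribute(f[1]), distribute(f[2])
        if f[0] == 'or':
            if not isinstance(a, str) and a[0] == 'and':
                return ('and', distribute(('or', a[1], b)), distribute(('or', a[2], b)))
            if not isinstance(b, str) and b[0] == 'and':
                return ('and', distribute(('or', a, b[1])), distribute(('or', a, b[2])))
        return (f[0], a, b)

    def to_cnf(f):
        return distribute(push_negation(remove_implication(f)))

    # The exercise from the slide: ~(P & Q) => ~(R V W)
    f = ('=>', ('not', ('and', 'P', 'Q')), ('not', ('or', 'R', 'W')))
    print(to_cnf(f))   # (P V ~R) & (P V ~W) & (Q V ~R) & (Q V ~W)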

Need for resolution
Yankees win, it is Destiny: ~Y V D
Dbacks win, it is Destiny: ~Db V D
Yankees or Dbacks win: Y V Db
Is it Destiny either way? |= D?
Can modus ponens derive it? (Not until Sunday, when the Dbacks won.)
Direct resolution: resolving Y V Db with ~Db V D gives D V Y; resolving that with ~Y V D gives D V D = D. Resolution does case analysis.
Refutation style (don't need to use other equivalences): add ~D; resolving ~D with ~Y V D gives ~Y; resolving ~Y with Y V Db gives Db; resolving Db with ~Db V D gives D; resolving D with ~D gives the empty clause.

Mad chase for the empty clause…
– You must have everything in CNF clauses before you can resolve.
– The goal must be negated first, before it is converted into CNF form.
– The goal (the fact to be proved) may get converted into multiple clauses (e.g. if we want to prove P V Q, we get two clauses, ~P and ~Q, to add to the database).
– Resolution works by resolving away a single literal and its negation:
  – P V Q resolved with ~P V ~Q is not empty! In fact, these clauses are not inconsistent (P true and Q false will make sure that both clauses are satisfied).
  – P V Q is the negation of ~P & ~Q; the latter becomes two separate clauses, ~P and ~Q. So, by doing two separate resolutions with these two clauses, we can derive the empty clause.

Steps in Resolution Refutation
Consider the following problem:
– If the grass is wet, then it is either raining or the sprinkler is on: GW => R V SP, i.e. ~GW V R V SP
– If it is raining, then Timmy is happy: R => TH, i.e. ~R V TH
– If the sprinklers are on, Timmy is happy: SP => TH, i.e. ~SP V TH
– If Timmy is happy, then he sings: TH => SG, i.e. ~TH V SG
– Timmy is not singing: ~SG
– Prove that the grass is not wet: |= ~GW?
Refutation: add GW; resolving with ~GW V R V SP gives R V SP; with ~R V TH gives TH V SP; with ~TH V SG gives SG V SP; with ~SG gives SP; with ~SP V TH gives TH; with ~TH V SG gives SG; with ~SG gives the empty clause.
Is there search in inference? Yes!! Many possible inferences can be done, but only a few are actually relevant.
– Idea: Set of Support. At least one of the resolved clauses is a goal clause, or a descendant of a clause derived from a goal clause. (Used in the example here!)

Search in Resolution
– Convert the database into clausal form Dc.
– Negate the goal first, and then convert it into clausal form Dg. Let D = Dc + Dg.
– Loop:
  – Select a pair of clauses C1 and C2 from D. Different control strategies can be used to select C1 and C2 to reduce the number of resolutions tried:
    » Idea 1: Set of Support: either C1 or C2 must be the goal clause or a clause derived by doing resolutions on the goal clause (*COMPLETE*).
    » Idea 2: Linear input form: either C1 or C2 must be one of the clauses in the input KB (*INCOMPLETE*).
  – Resolve C1 and C2 to get C12.
  – If C12 is the empty clause, QED!! Return success (we proved the theorem).
  – D = D + C12.
– End loop. If we come here, we couldn't get the empty clause; return "failure".
– Finiteness is guaranteed if we make sure that we never resolve the same pair of clauses more than once, AND we use factoring, which removes multiple copies of literals from a clause (e.g. Q V P V P => Q V P).
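A minimal sketch of this loop with a set-of-support strategy, demonstrated on the WMD example from earlier. Clauses are frozensets of (symbol, polarity) literals; this representation and the function names are assumptions for illustration, not the slides' own code.

    def resolve(c1, c2):
        """All resolvents of two clauses (frozensets of (symbol, polarity) literals)."""
        out = []
        for sym, pol in c1:
            if (sym, not pol) in c2:
                resolvent = (c1 - {(sym, pol)}) | (c2 - {(sym, not pol)})
                out.append(frozenset(resolvent))   # factoring is implicit in the set union
        return out

    def refute(kb_clauses, goal_clauses):
        """Try to derive the empty clause from KB + negated goal.
        Set of support: every resolution uses at least one goal-derived clause."""
        kb = {frozenset(c) for c in kb_clauses}
        sos = {frozenset(c) for c in goal_clauses}   # negated goal and its descendants
        tried = set()
        while True:
            new = set()
            for c1 in sos:
                for c2 in kb | sos:
                    if (c1, c2) in tried:
                        continue
                    tried.add((c1, c2))
                    for r in resolve(c1, c2):
                        if not r:
                            return True              # empty clause: theorem proved
                        new.add(r)
            if new <= (kb | sos):
                return False                         # nothing new: not provable
            sos |= new

    # WMD example: KB = {~W V J, W V J}; goal J, so add ~J and refute.
    KB = [{('W', False), ('J', True)}, {('W', True), ('J', True)}]
    negated_goal = [{('J', False)}]
    print(refute(KB, negated_goal))   # True: KB |= J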

Complexity of Propositional Inference
– Any sound and complete inference procedure has to be co-NP-complete (since model-theoretic entailment computation is co-NP-complete, because model-theoretic satisfiability is NP-complete).
– Given a propositional database of size d:
  – Any sentence S that follows from the database by modus ponens can be derived in linear time.
  – If the database has only HORN sentences (sentences whose CNF form has at most one positive literal per clause, e.g. A & B => C), then MP is complete for that database. PROLOG uses (first-order) Horn sentences.
  – Deriving all sentences that follow by resolution is co-NP-complete (exponential).
  – Anything that follows by unit resolution can be derived in linear time. (Unit resolution: at least one of the resolved clauses must be a clause of length 1.)
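A sketch of forward chaining with modus ponens over a Horn KB, which is sound and complete for Horn sentences; with the usual "count unsatisfied premises" trick each rule fires at most once, giving roughly linear-time behavior. The rule format and example facts are assumptions for illustration.

    from collections import deque

    def horn_forward_chain(facts, rules, query):
        """Forward chaining: rules are (premises, conclusion) pairs,
        e.g. (['A', 'B'], 'C') for A & B => C. Returns True iff query is derived."""
        count = {i: len(prem) for i, (prem, _) in enumerate(rules)}  # unsatisfied premises
        inferred = set()
        agenda = deque(facts)
        while agenda:
            p = agenda.popleft()
            if p == query:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:
                        agenda.append(concl)     # all premises proved: fire the rule
        return False

    rules = [(['A', 'B'], 'C'), (['C'], 'D'), (['D', 'E'], 'F')]
    print(horn_forward_chain(['A', 'B'], rules, 'D'))   # True
    print(horn_forward_chain(['A', 'B'], rules, 'F'))   # False (E is never established)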

Compiling Planning into SAT
Init: At-R-E-0 & At-A-E-0 & At-B-E-0
Goal: In-A-1 & In-B-1
Graph ("cond at k => one of the supporting actions at k-1"):
  In-A-1 => Load-A-1
  In-B-1 => Load-B-1
  At-R-M-1 => Fly-R-1
  At-R-E-1 => P-At-R-E-1
"Actions => preconds":
  Load-A-1 => At-R-E-0 & At-A-E-0
  Load-B-1 => At-R-E-0 & At-B-E-0
  P-At-R-E-1 => At-R-E-0
"Mutexes":
  ~In-A-1 V ~At-R-M-1
  ~In-B-1 V ~At-R-M-1
Goals: In(A), In(B)
One way of finding a k-length plan is to grow a k-length planning graph (with mutexes) and look for a valid subgraph of this graph. If one is not found, extend the graph and try again.
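A sketch that writes down the clauses listed above (implications become CNF clauses) and finds a model with a brute-force check standing in for a real SAT solver. Proposition names follow the slide; the encoding helper and the exhaustive search are illustrative assumptions, not the slides' own tooling.

    from itertools import product

    def implies(a, *bs):
        """a => b1 & b2 & ...  becomes one clause (~a V b_i) per consequent."""
        return [[(a, False), (b, True)] for b in bs]

    clauses  = [[('At-R-E-0', True)], [('At-A-E-0', True)], [('At-B-E-0', True)]]   # Init
    clauses += [[('In-A-1', True)], [('In-B-1', True)]]                             # Goal
    clauses += implies('In-A-1', 'Load-A-1') + implies('In-B-1', 'Load-B-1')        # cond => action
    clauses += implies('At-R-M-1', 'Fly-R-1') + implies('At-R-E-1', 'P-At-R-E-1')
    clauses += implies('Load-A-1', 'At-R-E-0', 'At-A-E-0')                          # action => preconds
    clauses += implies('Load-B-1', 'At-R-E-0', 'At-B-E-0')
    clauses += implies('P-At-R-E-1', 'At-R-E-0')
    clauses += [[('In-A-1', False), ('At-R-M-1', False)],                           # mutexes
                [('In-B-1', False), ('At-R-M-1', False)]]

    variables = sorted({p for c in clauses for p, _ in c})
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[p] == pol for p, pol in c) for c in clauses):
            print(sorted(p for p, v in model.items() if v))   # one satisfying plan-model
            break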