
1 Inference in first-order logic Chapter 9

2 Outline
Reducing first-order inference to propositional inference
Unification
Generalized Modus Ponens
Forward chaining
Backward chaining
Resolution

3 Universal instantiation (UI)
Every instantiation of a universally quantified sentence is entailed by it:
∀v α
----------------
Subst({v/g}, α)
for any variable v and ground term g.
E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields:
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John))

4 Existential instantiation (EI)
For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:
∃v α
----------------
Subst({v/k}, α)
E.g., ∃x Crown(x) ∧ OnHead(x,John) yields:
Crown(C1) ∧ OnHead(C1,John)
provided C1 is a new constant symbol, called a Skolem constant.

5 Skolem constants and functions
A Skolem constant lets you remove ∃:
There exists a good teacher
– ∃x Teacher(x) ∧ Good(x)
– Teacher(C) ∧ Good(C)
Everyone has a mother
– ∀x Person(x) ⇒ ∃y Mother(y,x)
– Introduce a Skolem function m(x): Person(x) ⇒ Mother(m(x), x)
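Prolog has no explicit existential quantifier, but a Skolem function can be written as an ordinary function term. A minimal Prolog sketch of the "everyone has a mother" example (the fact person(jack) and the function name m/1 are only illustrative):

person(jack).                    % an example person
mother(m(X), X) :- person(X).    % m(X) stands for "the mother of X" (a Skolem term)

?- mother(M, jack).
M = m(jack).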

6 Reduction to propositional inference
Theorem (Herbrand, 1930): if a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositionalized KB (so the needed subset always exists and is finite; you don't have to search forever).
Idea: for n = 0 to ∞ do
– create a propositional KB by instantiating with depth-n terms
– see if α is entailed by this KB
Problem: works if α is entailed, loops if α is not entailed.
Theorem (Turing 1936, Church 1936): entailment for FOL is semidecidable (algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every nonentailed sentence).
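As a rough illustration of "instantiating with depth-n terms", the sketch below enumerates the ground terms of the King/Greedy vocabulary (constants John and Richard, one function Father) up to a given depth; the predicate name ground_term/2 is made up for this example:

ground_term(_, john).
ground_term(_, richard).
ground_term(D, father(T)) :- D > 0, D1 is D - 1, ground_term(D1, T).

?- findall(T, ground_term(2, T), Terms).
Terms = [john, richard, father(john), father(richard), father(father(john)), father(father(richard))].

Every predicate in the KB must be instantiated with all of these terms, so the propositionalized KB grows with the depth bound.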

7 Problems with propositionalization
Propositionalization seems to generate lots of irrelevant sentences. E.g., from:
∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
∀y Greedy(y)
Brother(Richard,John)
it seems obvious that Evil(John), but propositionalization also produces many irrelevant facts such as Greedy(Richard), so it is not efficient.
With p k-ary predicates and n constants, there are p·n^k instantiations.
Therefore, we need to find another way to prove things…

8 Unification
We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y).
θ = {x/John, y/John} works.
Unify(α,β) = θ if αθ = βθ
α = Knows(John,x)   β = Knows(John,Jane)      θ = {x/Jane}
α = Knows(John,x)   β = Knows(y,OJ)           θ = {x/OJ, y/John}
α = Knows(John,x)   β = Knows(y,Mother(y))    θ = {y/John, x/Mother(John)}
α = Knows(John,x)   β = Knows(x,OJ)           θ = fail
The last pair fails only because the two sentences happen to use the same variable x; standardizing apart (renaming one of them, e.g., Knows(z17,OJ)) eliminates this overlap of variables.

13 Unification
To unify Knows(John,x) and Knows(y,z):
θ = {y/John, x/z} or θ = {y/John, x/John, z/John}
The first unifier is more general than the second.
There is a single most general unifier (MGU) that is unique up to renaming of variables: MGU = {y/John, x/z}
Definition of MGU: θ is an MGU of α and β if every unifier γ of α and β is an instance of θ, i.e., γ = θσ for some substitution σ.
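Prolog's built-in unification =/2 returns exactly this most general unifier. A quick check of the example above (names lower-cased to fit Prolog syntax):

?- knows(john, X) = knows(Y, Z).
X = Z,
Y = john.

The answer only binds X and Z to each other, i.e., the MGU {y/John, x/z}, rather than committing to a specific value such as John.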

14 Unify these {P(x,z,y), P(w,u,w), P(A,u,u)} {P(f(A),x), P(x,A)}

15-16 The unification algorithm
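AIMA presents UNIFY as pseudocode; in Prolog, unification is built in as =/2 (essentially the same algorithm, without the occurs check), so the table from the Unification slide can be checked directly:

?- knows(john, X) = knows(john, jane).
X = jane.

?- knows(john, X) = knows(Y, oj).
X = oj,
Y = john.

?- knows(john, X) = knows(Y, mother(Y)).
X = mother(john),
Y = john.

?- knows(john, X) = knows(X, oj).
false.

?- knows(john, X) = knows(Z17, oj).      % after standardizing apart
X = oj,
Z17 = john.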

17 Resolution: FOL version
Full first-order version:
l_1 ∨ ··· ∨ l_k,    m_1 ∨ ··· ∨ m_n
(l_1 ∨ ··· ∨ l_(i-1) ∨ l_(i+1) ∨ ··· ∨ l_k ∨ m_1 ∨ ··· ∨ m_(j-1) ∨ m_(j+1) ∨ ··· ∨ m_n)θ
where Unify(l_i, ¬m_j) = θ.
The two clauses are assumed to be standardized apart so that they share no variables.
For example,
¬Rich(x) ∨ Unhappy(x)
Rich(Ken)
Unhappy(Ken)
with θ = {x/Ken}
Resolution refutation:
– Apply resolution steps to CNF(KB ∧ ¬α) and derive a contradiction (the empty clause)
– complete for FOL
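The example clauses happen to form a definite clause plus a fact, so this single resolution step is exactly what Prolog's backward chaining performs. A minimal check:

unhappy(X) :- rich(X).    % the clause ¬Rich(x) ∨ Unhappy(x)
rich(ken).

?- unhappy(ken).
true.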

18 Convert to CNF: conjunctive normal form
Also known as clausal form
All quantifiers are eliminated
All implications are eliminated
Clausal form:
– each clause is a set of literals
– each literal is an atom (proposition) or a negated atom
– all clauses are universally quantified by default
– all existentials are replaced by Skolems
Steps…

19 Steps to Clausal Form
1. Eliminate implications
2. Reduce scope of negations (using De Morgan's laws and double negation)
3. Standardize variables apart
{P(x,y), Q(x,C)}, {P(x,u), G(y)} → {P(x1,y1), Q(x1,C)}, {P(x2,u), G(y2)}
4. Eliminate existential quantifiers using Skolems
5. Move all quantifiers to the left
6. Drop the prefix (the universal quantifiers)
7. Distribute OR over AND (distributive law)
8. Split conjunctions into clauses

20 Try these
Everyone who loves all animals is loved by someone.
Cats are the only animals.
Nobody loves John.
Prove that:
– John does not like some cat (but what if no cats exist? Then this strong statement cannot be proved…)
– John likes no cat (!)

21 Conversion to CNF
Everyone who loves all animals is loved by someone:
∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
1. Eliminate biconditionals and implications
∀x [¬∀y (¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
2. Move ¬ inwards: ¬∀x p ≡ ∃x ¬p, ¬∃x p ≡ ∀x ¬p
∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]

22 Conversion to CNF contd.
3. Standardize variables: each quantifier should use a different one
∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
4. Skolemize: a more general form of existential instantiation. Each existential variable is replaced by a Skolem function of the enclosing universally quantified variables:
∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
5. Drop universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
6. Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]

23 An example from the textbook (page 280, bottom of page)
English: The law says that it is a crime for an American to sell weapons to hostile nations. Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is an American.
American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
American(West)
Missile(x) ⇒ Weapon(x)
Missile(M1)
Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
Owns(Nono,M1)
Enemy(x,America) ⇒ Hostile(x)
Enemy(Nono,America)
This is a Datalog KB, since there are no function symbols.
Definite clause: A1 ∧ A2 ∧ … ∧ An ⇒ B, where A1, A2, …, B are all positive atoms (page 217)
Prove that West is a criminal: Criminal(West)
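Because the KB is a set of definite clauses with no function symbols, it runs directly as a Prolog (Datalog) program; a sketch with names lower-cased as Prolog requires:

criminal(X) :- american(X), weapon(Y), sells(X, Y, Z), hostile(Z).
american(west).
weapon(X) :- missile(X).
missile(m1).
sells(west, X, nono) :- missile(X), owns(nono, X).
owns(nono, m1).
hostile(X) :- enemy(X, america).
enemy(nono, america).

?- criminal(west).
true.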

24 Resolution proof: definite clauses

25 Example
Assume the following facts:
Steve likes all easy courses.
Science courses are hard.
All the courses in the AI department are easy.
Using resolution theorem proving, show: Steve likes some course in the AI department.
– What is wrong? The AI department may be empty!
Try this instead: Steve likes all courses in the AI department.
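A rough Prolog sketch of the issue (the course name cs520 is made up): with no courses asserted for the AI department, the "some course" query simply fails, which mirrors the observation that the department may be empty.

:- dynamic ai_dept/1.            % declared, but no courses asserted yet

likes(steve, C) :- easy(C).      % Steve likes all easy courses
easy(C) :- ai_dept(C).           % every AI-department course is easy

?- likes(steve, C).
false.

?- assertz(ai_dept(cs520)), likes(steve, C).
C = cs520.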

26 Another Example (page 298)
Jack owns a dog.
Every dog owner is an animal lover.
No animal lover kills an animal.
Either Jack or Curiosity killed the cat, who is named Tuna.
Did Curiosity kill the cat? (solution: example.html)

27 Finding Answers in resolution theorem proving
Students who take a course like that course.
John is a student.
John is taking comp221.
Finding an answer: who likes comp221?
Add the answer-literal clause ∀x (Likes(x, comp221) ⇒ Answer(x)), i.e., ¬Likes(x, comp221) ∨ Answer(x), and resolve until a clause containing only the Answer literal remains.
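In Prolog, the same question is simply a query with a variable. A minimal sketch of the comp221 example:

likes(X, C) :- student(X), takes(X, C).   % students like the courses they take
student(john).
takes(john, comp221).

?- likes(Who, comp221).
Who = john.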

28 Finding Answers
Animals can outrun any animals that they eat.
Carnivores eat other animals.
Outrunning is transitive: if x outruns y, and y outruns z, then x can outrun z.
Lions eat zebras.
Zebras can outrun dogs.
Dogs are carnivores.
Use resolution to find three animals that lions can outrun.

29 Logic programming: Prolog
Algorithm = Logic + Control
Basis: backward chaining with Horn clauses + bells & whistles
– Horn clause: a clause with at most one positive literal
Widely used in Europe and Japan (basis of the 5th Generation project)
Program = set of clauses = head :- literal1, …, literaln.
criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).
Same as criminal(X) ⇐ american(X) ∧ weapon(Y) ∧ sells(X,Y,Z) ∧ hostile(Z)
Depth-first, left-to-right backward chaining
Built-in predicates for arithmetic etc., e.g., X is Y*Z+3
Built-in predicates that have side effects (e.g., input and output predicates, assert/retract predicates)
Closed-world assumption ("negation as failure")
– e.g., given alive(X) :- not dead(X).
– alive(joe) succeeds if dead(joe) fails

30 Negation as Failure (page 357)
If you cannot prove that P holds, then "not P" is true
– called "negation as failure"
– a jump to a new conclusion
– closed-world assumption (everything true is stated in the finite KB; all other facts are false)
– e.g., given alive(X) :- not dead(X).
– alive(joe) succeeds if dead(joe) fails
Example
– John is a student
– John takes the AI course
– John takes the database course
– Prove that John does not take the English course
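A minimal sketch of this example in SWI-Prolog, where negation as failure is written \+ (older Prologs use not/1); the course atoms are just illustrative names:

student(john).
takes(john, ai).
takes(john, databases).

?- \+ takes(john, english).
true.                          % takes(john, english) is not provable, so the negation succeeds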

31 Prolog
Appending two lists to produce a third:
append([],Y,Y).
append([X|L],Y,[X|Z]) :- append(L,Y,Z).
query: append(A,B,[1,2]) ?
answers: A=[] B=[1,2]
A=[1] B=[2]
A=[1,2] B=[]
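The same two clauses also answer "forward" and list-splitting queries; a couple of illustrative runs:

?- append([1], [2,3], Z).
Z = [1, 2, 3].

?- append(X, [2], [1, 2]).
X = [1].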