Inference in first-order logic

Outline
- Reducing first-order inference to propositional inference
- Unification
- Generalized Modus Ponens
- Forward chaining
- Backward chaining
- Resolution

Universal instantiation (UI)
Every instantiation of a universally quantified sentence is entailed by it:
    ∀v α
    ------------------
    Subst({v/g}, α)
for any variable v and ground term g.
E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields:
    King(John) ∧ Greedy(John) ⇒ Evil(John)
    King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
    King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John))
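To make the Subst operation concrete, here is a minimal Python sketch (not from the slides): sentences are encoded as nested tuples and lowercase strings are treated as variables; both conventions are assumptions made for illustration.

# Minimal sketch of Subst: a sentence is a nested tuple such as
# ('⇒', ('∧', ('King','x'), ('Greedy','x')), ('Evil','x')),
# and a lowercase string is treated as a variable.
def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def subst(theta, s):
    """Apply a substitution {variable: ground term} to a sentence or term."""
    if is_variable(s):
        return theta.get(s, s)
    if isinstance(s, tuple):
        return tuple(subst(theta, part) for part in s)
    return s                      # constant or operator symbol

rule = ('⇒', ('∧', ('King', 'x'), ('Greedy', 'x')), ('Evil', 'x'))
print(subst({'x': 'John'}, rule))
# ('⇒', ('∧', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))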

Existential instantiation (EI)
For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:
    ∃v α
    ------------------
    Subst({v/k}, α)
E.g., ∃x Crown(x) ∧ OnHead(x,John) yields:
    Crown(C1) ∧ OnHead(C1,John)
provided C1 is a new constant symbol, called a Skolem constant.

Reduction to propositional inference
Suppose the KB contains just the following:
    ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
    King(John)
    Greedy(John)
    Brother(Richard,John)
Instantiating the universal sentence in all possible ways, we have:
    King(John) ∧ Greedy(John) ⇒ Evil(John)
    King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
    King(John)
    Greedy(John)
    Brother(Richard,John)
The new KB is propositionalized: the proposition symbols are King(John), Greedy(John), Evil(John), King(Richard), etc.
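As a rough sketch of this propositionalization step (again using the illustrative tuple encoding, with John and Richard as the only constants and no function symbols), one can enumerate every way of grounding the rule's variables:

from itertools import product

# Sketch: ground a universally quantified rule over all known constants.
def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def subst(theta, s):
    if is_variable(s):
        return theta.get(s, s)
    if isinstance(s, tuple):
        return tuple(subst(theta, part) for part in s)
    return s

def variables_in(s):
    if is_variable(s):
        return {s}
    if isinstance(s, tuple):
        return set().union(*[variables_in(part) for part in s])
    return set()

rule = ('⇒', ('∧', ('King', 'x'), ('Greedy', 'x')), ('Evil', 'x'))
constants = ['John', 'Richard']

vs = sorted(variables_in(rule))
for values in product(constants, repeat=len(vs)):
    print(subst(dict(zip(vs, values)), rule))
# Prints the John instance and the Richard instance of the rule.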

Reduction contd.
Every FOL KB can be propositionalized so as to preserve entailment:
- a ground sentence is entailed by the new KB iff it is entailed by the original KB.
Idea: propositionalize KB and query, apply resolution, return the result.
Problem: with function symbols, there are infinitely many ground terms,
    e.g., Father(Father(Father(John))).

Reduction contd.
Theorem (Herbrand, 1930): if a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositionalized KB.
Idea: For n = 0 to ∞ do
    create a propositional KB by instantiating with depth-n terms
    see if α is entailed by this KB
Problem: works if α is entailed, loops if α is not entailed.
Theorem (Turing 1936, Church 1936): entailment for FOL is semidecidable:
- algorithms exist that say yes to every entailed sentence,
- no algorithm exists that also says no to every nonentailed sentence.

Problems with propositionalization
Propositionalization seems to generate lots of irrelevant sentences.
Example: from
    ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
    King(John)
    ∀y Greedy(y)
    Brother(Richard,John)
it seems obvious that Evil(John) holds, but propositionalization produces lots of facts such as Greedy(Richard) that are irrelevant.
With p k-ary predicates and n constants, there are p·n^k instantiations.

Unification
We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y):
    θ = {x/John, y/John} works.
Unify(α,β) = θ if αθ = βθ.

    p              q                    θ
    Knows(John,x)  Knows(John,Jane)     {x/Jane}
    Knows(John,x)  Knows(y,OJ)          {x/OJ, y/John}
    Knows(John,x)  Knows(y,Mother(y))   {y/John, x/Mother(John)}
    Knows(John,x)  Knows(x,OJ)          fail

Standardizing apart eliminates overlap of variables, e.g., Knows(z17,OJ).

Unification
We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y):
    θ = {x/John, y/John} works.
Unification finds substitutions that make different logical expressions look identical:
- UNIFY takes two sentences and returns a unifier for them, if one exists:
  UNIFY(p,q) = θ where SUBST(θ,p) = SUBST(θ,q).
Basically, find a θ that makes the two clauses look alike.

Unification Examples
UNIFY(Knows(John,x), Knows(John,Jane)) = {x/Jane}
UNIFY(Knows(John,x), Knows(y,Bill)) = {x/Bill, y/John}
UNIFY(Knows(John,x), Knows(y,Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John,x), Knows(x,Elizabeth)) = fail
- The last example fails because x would have to be both John and Elizabeth.
- We can avoid this problem by standardizing apart; the two statements now read
  UNIFY(Knows(John,x), Knows(z,Elizabeth)),
  which is solvable: UNIFY(Knows(John,x), Knows(z,Elizabeth)) = {x/Elizabeth, z/John}.

Unification
To unify Knows(John,x) and Knows(y,z):
- we can use θ = {y/John, x/z},
- or θ = {y/John, x/John, z/John}.
The first unifier is more general than the second.
There is a single most general unifier (MGU) that is unique up to renaming of variables: MGU = {y/John, x/z}.

Unification Unification algorithm:  Recursively explore the two expressions side by side  Build up a unifier along the way  Fail if two corresponding points do not match

The unification algorithm
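The AIMA pseudocode figure is not reproduced in this transcript. The following is a rough Python sketch of the same recursive idea; the term encoding (tuples for compound terms, lowercase strings for variables) and the helper names are assumptions of the sketch, not the book's code.

def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta):
    """Return a substitution making x and y identical, or None on failure."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        return unify(x[1:], y[1:], unify(x[0], y[0], theta))
    return None

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    if is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    if occurs(var, x, theta):
        return None                      # occur check, e.g. unifying x with F(x)
    return {**theta, var: x}

def occurs(var, x, theta):
    if var == x:
        return True
    if is_variable(x) and x in theta:
        return occurs(var, theta[x], theta)
    if isinstance(x, tuple):
        return any(occurs(var, part, theta) for part in x)
    return False

print(unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane'), {}))
# {'x': 'Jane'}
print(unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y')), {}))
# {'y': 'John', 'x': ('Mother', 'y')}; following the bindings gives x = Mother(John)
print(unify(('Knows', 'John', 'x'), ('Knows', 'x', 'Elizabeth'), {}))
# None: x cannot be both John and Elizabeth until the clauses are standardized apart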

Simple Example
    Brother(x,John) ∧ Father(Henry,y) ∧ Mother(z,John)
    Brother(Richard,x) ∧ Father(y,Richard) ∧ Mother(Eleanore,x)

Generalized Modus Ponens (GMP)
    p1′, p2′, …, pn′,  (p1 ∧ p2 ∧ … ∧ pn ⇒ q)
    ------------------------------------------
                       qθ
where pi′θ = piθ for all i.
Example:
    p1′ is King(John)        p1 is King(x)
    p2′ is Greedy(y)         p2 is Greedy(x)
    θ is {x/John, y/John}    q is Evil(x)
    qθ is Evil(John)
GMP is used with a KB of definite clauses (exactly one positive literal).
All variables are assumed universally quantified.
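As a hedged illustration of this single GMP step on the slide's example (not a general implementation), one can unify each premise with its matching fact and then apply the resulting θ to q. Flat atoms, the lowercase-variable convention, and the helper names are assumptions of this sketch.

def is_var(t):
    return t[:1].islower()

def unify_flat(fact, premise, theta):
    """Extend theta so two flat atoms (Pred, arg, ...) become identical, else None."""
    if theta is None or fact[0] != premise[0] or len(fact) != len(premise):
        return None
    for s, t in zip(fact[1:], premise[1:]):
        s, t = theta.get(s, s), theta.get(t, t)      # follow existing bindings one step
        if s == t:
            continue
        if is_var(s):
            theta = {**theta, s: t}
        elif is_var(t):
            theta = {**theta, t: s}
        else:
            return None
    return theta

facts    = [('King', 'John'), ('Greedy', 'y')]       # p1', p2' (Greedy(y) is universally quantified)
premises = [('King', 'x'), ('Greedy', 'x')]          # p1, p2
q        = ('Evil', 'x')

theta = {}
for fact, premise in zip(facts, premises):
    theta = unify_flat(fact, premise, theta)
print(theta)                                          # {'x': 'John', 'y': 'John'}
print(tuple(theta.get(a, a) for a in q))              # ('Evil', 'John'), i.e. qθ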

Soundness of GMP
Need to show that p1′, …, pn′, (p1 ∧ … ∧ pn ⇒ q) ⊨ qθ provided that pi′θ = piθ for all i.
Lemma: for any sentence p, we have p ⊨ pθ by UI.
1. (p1 ∧ … ∧ pn ⇒ q) ⊨ (p1 ∧ … ∧ pn ⇒ q)θ = (p1θ ∧ … ∧ pnθ ⇒ qθ)
2. p1′, …, pn′ ⊨ p1′ ∧ … ∧ pn′ ⊨ p1′θ ∧ … ∧ pn′θ
3. From 1 and 2, qθ follows by ordinary Modus Ponens.

Storage and Retrieval
Use TELL and ASK to interact with the inference engine; these are implemented with STORE and FETCH:
- STORE(s) stores sentence s.
- FETCH(q) returns all unifiers under which the query q unifies with some sentence in the KB.
Example:
    q = Knows(John,x)
    KB: Knows(John,Jane), Knows(y,Bill), Knows(y,Mother(y))
    Result: θ1 = {x/Jane}, θ2 = {x/Bill, y/John}, θ3 = {y/John, x/Mother(John)}

Storage and Retrieval
First approach: keep one long list of all sentences in the knowledge base and attempt unification with every one of them.
- Works, but is inefficient.
We need to restrict unification attempts to sentences that have some chance of unifying, i.e., index the facts in the KB.
Predicate indexing:
- all "Knows" sentences in one bucket,
- all "Loves" sentences in another.
Use a subsumption lattice (see below).

Storage and Retrieval
Subsumption lattice:
- a child is obtained from its parent through a single substitution,
- the lattice contains all possible queries that can be unified with the stored fact.
Example lattice for the fact Knows(John,John):
    Knows(John,John)
    Knows(x,John)   Knows(John,x)   Knows(x,x)
    Knows(x,y)
This works well for small lattices, but a predicate with n arguments has a lattice of size O(2^n), and the structure of the lattice also depends on whether the fact contains repeated constants.
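A minimal sketch of predicate indexing for STORE/FETCH (illustrative names and data layout, not the book's code): facts are bucketed by predicate symbol, so a query only attempts unification against facts that share its predicate.

from collections import defaultdict

kb_index = defaultdict(list)          # predicate symbol -> list of stored facts

def store(fact):
    """STORE: file the fact (Pred, arg, ...) under its predicate symbol."""
    kb_index[fact[0]].append(fact)

def fetch_candidates(query):
    """FETCH, first stage: only these facts are ever handed to the unifier."""
    return kb_index[query[0]]

store(('Knows', 'John', 'Jane'))
store(('Knows', 'y', 'Bill'))
store(('Loves', 'Romeo', 'Juliet'))

print(fetch_candidates(('Knows', 'John', 'x')))
# [('Knows', 'John', 'Jane'), ('Knows', 'y', 'Bill')]  -- the Loves fact is never examined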

Forward Chaining
Idea:
- Start with the atomic sentences in the KB.
- Apply Modus Ponens.
- Add new atomic sentences until no further inferences can be made.
Works well for a KB consisting of Situation ⇒ Response clauses when processing newly arrived data.

Forward Chaining: First-Order Definite Clauses
A definite clause is a disjunction of literals of which exactly one is positive. Examples:
    King(x) ∧ Greedy(x) ⇒ Evil(x)
    King(John)
    Greedy(y)
First-order definite clauses can include variables; variables are assumed to be universally quantified:
    Greedy(y) means ∀y Greedy(y)
Not every KB can be converted into first-order definite clauses.

Example knowledge base The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. Prove that Col. West is a criminal

Example knowledge base contd.
... it is a crime for an American to sell weapons to hostile nations:
    American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
Nono … has some missiles, i.e., ∃x Owns(Nono,x) ∧ Missile(x):
    Owns(Nono,M1) and Missile(M1)
… all of its missiles were sold to it by Colonel West:
    Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
Missiles are weapons:
    Missile(x) ⇒ Weapon(x)
An enemy of America counts as "hostile":
    Enemy(x,America) ⇒ Hostile(x)
West, who is American …
    American(West)
The country Nono, an enemy of America …
    Enemy(Nono,America)

Forward chaining algorithm
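The FOL-FC-ASK pseudocode figure is not reproduced in this transcript. Below is a rough, naive forward-chaining sketch run on the crime KB from the previous slide; the flat-atom encoding, the lowercase-variable convention, and all function names are assumptions made for illustration.

def is_var(t):
    return t[:1].islower()

def match(premise, fact, theta):
    """Extend theta so the (possibly variable-containing) premise equals the ground fact."""
    if premise[0] != fact[0] or len(premise) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(premise[1:], fact[1:]):
        p = theta.get(p, p)
        if p == f:
            continue
        if is_var(p):
            theta[p] = f
        else:
            return None
    return theta

def satisfy(premises, theta, facts):
    """Yield every substitution extending theta that makes all premises known facts."""
    if not premises:
        yield theta
        return
    for fact in facts:
        t2 = match(premises[0], fact, theta)
        if t2 is not None:
            yield from satisfy(premises[1:], t2, facts)

def apply_subst(theta, atom):
    return (atom[0],) + tuple(theta.get(a, a) for a in atom[1:])

def forward_chain(rules, facts, query):
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            for theta in satisfy(premises, {}, facts):
                inferred = apply_subst(theta, conclusion)
                if inferred not in facts:
                    new.add(inferred)
        if not new:                       # fixed point reached (always happens for Datalog KBs)
            return query in facts
        facts |= new

rules = [
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
]
facts = [('Owns', 'Nono', 'M1'), ('Missile', 'M1'),
         ('American', 'West'), ('Enemy', 'Nono', 'America')]

print(forward_chain(rules, facts, ('Criminal', 'West')))   # True

This naive version rematches every rule on every pass; the incremental matching and indexing refinements discussed on the efficiency slide below avoid that rework.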

Forward chaining proof

Properties of forward chaining
- Sound and complete for first-order definite clauses.
- Datalog = first-order definite clauses + no functions; FC terminates for Datalog in a finite number of iterations.
- May not terminate in general if α is not entailed.
- This is unavoidable: entailment with definite clauses is semidecidable.

Efficiency of forward chaining
Incremental forward chaining: no need to match a rule on iteration k if no premise was added on iteration k-1;
- match each rule whose premise contains a newly added positive literal.
Matching itself can be expensive: database indexing allows O(1) retrieval of known facts,
- e.g., the query Missile(x) retrieves Missile(M1).
Forward chaining is widely used in deductive databases.

Hard matching example
    Diff(wa,nt) ∧ Diff(wa,sa) ∧ Diff(nt,q) ∧ Diff(nt,sa) ∧ Diff(q,nsw) ∧ Diff(q,sa) ∧
    Diff(nsw,v) ∧ Diff(nsw,sa) ∧ Diff(v,sa) ⇒ Colorable()
    Diff(Red,Blue)   Diff(Red,Green)
    Diff(Green,Red)  Diff(Green,Blue)
    Diff(Blue,Red)   Diff(Blue,Green)
Colorable() is inferred iff the CSP (map coloring of Australia) has a solution.
CSPs include 3SAT as a special case, hence matching is NP-hard.

Backward Chaining
Improves on forward chaining by not drawing irrelevant conclusions.
Alternatives to backward chaining: restrict forward chaining to a relevant set of forward rules, or rewrite the rules so that only relevant variable bindings are made:
- use a magic set.
- Example:
  Rewritten rule: Magic(x) ∧ American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
  Added fact: Magic(West)

Backward Chaining
Idea: given a query, find all substitutions that satisfy the query.
Algorithm: work on a list of goals, starting with the original query; for the first goal, find every clause in the KB whose positive literal (head) unifies with it, and add the clause's remaining literals (body) to the list of goals.

Backward chaining algorithm
    SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1, p))
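The FOL-BC-ASK pseudocode figure is not reproduced in this transcript. The following is a rough depth-first backward-chaining sketch over the same flat-atom crime KB used in the forward-chaining sketch; the encoding, the renaming scheme for standardizing apart, and all names are assumptions of the sketch.

import itertools

def is_var(t):
    return t[:1].islower()

def walk(t, theta):
    """Follow variable bindings in theta until a constant or unbound variable is reached."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def unify_flat(a, b, theta):
    if a[0] != b[0] or len(a) != len(b):
        return None
    theta = dict(theta)
    for s, t in zip(a[1:], b[1:]):
        s, t = walk(s, theta), walk(t, theta)
        if s == t:
            continue
        if is_var(s):
            theta[s] = t
        elif is_var(t):
            theta[t] = s
        else:
            return None
    return theta

fresh = itertools.count(1)

def rename(atom, suffix):
    """Standardize apart: give the rule's variables names not used anywhere else."""
    return (atom[0],) + tuple(a + suffix if is_var(a) else a for a in atom[1:])

def bc_ask(goals, theta, rules, facts):
    """Yield every substitution proving all goals (depth-first; may loop on recursive rules)."""
    if not goals:
        yield theta
        return
    goal, rest = goals[0], goals[1:]
    for fact in facts:                                   # try known facts
        t2 = unify_flat(goal, fact, theta)
        if t2 is not None:
            yield from bc_ask(rest, t2, rules, facts)
    for premises, conclusion in rules:                   # then try rules, standardized apart
        suffix = '_' + str(next(fresh))
        t2 = unify_flat(goal, rename(conclusion, suffix), theta)
        if t2 is not None:
            yield from bc_ask([rename(p, suffix) for p in premises] + rest, t2, rules, facts)

rules = [
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
]
facts = [('Owns', 'Nono', 'M1'), ('Missile', 'M1'),
         ('American', 'West'), ('Enemy', 'Nono', 'America')]

theta = next(bc_ask([('Criminal', 'x')], {}, rules, facts), None)
print(walk('x', theta) if theta is not None else 'no proof')   # West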

Backward chaining example

Properties of backward chaining
- Depth-first recursive proof search: space is linear in the size of the proof.
- Incomplete due to infinite loops; fix by checking the current goal against every goal on the stack.
- Inefficient due to repeated subgoals (both success and failure); fix by caching previous results (extra space).
- Widely used for logic programming.

Logic programming: Prolog
Algorithm = Logic + Control
Basis: backward chaining with Horn clauses + bells & whistles.
Widely used in Europe and Japan (basis of the 5th Generation project); compilation techniques yield up to 60 million LIPS (logical inferences per second).
A program is a set of clauses:
    head :- literal1, …, literaln.
    criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).
Depth-first, left-to-right backward chaining.
- Built-in predicates for arithmetic etc., e.g., X is Y*Z+3
- Built-in predicates that have side effects (e.g., input and output predicates, assert/retract predicates)
- Closed-world assumption ("negation as failure"), e.g., given alive(X) :- not dead(X). alive(joe) succeeds if dead(joe) fails.

Prolog
Appending two lists to produce a third:
    append([],Y,Y).
    append([X|L],Y,[X|Z]) :- append(L,Y,Z).
Query:
    append(A,B,[1,2]) ?
Answers:
    A=[]    B=[1,2]
    A=[1]   B=[2]
    A=[1,2] B=[]

Resolution: brief summary
Full first-order version:
    l1 ∨ ··· ∨ lk,    m1 ∨ ··· ∨ mn
    -----------------------------------------------------------------------
    (l1 ∨ ··· ∨ li-1 ∨ li+1 ∨ ··· ∨ lk ∨ m1 ∨ ··· ∨ mj-1 ∨ mj+1 ∨ ··· ∨ mn)θ
where Unify(li, ¬mj) = θ.
The two clauses are assumed to be standardized apart so that they share no variables.
For example,
    ¬Rich(x) ∨ Unhappy(x),   Rich(Ken)
    ----------------------------------
    Unhappy(Ken)
with θ = {x/Ken}.
Apply resolution steps to CNF(KB ∧ ¬α); complete for FOL.
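A minimal sketch of a single binary resolution step on the slide's example (clauses encoded as lists of signed flat atoms; the encoding and names are assumptions, and the clauses are assumed already standardized apart):

def is_var(t):
    return t[:1].islower()

def unify_flat(a, b, theta):
    """Unify two flat atoms (Pred, arg, ...); return an extended theta or None."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    theta = dict(theta)
    for s, t in zip(a[1:], b[1:]):
        s, t = theta.get(s, s), theta.get(t, t)
        if s == t:
            continue
        if is_var(s):
            theta[s] = t
        elif is_var(t):
            theta[t] = s
        else:
            return None
    return theta

def apply_subst(theta, literal):
    positive, atom = literal
    return (positive, (atom[0],) + tuple(theta.get(a, a) for a in atom[1:]))

def resolve(c1, c2):
    """Yield all resolvents of two clauses, each a list of (positive?, atom) literals."""
    for i, (pos1, atom1) in enumerate(c1):
        for j, (pos2, atom2) in enumerate(c2):
            if pos1 == pos2:
                continue                        # need one positive and one negative literal
            theta = unify_flat(atom1, atom2, {})
            if theta is None:
                continue
            rest = c1[:i] + c1[i+1:] + c2[:j] + c2[j+1:]
            yield [apply_subst(theta, lit) for lit in rest]

# ¬Rich(x) ∨ Unhappy(x)   and   Rich(Ken)
c1 = [(False, ('Rich', 'x')), (True, ('Unhappy', 'x'))]
c2 = [(True, ('Rich', 'Ken'))]
print(list(resolve(c1, c2)))
# [[(True, ('Unhappy', 'Ken'))]]  i.e. Unhappy(Ken) with θ = {x/Ken}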

Conversion to CNF
Everyone who loves all animals is loved by someone:
    ∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
1. Eliminate biconditionals and implications:
    ∀x [¬∀y ¬Animal(y) ∨ Loves(x,y)] ∨ [∃y Loves(y,x)]
2. Move ¬ inwards: ¬∀x p ≡ ∃x ¬p,  ¬∃x p ≡ ∀x ¬p
    ∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
    ∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
    ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]

Conversion to CNF contd.
3. Standardize variables: each quantifier should use a different variable
    ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
4. Skolemize: a more general form of existential instantiation. Each existential variable is replaced by a Skolem function of the enclosing universally quantified variables:
    ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
5. Drop universal quantifiers:
    [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
6. Distribute ∨ over ∧:
    [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]
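As a small sketch of step 6 only (distributing ∨ over ∧), applied to the quantifier-free sentence from step 5; the binary 'Or'/'And' tuple encoding and the function name are illustrative assumptions, not a full CNF converter:

def distribute_or_over_and(f):
    """Push 'Or' below 'And' in a binary formula tree built from nested tuples."""
    if not isinstance(f, tuple):
        return f
    op, args = f[0], [distribute_or_over_and(a) for a in f[1:]]
    if op == 'Or' and len(args) == 2:
        a, b = args
        if isinstance(a, tuple) and a[0] == 'And':
            return ('And',
                    distribute_or_over_and(('Or', a[1], b)),
                    distribute_or_over_and(('Or', a[2], b)))
        if isinstance(b, tuple) and b[0] == 'And':
            return ('And',
                    distribute_or_over_and(('Or', a, b[1])),
                    distribute_or_over_and(('Or', a, b[2])))
    return (op,) + tuple(args)

# [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x), the result of step 5:
step5 = ('Or',
         ('And', ('Animal', ('F', 'x')),
                 ('Not', ('Loves', 'x', ('F', 'x')))),
         ('Loves', ('G', 'x'), 'x'))
print(distribute_or_over_and(step5))
# An 'And' of two 'Or' clauses, matching step 6 on the slide above.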

Resolution proof: definite clauses