1
Reasoning in Artificial Intelligence
Resolution Strategies Material from: Logical Foundations of Artificial Intelligence, Genesereth & Nilsson, 1987. Original slides by: Prof. Daniel Lehmann, The Hebrew University of Jerusalem Updated by Bill Davis 2/24/2019
2
Resolution can be very inefficient…
Example of unconstrained resolution.
KB: 1. {P,Q} Δ   2. {¬P,R} Δ   3. {¬Q,R} Δ   4. {¬R} Γ
Full trace:
1. {P,Q} Δ        2. {¬P,R} Δ       3. {¬Q,R} Δ       4. {¬R} Γ
5. {Q,R} 1,2      6. {P,R} 1,3      7. {¬P} 2,4       8. {¬Q} 3,4
9. {R} 3,5        10. {Q} 4,5       11. {R} 3,6       12. {P} 4,6
13. {Q} 1,7       14. {R} 6,7       15. {P} 1,8       16. {R} 5,8
17. { } 4,9       18. {R} 3,10      19. { } 8,10      20. { } 4,11
21. {R} 2,12      22. { } 7,12      23. {R} 3,13      24. { } 8,13
25. { } 4,14      26. {R} 2,15      27. { } 7,15      28. { } 4,16
29. { } 4,18      30. { } 4,21      31. { } 4,23      32. { } 4,26
Short proof: clauses {1, 2, 3, 4, 5, 9, 17}
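To make the blow-up concrete, here is a minimal Python sketch of unconstrained propositional resolution on this KB. The encoding (literal strings with a leading "~" for negation, clauses as frozensets) and the helper names are my own, not the book's notation.

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """Every clause obtainable by resolving c1 against c2 on one literal pair."""
    return [frozenset((c1 - {lit}) | (c2 - {complement(lit)}))
            for lit in c1 if complement(lit) in c2]

def saturate(clauses, max_rounds=10):
    """Unconstrained resolution: resolve every pair of known clauses, round by round."""
    known = set(clauses)
    for _ in range(max_rounds):
        pool = list(known)
        new = {r for i, c1 in enumerate(pool) for c2 in pool[i + 1:]
               for r in resolvents(c1, c2)} - known
        if not new:
            break
        known |= new
        if frozenset() in known:   # empty clause derived
            break
    return known

kb = [frozenset(c) for c in [{"P", "Q"}, {"~P", "R"}, {"~Q", "R"}, {"~R"}]]
result = saturate(kb)
print("empty clause derived:", frozenset() in result)
# Sets collapse duplicate resolvents, so this count understates the 32-line
# trace above, but it still shows how much redundant work is generated.
print("distinct clauses generated:", len(result) - len(kb))
```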
3
Resolution Strategies: key issues
How do we choose the next two clauses to resolve? In the worst case, resolution generates a very large number of redundant and often irrelevant conclusions, and because the KB grows as resolvents are added, even more resolutions become possible. The goal is to avoid useless work by skipping unnecessary deductions, i.e., to decrease the size of the resolution graph that leads to a conclusion. Can we restrict FOL to obtain an efficient resolution strategy, trading expressivity for efficiency? We are not concerned with the order in which inferences are done, only with the size of the resolution graph and with ways of decreasing that size by eliminating useless deductions.
4
Deletion strategies eliminate unnecessary clauses from the KB so as to avoid wasted resolutions: 1. pure-literal elimination, 2. tautology elimination, 3. subsumption elimination. A deletion strategy is a restriction technique in which clauses with specified properties are eliminated before they are ever used. These deletions preserve soundness and completeness.
5
1. Pure literal elimination
Remove any clause containing a "pure literal", i.e., a literal that has no complementary instance in the database:
1. {¬P, ¬Q, R}  2. {¬P, S}  3. {¬Q, S}  4. {P}  5. {Q}  6. {¬R}
Here S is a pure literal, so clauses 2 and 3 can be thrown away. This strategy only needs to be applied to the database once; newly generated clauses need not be checked. It preserves soundness and completeness, since the empty clause cannot be derived using clauses that contain pure literals.
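A small sketch of pure-literal elimination, under the same assumed encoding as the earlier sketch (literal strings with "~" for negation, clauses as frozensets):

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def eliminate_pure(clauses):
    """Drop every clause that contains a literal whose complement never occurs."""
    all_lits = {lit for c in clauses for lit in c}
    pure = {lit for lit in all_lits if complement(lit) not in all_lits}
    return [c for c in clauses if not (c & pure)]

kb = [frozenset(c) for c in
      [{"~P", "~Q", "R"}, {"~P", "S"}, {"~Q", "S"}, {"P"}, {"Q"}, {"~R"}]]
print(eliminate_pure(kb))   # the two clauses containing the pure literal S are gone
```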
6
2. Tautology elimination
Eliminate clauses that contain identical complementary literals; such tautologies do not change the satisfiability of the KB.
Ex. 1: {P(F(A)), ¬P(F(A))} and {P(x), Q(y), ¬Q(y), R(z)} can both be eliminated.
Ex. 2: {¬P(A), P(x)} cannot be eliminated; the literals in a clause must be exact complements for tautology elimination to apply. Together with {P(A)} and {¬P(B)} the set is unsatisfiable, but without the first clause it would be satisfiable.
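A sketch of tautology elimination under the same assumed encoding; note that only exact complements count, matching Ex. 2 above:

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_tautology(clause):
    """True when some literal appears together with its exact complement."""
    return any(complement(lit) in clause for lit in clause)

def eliminate_tautologies(clauses):
    return [c for c in clauses if not is_tautology(c)]

kb = [frozenset({"P(F(A))", "~P(F(A))"}),           # tautology: dropped
      frozenset({"P(x)", "Q(y)", "~Q(y)", "R(z)"}),  # tautology: dropped
      frozenset({"~P(A)", "P(x)"})]                  # NOT a tautology: kept
print(eliminate_tautologies(kb))
```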
7
3. Subsumption elimination
Delete all subsumed clauses. A clause F subsumes a clause Y iff there is a substitution σ such that Fσ ⊆ Y.
Ex: 1. {P(x), Q(y)} subsumes 2. {P(A), Q(v), R(w)}, since the substitution σ = {x/A, y/v} makes the first a subset of the second. So we can throw away the second: whatever can be derived by resolving against 2 can also be derived by resolving against 1. The resolution process itself can produce tautologies and subsuming clauses, so we will need to keep checking for tautologies and subsumption as we perform resolutions.
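The sketch below covers only the propositional (variable-free) special case, where F subsumes Y exactly when F ⊆ Y; the first-order case additionally has to search for the substitution σ, which this code does not attempt:

```python
def eliminate_subsumed(clauses):
    """Keep only clauses not strictly subsumed by another clause in the list."""
    clauses = list(clauses)
    keep = []
    for c in clauses:
        if any(other < c for other in clauses):   # a strictly smaller clause subsumes c
            continue                               # c is subsumed: drop it
        keep.append(c)
    return keep

kb = [frozenset({"P", "Q"}), frozenset({"P", "Q", "R"}), frozenset({"~R"})]
print(eliminate_subsumed(kb))   # {"P","Q","R"} is dropped; {"P","Q"} subsumes it
```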
8
Horn Clauses. Def: A Horn clause is a clause with at most one positive literal: {¬p1, ¬p2, …, ¬pn, q}. Inference with Horn clauses can be done through forward-chaining and backward-chaining algorithms. Advantage: efficient and complete resolution strategies exist; we trade expressiveness for efficiency. Next slide: types of resolution strategies.
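A one-line check for the Horn property under the same assumed encoding:

```python
def is_horn(clause):
    """A clause is Horn when it has at most one positive (unnegated) literal."""
    return sum(1 for lit in clause if not lit.startswith("~")) <= 1

print(is_horn({"~p1", "~p2", "q"}))   # True: one positive literal
print(is_horn({"p", "q"}))            # False: two positive literals
```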
9
Resolution strategies
How to choose the next two clauses to resolve: unit resolution, input resolution, linear resolution, set-of-support resolution, directed resolution. Are these strategies sound and complete? If not, for what subset of FOL are they?
10
Unit Resolution (1). When choosing two clauses to resolve, at least one of them must contain a single literal (a unit clause). The idea: produce shorter and shorter clauses, which helps focus the search toward producing the empty clause and thereby improves efficiency.
Ex:
1. {P, Q} Δ
2. {¬P, R} Δ
3. {¬Q, R} Δ
4. {¬R} Γ
5. {¬P} 2, 4
6. {¬Q} 3, 4
7. {Q} 1, 5
8. {P} 1, 6
9. {R} 3, 7
10. { } 6, 7
11. {R} 2, 8
12. { } 5, 8
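A sketch of the unit-resolution restriction: the pair is simply rejected unless one parent is a unit clause. Encoding and helper names as in the earlier sketches (my assumptions, not the book's):

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def unit_resolvents(c1, c2):
    """Resolvents of c1 and c2, but only if at least one parent is a unit clause."""
    if len(c1) != 1 and len(c2) != 1:
        return []                                   # restriction: skip this pair
    return [frozenset((c1 - {lit}) | (c2 - {complement(lit)}))
            for lit in c1 if complement(lit) in c2]

kb = [frozenset(c) for c in [{"P", "Q"}, {"~P", "R"}, {"~Q", "R"}, {"~R"}]]
# {~R} with {~P, R} gives {~P}; {~P} with {P, Q} then gives {Q}; and so on,
# mirroring steps 5 through 12 in the trace above.
print(unit_resolvents(kb[3], kb[1]))    # [frozenset({'~P'})]
```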
11
Unit Resolution (2). Unit resolution is refutation complete for Horn clauses but incomplete in general. Ex:
1. {P, Q} Δ
2. {¬P, Q} Δ
3. {P, ¬Q} Δ
4. {¬P, ¬Q} Γ
Not a single unit resolution can be performed, since every clause has size 2. A sound inference rule is refutation complete if it can infer FALSE from every unsatisfiable set (refutation completeness is not the same as completeness): a complete inference rule can infer all logical consequences, while a refutation-complete rule can only check for logical consequence via unsatisfiability.
12
Input Resolution. At least one of the clauses being resolved at every step is a member of the initial (i.e., input) knowledge base.
1. {P,Q} Δ        2. {¬P,R} Δ       3. {¬Q,R} Δ       4. {¬R} Γ
5. {Q,R} 1,2      6. {P,R} 1,3      7. {¬P} 2,4       8. {¬Q} 3,4
9. {R} 3,5        10. {Q} 4,5       11. {R} 3,6       12. {P} 4,6
13. {Q} 1,7       14. {R} 6,7       15. {P} 1,8       16. {R} 5,8
17. { } 4,9       18. {R} 3,10      19. { } 8,10      20. { } 4,11
21. {R} 2,12      22. { } 7,12      23. {R} 3,13      24. { } 8,13
25. { } 4,14
This is a subsection of the unconstrained resolution shown on an earlier slide. Consider clauses 6 and 7: using unconstrained resolution they can be resolved to produce clause 14, but this is not an input resolution, since neither parent is a member of the initial database. The resolution of clauses 1 and 2 is an input resolution but not a unit resolution, and the resolution of clauses 6 and 7 is a unit resolution but not an input resolution. Despite this, unit resolution and input resolution are equivalent in inferential power: there is a unit refutation from a set of sentences whenever there is an input refutation, and vice versa.
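The input restriction is just a filter on candidate parent pairs; a minimal sketch, assuming the same clause encoding as before:

```python
def allowed_input_pair(c1, c2, initial_kb):
    """Input resolution: at least one parent must come from the initial KB."""
    return c1 in initial_kb or c2 in initial_kb

initial_kb = {frozenset({"P", "Q"}), frozenset({"~P", "R"}),
              frozenset({"~Q", "R"}), frozenset({"~R"})}
derived_6 = frozenset({"P", "R"})   # clause 6 of the trace, derived from 1 and 3
derived_7 = frozenset({"~P"})       # clause 7 of the trace, derived from 2 and 4
print(allowed_input_pair(derived_6, derived_7, initial_kb))              # False: blocked
print(allowed_input_pair(frozenset({"P", "Q"}), derived_7, initial_kb))  # True
```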
13
Input Resolution – Horn Clauses
Input refutation is complete for Horn clauses but incomplete in general. Ex:
1. {P,Q} Δ
2. {¬P,Q} Δ
3. {P,¬Q} Δ
4. {¬P,¬Q} Γ
To get the empty clause, one must at some point resolve {P} with {¬P} or {Q} with {¬Q}, and none of these unit clauses is in the input set.
14
Linear Resolution. At least one of the clauses being resolved at every step is either in the initial knowledge base or is an ancestor of the other clause (a generalization of input resolution). Linear resolution is refutation complete for all clauses.
Ex (KB): {P,Q}, {¬P,Q}, {P,¬Q}, {¬P,¬Q}. Linear chain of resolvents: {Q}, {P}, {¬Q}, { }.
Near parent: the last resolvent. Far parent: some other clause.
15
Set of Support Resolution
A subset Γ of a set Δ is called a "set of support" for Δ iff Δ − Γ is satisfiable. Initially, Γ = {¬G} (the negated goal). Set-of-support resolution: at least one of the clauses being resolved at every step is selected from the set of support Γ. The idea: draw new clauses only from a subset of Δ. Set-of-support refutation is complete for all clauses. Often Γ is chosen to be the clauses derived from the negated goal, so the strategy can be seen as working backwards from the goal.
16
Set of Support resolution example
1. {P, Q} Δ
2. {¬P, R} Δ
3. {¬Q, R} Δ
4. {¬R} Γ
5. {¬P} 2, 4 (add to Γ)
6. {¬Q} 3, 4 (add to Γ)
7. {Q} 1, 5 (add to Γ)
8. {P} 1, 6 (add to Γ)
9. {R} 3, 7
10. { } 6, 7
11. {R} 2, 8
12. { } 5, 8
Here {¬R} is the set of support Γ. Idea: many conclusions come from resolutions between clauses contained in a portion of the database that is known to be satisfiable, so we eliminate every resolution whose parents both come from the satisfiable part Δ.
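A sketch of set-of-support refutation following the bookkeeping above: only pairs with at least one parent in Γ are resolved, and every resolvent is added to Γ. The loop structure and names are mine:

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    return [frozenset((c1 - {lit}) | (c2 - {complement(lit)}))
            for lit in c1 if complement(lit) in c2]

def sos_refute(delta, gamma, max_rounds=20):
    """True if the empty clause is derivable under the set-of-support restriction."""
    gamma = set(gamma)
    for _ in range(max_rounds):
        new = set()
        for g in gamma:                       # one parent must be supported
            for other in gamma | set(delta):
                for r in resolvents(g, other):
                    if r == frozenset():
                        return True
                    if r not in gamma and r not in delta:
                        new.add(r)
        if not new:
            return False
        gamma |= new                          # resolvents join the set of support
    return False

delta = [frozenset(c) for c in [{"P", "Q"}, {"~P", "R"}, {"~Q", "R"}]]
gamma = [frozenset({"~R"})]                   # the negated goal is the set of support
print(sos_refute(delta, gamma))               # True
```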
17
Set of Support resolution Example
1. {P,Q} Δ        2. {¬P,R} Δ       3. {¬Q,R} Δ       4. {¬R} Γ
5. {Q,R} 1,2      6. {P,R} 1,3      7. {¬P} 2,4       8. {¬Q} 3,4
9. {R} 3,5        10. {Q} 4,5       11. {R} 3,6       12. {P} 4,6
13. {Q} 1,7       14. {R} 6,7       15. {P} 1,8       16. {R} 5,8
17. { } 4,9       18. {R} 3,10      19. { } 8,10      20. { } 4,11
21. {R} 2,12      22. { } 7,12      23. {R} 3,13      24. { } 8,13
25. { } 4,14      26. {R} 2,15      27. { } 7,15      28. { } 4,16
29. { } 4,18      30. { } 4,21      31. { } 4,23      32. { } 4,26
Compare this full unconstrained trace with the set-of-support trace on the previous slide: eliminating resolutions between two Δ clauses has a cascading effect, since their descendants are never generated either (several clauses above were not included on the previous slide because the empty clause was derived there earlier). The method would be of little use if there were no easy way of selecting the set of support; the natural choice when trying to prove conclusions from a consistent database is the negated goal (the database MUST be truly satisfiable). Each resolution then has a connection to the overall goal, which makes the proof more understandable than with other resolution strategies.
18
Ordered Resolution Each clause is treated as a linearly ordered set.
Resolution is permitted only on the first literal of each clause. The literals in the resolvent preserve the order they had in the parents, with the positive parent's literals followed by the negative parent's literals. Refutation by ordered resolution is complete for Horn clauses but incomplete in general.
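A sketch of a single ordered-resolution step, with clauses represented as tuples so literal order is preserved; helper names are mine:

```python
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def ordered_resolve(c1, c2):
    """Resolve c1 and c2 on their FIRST literals only; return None otherwise.
    The resolvent lists the positive parent's remaining literals first."""
    if not c1 or not c2 or complement(c1[0]) != c2[0]:
        return None
    pos, neg = (c1, c2) if not c1[0].startswith("~") else (c2, c1)
    merged, out = pos[1:] + neg[1:], []
    for lit in merged:
        if lit not in out:          # merge duplicate literals, keeping order
            out.append(lit)
    return tuple(out)

print(ordered_resolve(("P", "Q"), ("~P", "R")))   # ('Q', 'R'): clause 5 of the next example
print(ordered_resolve(("P", "Q"), ("~Q", "R")))   # None: Q is not the first literal of {P, Q}
```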
19
Example of Ordered Resolution
1. {P, Q} Δ
2. {¬P, R} Δ
3. {¬Q, R} Δ
4. {¬R} Γ
5. {Q, R} 1, 2
6. {R} 3, 5
7. { } 4, 6
Clauses 1 and 3 do not resolve, since Q is not the initial literal of clause 1. The conclusion is reached quickly: this example takes only 3 resolvents, while unconstrained resolution generates 24 resolvents through the same level.
20
Directed Resolution. Use ordered resolution on a knowledge base of "direction" clauses, i.e., Horn clauses with the positive literal either at the beginning or at the end of the clause, written in infix form:
{ψ} ↔ ψ
{¬φ1, …, ¬φn, ψ} ↔ φ1, …, φn => ψ
{ψ, ¬φ1, …, ¬φn} ↔ ψ <= φ1, …, φn
{¬φ1, …, ¬φn} ↔ φ1, …, φn =>
{¬φ1, …, ¬φn} ↔ <= φ1, …, φn
=> marks a forward clause; <= marks a backward clause.
21
Forward and Backward Resolution
Forward deduction: first prove φ1, …, φn, then conclude ψ; try to reach the goal starting from the rules. Backward deduction: to prove ψ, work from the goal back toward the starting facts φ1, …, φn. Directed resolution can be used in the forward, backward, or a mixed direction; which is best depends on the branching factor.
22
Forward Deduction vs. Backward Deduction
Forward deduction:
1. {¬M(x), P(x)} Δ   M(x) => P(x)
2. {M(A)} Δ          M(A)
3. {¬P(z)} Δ         P(z) =>
4. {P(A)} 1, 2       P(A)
5. { } 3, 4          =>
Backward deduction:
1. {P(x), ¬M(x)} Δ   P(x) <= M(x)
2. {M(A)} Δ          M(A)
3. {¬P(z)} Δ         <= P(z)
4. {¬M(z)} 1, 3      <= M(z)
5. { } 2, 4          <=
By making some clauses forward and others backward, we can get a mixture of forward and backward resolution.
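For the forward case, a minimal data-driven sketch specialized to the slide's unary rules (Pred1(x) => Pred2(x)) and ground facts; the tuple representation and function names are assumptions of mine:

```python
def forward_chain(rules, facts, goal_pred):
    """rules: list of (body_pred, head_pred); facts: set of (pred, constant)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for pred, const in list(facts):
                if pred == body and (head, const) not in facts:
                    facts.add((head, const))     # conclude head(const)
                    changed = True
    return {const for pred, const in facts if pred == goal_pred}

rules = [("M", "P")]                   # M(x) => P(x)
facts = {("M", "A")}                   # M(A)
print(forward_chain(rules, facts, "P"))    # {'A'}: P(A) follows, refuting {~P(z)}
```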
23
Mixed Deduction
1. {¬P(x), ¬Q(x), R(x)}   P(x), Q(x) => R(x)
2. {¬M(x), P(x)}          M(x) => P(x)
3. {Q(x), ¬N(x)}          Q(x) <= N(x)
4. {M(A)}                 M(A)
5. {M(B)}                 M(B)
6. {N(B)}                 N(B)
7. {¬R(z)}                R(z) =>
8. {P(A)}                 P(A)
9. {P(B)}                 P(B)
10. {¬Q(A), R(A)}         Q(A) => R(A)
11. {¬Q(B), R(B)}         Q(B) => R(B)
12. {¬N(A), R(A)}         N(A) => R(A)
13. {¬N(B), R(B)}         N(B) => R(B)
14. {R(B)}                R(B)
15. { }                   =>
Next slide: a forward resolution example.
24
Forward reasoning: example
Original KB:
1. Insect(x) => Animal(x)
2. Mammal(x) => Animal(x)
3. Ant(x) => Insect(x)
4. Bee(x) => Insect(x)
5. Spider(x) => Insect(x)
6. Lion(x) => Mammal(x)
7. Tiger(x) => Mammal(x)
8. Zebra(x) => Mammal(x)
Derivation:
9. Zebra(Zeke)       given
10. ¬Animal(Zeke)    negated conclusion
11. Mammal(Zeke)     8, 9
12. Animal(Zeke)     2, 11
13. { }              10, 12
Next slide: the backward resolution example and the problems it runs into on the same KB.
25
Backward reasoning: example
Original KB:
1. Animal(x) <= Insect(x)
2. Animal(x) <= Mammal(x)
3. Insect(x) <= Ant(x)
4. Insect(x) <= Bee(x)
5. Insect(x) <= Spider(x)
6. Mammal(x) <= Lion(x)
7. Mammal(x) <= Tiger(x)
8. Mammal(x) <= Zebra(x)
Derivation:
9. Zebra(Zeke)        given
10. ¬Animal(Zeke)     negated conclusion
11. ¬Insect(Zeke)     1, 10
12. ¬Mammal(Zeke)     2, 10
13. ¬Ant(Zeke)        3, 11
14. ¬Bee(Zeke)        4, 11
15. ¬Spider(Zeke)     5, 11
16. ¬Lion(Zeke)       6, 12
17. ¬Tiger(Zeke)      7, 12
18. ¬Zebra(Zeke)      8, 12
19. { }               9, 18
26
Forward and Backward Resolution
Sometimes forward resolution is more efficient; other times backward resolution is. How do we decide which method to use? We can look at the branching factor of the clauses, but this is not always accurate: the problem of deciding whether the forward or backward direction (or some mixture) is best is NP-complete.
27
Logic Programming. Based on Horn clause sentences, which limits the expressiveness of queries. Makes the closed-world assumption, expressing all beliefs as true or false (FOL also allows unknown). Treats negation as failure rather than as true logical negation. Lacks expressiveness in the equality operator, since it checks for syntactic equality rather than logical equality.
28
Sequential Constraint Satisfaction
Similar to ordered resolution. The database consists entirely of positive ground literals, and the query is posed as a conjunction of positive literals. Search for variable bindings such that, after substitution into the query, each of the resulting conjuncts is identical to a literal in the database.
29
Sequential Constraint Satisfaction Example
Query: P(x,y) ^ Carpenter(x) ^ Senator(y)
KB: P(Art, Jon), P(Ann, Jon), P(Bob, Kim), P(Bea, Kim), P(Cap, Lem), P(Coe, Lem), Carpenter(Ann), Carpenter(Cap), Senator(Jon), Senator(Kim)
Query′: {¬P(x,y), ¬Carpenter(x), ¬Senator(y), Ans(x,y)}
1. {¬P(x,y), ¬Carpenter(x), ¬Senator(y), Ans(x,y)}
2. {¬Carpenter(Art), ¬Senator(Jon), Ans(Art,Jon)}
3. {¬Carpenter(Ann), ¬Senator(Jon), Ans(Ann,Jon)}
4. {¬Carpenter(Bob), ¬Senator(Kim), Ans(Bob,Kim)}
5. {¬Carpenter(Bea), ¬Senator(Kim), Ans(Bea,Kim)}
6. {¬Carpenter(Cap), ¬Senator(Lem), Ans(Cap,Lem)}
7. {¬Carpenter(Coe), ¬Senator(Lem), Ans(Coe,Lem)}
8. {¬Senator(Jon), Ans(Ann,Jon)}
9. {¬Senator(Lem), Ans(Cap,Lem)}
10. {Ans(Ann,Jon)}
The search space here is not all that large, so the order of the literals in each clause does not have a dramatic effect.
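A sketch of the binding search just described, using the slide's facts; treating every query argument as a variable (and every database argument as a constant) is a simplification of mine:

```python
def solve(conjuncts, db, binding=None):
    """Scan conjuncts left to right, binding variables to match ground literals."""
    binding = dict(binding or {})
    if not conjuncts:
        return [binding]
    pred, args = conjuncts[0]
    answers = []
    for fact_pred, fact_args in db:
        if fact_pred != pred or len(fact_args) != len(args):
            continue
        new, ok = dict(binding), True
        for var, val in zip(args, fact_args):
            if var in new and new[var] != val:   # variable already bound elsewhere
                ok = False
                break
            new[var] = val
        if ok:
            answers.extend(solve(conjuncts[1:], db, new))
    return answers

db = [("P", ("Art", "Jon")), ("P", ("Ann", "Jon")), ("P", ("Bob", "Kim")),
      ("P", ("Bea", "Kim")), ("P", ("Cap", "Lem")), ("P", ("Coe", "Lem")),
      ("Carpenter", ("Ann",)), ("Carpenter", ("Cap",)),
      ("Senator", ("Jon",)), ("Senator", ("Kim",))]
query = [("P", ("x", "y")), ("Carpenter", ("x",)), ("Senator", ("y",))]
print(solve(query, db))    # [{'x': 'Ann', 'y': 'Jon'}], matching clause 10 above
```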
30
Consider a Larger KB. Consider a census database with the following properties (γ is a ground term), assuming the database is complete and nonredundant:
||Senator(ν)|| = 100
||Carpenter(ν)|| ≈ 10^5
||P(μ,ν)|| ≈ 10^8
||P(μ,γ)|| = 2
||P(γ,ν)|| ≈ 3
31
Cheapest First Heuristic
Consider the following literal orderings:
Query 1: P(x,y) ^ Carpenter(x) ^ Senator(y)
Query 2: Senator(y) ^ P(x,y) ^ Carpenter(x)
Query 1: enumerating P(x,y) first gives a search space of 10^8 bindings on which to test Carpenter(x), and roughly 10^5 of them survive to test Senator(y).
Query 2: enumerating Senator(y) first gives 100 bindings, each with ||P(μ,γ)|| = 2 candidate parents, for a search space of only 100 * 2 = 200 bindings on which to test Carpenter(x).
This example suggests the useful heuristic of always selecting the cheapest literal first. But is this always the best choice?
32
Cheapest First Counterexample
Cardinalities: ||P(ν)|| = 1000, ||Q(ν)|| = 2000, ||R(μ,ν)|| = 100,000, ||R(γ,ν)|| = 100, ||R(ν,γ)|| = 10.
Cheapest first: P(x) is smallest, so start with it; R(x,y) is next smallest once x is known; finish with Q(y) by default. Search space for Q(y): 1000 * 100 = 100,000.
However, try starting with Q(y), followed by R(x,y) and then P(x): the search space for P(x) is only 2000 * 10 = 20,000.
33
Finding the Optimal Ordering
Enumerate all possible combinations? Obviously bad, since there are n! possible orderings. A better approach uses the adjacency theorem.
34
Adjacency Theorem. Given a set of literals l1, …, ln, li^j is defined to be the literal obtained from li by substituting ground terms (γ) for the variables that appear in l1, …, lj.
Query: P(x) ^ Q(x,y) ^ R(x,y)
P(x)^0 → P(x)
Q(x,y)^0 → Q(x,y)
Q(x,y)^1 → Q(γ,y)
R(x,y)^0 → R(x,y)
R(x,y)^1 → R(γ,y)
R(x,y)^2 → R(γ1,γ2)
35
Adjacency Theorem
Theorem: If l1, …, ln is an optimal literal ordering, then a cost condition on each adjacent pair li, li+1 holds for all i between 1 and n − 1.
Corollary 1: The most expensive conjunct should never be done first.
Corollary 2: Given a conjunct sequence of length two, the less expensive conjunct should always be done first.
36
Adjacency Theorem Example: Constraint: ||P(ν)|| = 1000 ||Q(ν)|| = 2000
So… First literal can be: P(x) or Q(y)
37
Adjacency Theorem Example: Constraint: ||P(ν)|| = 1000 ||Q(ν)|| = 2000
So… First two literals can be: P(x), R(x,y) or Q(y), R(x,y)
38
Adjacency Theorem
All possible orderings (the last literal is filled in by default):
||P(x), Q(y), R(x,y)|| = 2,000,000
||P(x), R(x,y), Q(y)|| = 100,000
||Q(y), P(x), R(x,y)|| = 2,000,000
||Q(y), R(x,y), P(x)|| = 20,000
||R(x,y), P(x), Q(y)|| = 100,000
||R(x,y), Q(y), P(x)|| = 100,000
Under the adjacency theorem only 2 orderings need to be searched, P(x), R(x,y), Q(y) and Q(y), R(x,y), P(x), versus 3! = 6 in general (n! for n literals). The number of orderings searched is given by a recursive function G(n, d), where n is the number of literals and d is the number of remaining literals that cannot appear as the next literal because of the adjacency restriction; its definition has separate cases for n = d, for n = 1 with d = 0, and otherwise.
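As a check on the table, the sketch below enumerates all 3! orderings and scores each as the product of estimated candidate counts, where a conjunct whose variables are already bound contributes a factor of 1. The cardinalities are the slide's; the cost model as coded is my reading of them:

```python
from itertools import permutations

def candidates(conjunct, bound):
    """Estimated number of candidate bindings, given which variables are bound."""
    if conjunct == "P":                        # P(x)
        return 1 if "x" in bound else 1000
    if conjunct == "Q":                        # Q(y)
        return 1 if "y" in bound else 2000
    if conjunct == "R":                        # R(x,y)
        if "x" in bound and "y" in bound:
            return 1
        if "x" in bound:
            return 100                         # ||R(gamma, nu)||
        if "y" in bound:
            return 10                          # ||R(nu, gamma)||
        return 100_000                         # ||R(mu, nu)||

VARS = {"P": {"x"}, "Q": {"y"}, "R": {"x", "y"}}

def cost(ordering):
    bound, total = set(), 1
    for c in ordering:
        total *= candidates(c, bound)
        bound |= VARS[c]
    return total

for ordering in permutations("PQR"):
    print(ordering, cost(ordering))
# Best ordering: ('Q', 'R', 'P') with cost 20,000, matching the table above.
```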
39
Adjacency Theorem Figures
n     G(n,0)    n!
1     1         1
2     1         2
3     2         6
4     5         24
10    50,521    3,628,800
40
Rule-based systems. A variety of theorem provers and rule-based systems have been programmed. Most restrict the type of rules and clauses that can be input to the system; most are refutation-based; and most provide some control over the search strategy to be used: backward, forward, or cutting entire branches of the search tree. The most successful to date is PROLOG (PROgrammation LOGique = logic programming).