FOL resolution strategies Tuomas Sandholm Carnegie Mellon University Computer Science Department [Finish reading Russell & Norvig Chapter 9 if you haven’t yet]

Propositional logic is too weak a representational language:
- Too many propositions to handle, and the truth table has 2^n rows. E.g., in the wumpus world, the simple rule "don't go forward if the wumpus is in front of you" requires 64 rules (16 squares x 4 orientations for the agent).
- Hard to deal with change: propositions might be true at some times but not at others. We need a proposition P_i^t for each time step, because one should not always forget what held in the past (e.g., where the agent came from); we don't know the number of time steps, and we need time-dependent versions of the rules.
- Hard to identify "individuals", e.g., Mary, 3.
- Cannot directly talk about properties of individuals or relations between individuals, e.g., Tall(bill).
- Generalizations and patterns cannot easily be represented, e.g., "all triangles have 3 sides."

Resolution in FOL via search
Resolution can be viewed as the bottom-up construction (using search) of a proof tree. The search strategy prescribes
– which pair of clauses to pick for resolution at each point, and
– which literals from those clauses to unify.

Resolution strategies
A strategy is complete if it is guaranteed to find the empty clause whenever it is entailed.
Level 0 clauses are the original ones. Level k clauses are the resolvents of two clauses, one of which is from level k-1 and the other from an earlier level.
Breadth-first
– Compute all level 1 clauses, then level 2 clauses, …
– Complete, but inefficient
Set-of-support
– At least one parent clause must be from the negation of the goal or one of the descendants of such a goal clause
– Complete (assuming all possible set-of-support clauses are derived)
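As an illustration, here is a minimal Python sketch of a resolution refutation loop under the set-of-support restriction. It handles ground clauses only (no unification), clauses are frozensets of literal strings with "~" marking negation, and all function names are illustrative rather than from any standard library.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of ground clauses c1, c2 (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def set_of_support_resolution(axioms, negated_goal):
    """Try to derive the empty clause; at least one parent of every
    resolution step is drawn from the set of support (the negated goal
    clauses and their descendants)."""
    sos = set(negated_goal)
    usable = set(axioms) | sos
    while True:
        new = set()
        for c1 in sos:
            for c2 in usable:
                for r in resolve(c1, c2):
                    if not r:              # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= usable:                  # no new clauses: cannot refute
            return False
        sos |= new
        usable |= new

# Example: KB = {P, ~P v Q}; to prove Q, start from the negated goal {~Q}.
print(set_of_support_resolution(
    [frozenset({"P"}), frozenset({"~P", "Q"})],
    [frozenset({"~Q"})]))                  # True: Q is entailed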

Resolution strategies…
Unit resolution
– At least one parent clause must be a unit clause, i.e., contain a single literal
– Not complete (but complete for Horn clause KBs)
– Unit preference speeds up resolution drastically in practice
Input resolution
– At least one parent from the set of original clauses (axioms and negation of the goal)
– Not complete (but complete for Horn clause KBs)
Linear resolution (generalization of input resolution)
– Allow P and Q to be resolved together if P is in the original KB or P is an ancestor of Q in the proof tree
– Complete for FOL
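A hedged sketch of how unit preference can be layered on top of the loop above: when choosing which pair of clauses to resolve next, try pairs that contain a unit clause first, since the resolvent is then no longer than the non-unit parent. The helper name is illustrative.

def unit_preference_order(candidate_pairs):
    """Order candidate (c1, c2) clause pairs so that pairs containing a
    unit clause (a single literal) are tried first."""
    return sorted(candidate_pairs, key=lambda pair: min(len(c) for c in pair))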

Subsumption
Eliminate sentences that are more specific than existing ones. E.g., if P(x) is in the KB, then do not add P(A) or P(A) V Q(B).
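A small sketch of a subsumption filter. Full first-order subsumption checks containment under a substitution (as in the P(x) example above); this sketch only handles the simpler case where literals match syntactically, and the helper names are illustrative.

def subsumes(c, d):
    """Clause c subsumes clause d if every literal of c occurs in d
    (syntactic / ground case only), so d is logically weaker."""
    return set(c) <= set(d)

def add_clause(kb, new):
    """Add `new` to the KB unless it is subsumed; drop clauses it subsumes."""
    if any(subsumes(c, new) for c in kb):
        return kb
    kb = [c for c in kb if not subsumes(new, c)]
    return kb + [new]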

Search I Tuomas Sandholm Carnegie Mellon University Computer Science Department [Read Russell & Norvig Sections (Also read Chapters 1 and 2 if you haven’t already.)]

Search I
Goal-based agent (problem-solving agent).
Goal formulation (from preferences): Romania example, Arad → Bucharest.
Problem formulation: deciding what actions & states to consider, e.g., not "move leg 2 degrees right."
No map vs. map: with no map, physical search; with a map, deliberative search.

Search I
"Formulate, Search, Execute" (sometimes interleave search & execution).
For now we assume full observability, i.e., known state and known effects of actions.
Data type problem:
Initial state (perhaps an abstract characterization; vs. partial observability, where the initial state is a set)
Operators
Goal-test (maybe many goals)
Path-cost-function
Knowledge representation matters (mutilated chess board example). It can make a huge speed difference in integer programming, e.g., edge versus cycle formulation in kidney exchange.
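A minimal Python sketch of this problem data type, assuming full observability; the attribute and method names are illustrative, not a fixed API. Later sketches in this transcript reuse it.

class Problem:
    """Initial state, operators, goal test, and path-cost function."""
    def __init__(self, initial_state, operators, goal_states, step_cost):
        self.initial_state = initial_state
        self.operators = operators            # state -> iterable of (op, next_state)
        self.goal_states = set(goal_states)   # there may be many goal states
        self.step_cost = step_cost            # (state, op, next_state) -> cost

    def goal_test(self, state):
        return state in self.goal_states

    def successors(self, state):
        for op, s in self.operators(state):
            yield op, s, self.step_cost(state, op, s)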

Search I
Example problems demonstrated in terms of the problem definition.
I. 8-puzzle (the general class is NP-complete)
How to model operators? (moving tiles vs. moving the blank)
Path cost = 1 per move
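A sketch of the 8-puzzle successor function with operators modeled as moving the blank rather than the tiles; the tuple encoding and operator names are illustrative, and each move has path cost 1.

def puzzle8_successors(state):
    """state is a tuple of 9 entries in row-major order; 0 marks the blank."""
    moves = {"up": -3, "down": 3, "left": -1, "right": 1}
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for op, delta in moves.items():
        if (op == "up" and row == 0) or (op == "down" and row == 2) or \
           (op == "left" and col == 0) or (op == "right" and col == 2):
            continue                          # move would leave the board
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield op, tuple(new), 1               # operator, next state, step cost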

Search I
II. 8-queens (actually, even the general class with n queens has an efficient solution, so search would not be the method of choice). Path cost = 0: in this application we are interested in a node, not a path.
Incremental formulation (constructive search):
States: any arrangement of 0 to 8 queens on the board
Ops: add a queen to any square
# sequences = 64^8
Improved incremental formulation:
States: any arrangement of 0 to 8 queens on the board with none attacked
Ops: place a queen in the left-most empty column s.t. it is not attacked by any other queen
# sequences = 2057
Complete-state formulation (iterative improvement):
States: arrangement of 8 queens, 1 in each column
Ops: move any attacked queen to another square in the same column
[Figure: almost a solution to the 8-queens problem]
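The 2057 figure for the improved incremental formulation can be checked with a short sketch that enumerates every reachable arrangement, counting the empty board as well; the function names are illustrative.

def attacks(col_a, row_a, col_b, row_b):
    return row_a == row_b or abs(row_a - row_b) == abs(col_a - col_b)

def count_arrangements(rows=()):
    """rows[i] is the row of the queen placed in column i (left-most first)."""
    col = len(rows)
    total = 1                       # count the current arrangement itself
    if col == 8:
        return total
    for row in range(8):
        if all(not attacks(c, r, col, row) for c, r in enumerate(rows)):
            total += count_arrangements(rows + (row,))
    return total

print(count_arrangements())         # 2057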

Search I
III. Rubik's cube: ~10^19 states
IV. Cryptarithmetic: FORTY + TEN + TEN = SIXTY
V. Real-world problems:
1. Routing (robots, vehicles, salesman)
2. Scheduling & sequencing
3. Layout (VLSI, advertisement, mobile phone link stations)
4. Winner determination in combinatorial auctions
5. Which combination of cycles to accept in kidney exchange?
…

Data type node:
State
Parent-node
Operator
Depth
Path-cost
Fringe = frontier = open list (as a queue)
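A sketch of this node data type as a Python dataclass; the solution() helper, which walks back up the parent links, is an illustrative extra.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    operator: Any = None               # the action that generated this node
    depth: int = 0
    path_cost: float = 0.0

    def solution(self):
        """Operators applied along the path from the root to this node."""
        node, ops = self, []
        while node.parent is not None:
            ops.append(node.operator)
            node = node.parent
        return list(reversed(ops))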

Goodness of a search strategy:
Completeness
Time complexity
Space complexity
Optimality of the solution found (path cost = domain cost)
Total cost = domain cost + search cost

Uninformed vs. informed search: an uninformed strategy can only distinguish goal states from non-goal states.

Breadth-First Search
function BREADTH-FIRST-SEARCH (problem) returns a solution or failure
  return GENERAL-SEARCH (problem, ENQUEUE-AT-END)
[Figure: breadth-first search tree after 0, 1, 2 and 3 node expansions]
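A Python sketch of this, reusing the Problem and Node sketches above: GENERAL-SEARCH with new nodes enqueued at the end of the open list (a FIFO queue).

from collections import deque

def breadth_first_search(problem):
    fringe = deque([Node(problem.initial_state)])        # open list
    while fringe:
        node = fringe.popleft()                          # FIFO: oldest node first
        if problem.goal_test(node.state):
            return node
        for op, s, cost in problem.successors(node.state):
            fringe.append(Node(s, parent=node, operator=op,
                               depth=node.depth + 1,
                               path_cost=node.path_cost + cost))
    return None                                          # failure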

Breadth-First Search …
Max 1 + b + b^2 + … + b^d nodes (d is the depth of the shallowest goal)
- Complete
- Exponential time & memory: O(b^d)
- Finds an optimum if the path cost is a non-decreasing function of the depth of the node.

Uniform-Cost Search
Insert nodes into the open list in ascending order of g(n).
Finds an optimum if the cost of a path never decreases as we go along the path: g(SUCCESSORS(n)) ≥ g(n), which holds whenever operator costs are ≥ 0. If this does not hold, nothing but an exhaustive search will find the optimal solution.
A goal node is inserted into the open list rather than returned immediately; otherwise the cheapest path to a goal may not be found.
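A corresponding uniform-cost sketch, again reusing the Problem and Node sketches: the open list is a priority queue ordered by g(n), and a goal is only recognized when it is popped for expansion, as noted above.

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                                        # tie-breaker for equal g
    fringe = [(0.0, next(tie), Node(problem.initial_state))]
    while fringe:
        g, _, node = heapq.heappop(fringe)
        if problem.goal_test(node.state):                # test at expansion time
            return node
        for op, s, cost in problem.successors(node.state):
            child = Node(s, parent=node, operator=op,
                         depth=node.depth + 1, path_cost=g + cost)
            heapq.heappush(fringe, (child.path_cost, next(tie), child))
    return None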

Depth-First Search
function DEPTH-FIRST-SEARCH (problem) returns a solution or failure
  return GENERAL-SEARCH (problem, ENQUEUE-AT-FRONT)
Time O(b^m) (m is the max depth of the space)
Space O(bm) !
Not complete (m may be ∞), e.g., grid search in one direction
Not optimal
Alternatively, a recursive implementation can be used.
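The same loop as the breadth-first sketch, but with the open list used as a stack (enqueue at the front), gives a depth-first sketch.

def depth_first_search(problem):
    fringe = [Node(problem.initial_state)]               # open list as a stack
    while fringe:
        node = fringe.pop()                              # LIFO: newest node first
        if problem.goal_test(node.state):
            return node
        for op, s, cost in problem.successors(node.state):
            fringe.append(Node(s, parent=node, operator=op,
                               depth=node.depth + 1,
                               path_cost=node.path_cost + cost))
    return None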

Depth-Limited Search
- Depth limit in the algorithm, or
- Operators that incorporate a depth limit
L = depth limit
Complete if L ≥ d (d is the depth of the shallowest goal)
Not optimal (even if one continues the search after the first solution has been found, because an optimal solution may not be within the depth limit L)
O(b^L) time, O(bL) space
The diameter of the search space, if known, gives a good depth limit.
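A recursive depth-limited sketch that distinguishes ordinary failure from a cutoff, so the caller can tell whether a larger limit might still help; CUTOFF is an illustrative sentinel, and the Node and Problem sketches above are reused.

CUTOFF = "cutoff"

def depth_limited_search(problem, limit):
    def recurse(node):
        if problem.goal_test(node.state):
            return node
        if node.depth == limit:
            return CUTOFF                    # do not expand below the limit
        cutoff_occurred = False
        for op, s, cost in problem.successors(node.state):
            child = Node(s, parent=node, operator=op,
                         depth=node.depth + 1,
                         path_cost=node.path_cost + cost)
            result = recurse(child)
            if result is CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return result
        return CUTOFF if cutoff_occurred else None
    return recurse(Node(problem.initial_state))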

Iterative Deepening Search

Complete, optimal, O(bd) space. What about run time?
Breadth-first search: 1 + b + b^2 + … + b^(d-1) + b^d
E.g., b=10, d=5: 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
Iterative deepening search: (d+1)*1 + d*b + (d-1)*b^2 + … + 2*b^(d-1) + 1*b^d
E.g., 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
In fact, the run time is asymptotically optimal: O(b^d). We prove this next…
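Iterative deepening is then just a loop over the depth-limited sketch above with limits 0, 1, 2, …

from itertools import count

def iterative_deepening_search(problem):
    for limit in count():
        result = depth_limited_search(problem, limit)
        if result is not CUTOFF:
            return result      # a solution node, or None if the space is exhausted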

Iterative Deepening Search…
If the branching factor is large, most of the work is done at the deepest level of the search, so iterative deepening does not cost much relative to breadth-first search.
Conclusion: iterative deepening is preferred when the search space is large and the depth of the (optimal) solution is unknown. It is not preferred if the branching factor is tiny (near 1).

Bi-Directional Search
Time O(b^(d/2))

Bi-Directional Search …
Need operators that compute predecessors.
What if there are multiple goals? If there is an explicit list of goal states, then we can apply a predecessor function to the state set, just as we apply the successor function in multiple-state forward search. If there is only a description of the goal set, it MAY be possible to figure out the possible descriptions of "sets of states that would generate the goal set".
Efficient way to check when the searches meet: a hash table (there is a one-step issue if only one side is stored in the table).
Decide what kind of search (e.g., breadth-first) to use in each half.
Optimal, complete, O(b^(d/2)) time. O(b^(d/2)) space (even with iterative deepening), because the nodes of at least one of the searches have to be stored to check for matches.
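A bidirectional breadth-first sketch for a single explicit goal state, reusing the Node sketch above. It assumes the problem also exposes a predecessors(state) method with the same (op, state, cost) interface as successors (the extra requirement noted above), and it stores both visited sets in hash tables so the meeting test is constant time. It returns the pair of meeting nodes, from which the two half-paths can be spliced; this is a sketch of the idea, not a tuned implementation.

from collections import deque

def bidirectional_search(problem, goal_state):
    fwd = {problem.initial_state: Node(problem.initial_state)}   # forward visited
    bwd = {goal_state: Node(goal_state)}                          # backward visited
    fwd_fringe = deque(fwd.values())
    bwd_fringe = deque(bwd.values())
    while fwd_fringe and bwd_fringe:
        for fringe, table, other, expand in (
                (fwd_fringe, fwd, bwd, problem.successors),
                (bwd_fringe, bwd, fwd, problem.predecessors)):    # assumed method
            node = fringe.popleft()
            if node.state in other:                               # the searches meet
                return node, other[node.state]
            for op, s, cost in expand(node.state):
                if s not in table:
                    child = Node(s, parent=node, operator=op,
                                 depth=node.depth + 1,
                                 path_cost=node.path_cost + cost)
                    table[s] = child
                    fringe.append(child)
    return None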

Time, Space, Optimal, Complete?
b = branching factor, d = depth of the shallowest goal state, m = maximum depth of the search space, l = depth limit of the algorithm
Breadth-first: O(b^d) time, O(b^d) space, optimal (for non-decreasing path cost in depth), complete
Depth-first: O(b^m) time, O(bm) space, not optimal, not complete
Depth-limited: O(b^l) time, O(bl) space, not optimal, complete if l ≥ d
Iterative deepening: O(b^d) time, O(bd) space, optimal, complete
Bi-directional: O(b^(d/2)) time, O(b^(d/2)) space, optimal, complete

Avoiding repeated states
More effective checks incur more computational overhead.
With loops, the search tree may even become infinite.