CSE 573: Artificial Intelligence, Autumn 2012. Heuristics & Pattern Databases for Search. With many slides from Dan Klein, Richard Korf, Stuart Russell, Andrew Moore, & UW Faculty. Dan Weld

Todo
- Trim and clean up planning intro slides
- Add more details
- Delete preconditions
- Delete negative effects
- Delete subset of literals

Recap: Search Problem
- States: configurations of the world
- Successor function: maps a state to a list of (state, action, cost) triples
- Start state
- Goal test

General Graph Search Paradigm

function tree-search(root-node)
  fringe ← successors(root-node)
  explored ← empty
  while (not empty(fringe)) {
    node ← remove-first(fringe)
    state ← state(node)
    if goal-test(state) return solution(node)
    explored ← explored ∪ {node}
    fringe ← fringe ∪ (successors(node) − explored)
  }
  return failure
end tree-search

Fringe = priority queue, ranked by heuristic. Often: f(x) = g(x) + h(x)
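The paradigm above can be sketched as a runnable A* graph search. This is my own minimal Python rendering, not code from the slides: the name `a_star` and the convention that `successors(state)` yields (action, next_state, step_cost) triples are illustrative choices.

```python
import heapq
import itertools

def a_star(start, successors, goal_test, h):
    """A* graph search: fringe is a priority queue ranked by f(x) = g(x) + h(x).

    successors(state) returns (action, next_state, step_cost) triples.
    Returns (list_of_actions, total_cost), or None if no solution exists.
    """
    tie = itertools.count()                 # tie-breaker so states are never compared
    fringe = [(h(start), 0, next(tie), start, [])]
    best_g = {}                             # cheapest cost found so far per state
    while fringe:
        f, g, _, state, actions = heapq.heappop(fringe)
        if goal_test(state):
            return actions, g
        if state in best_g and best_g[state] <= g:
            continue                        # already expanded via a path at least as cheap
        best_g[state] = g
        for action, nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h(nxt), g2, next(tie), nxt, actions + [action]))
    return None
```

For example, searching a number line from 0 to 5 with unit-cost steps and h(x) = |5 − x| returns a five-step plan of cost 5.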

Which Algorithm? Uniform cost search

Which Algorithm? A* using Manhattan

Which Algorithm? Best-first search using Manhattan

Heuristics: it’s what makes search actually work

Admissible Heuristics
- f(x) = g(x) + h(x)
- g: cost so far
- h: underestimate of remaining cost
Where do heuristics come from?
© Daniel S. Weld

Relaxed Problems
- Derive an admissible heuristic from the exact cost of a solution to a relaxed version of the problem
- For transportation planning, relax the requirement that the car stay on the road → Euclidean distance
- For blocks world, distance = # move operations; heuristic = number of misplaced blocks
- What is the relaxed problem?
# out of place = 2, true distance to goal = 3
Cost of optimal solution to relaxed problem ≤ cost of optimal solution for real problem

What’s being relaxed?

Example: Pancake Problem
- Goal: pancakes in size order
- Action: flip over the top n pancakes
- Cost: number of pancakes flipped

Example: Pancake Problem

State space graph with costs as weights

Example: Heuristic Function
Heuristic: h(x) = the largest pancake that is still out of place
What is being relaxed?
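As a concrete sketch of this slide (the function names are my own, not from the lecture), the pancake moves and the relaxed heuristic can be written as:

```python
def flip(state, n):
    """Flip over the top n pancakes; state is a tuple, top first, sizes 1..k."""
    return tuple(reversed(state[:n])) + state[n:]

def pancake_successors(state):
    # Flipping the top n pancakes costs n, the number of pancakes flipped.
    return [(n, flip(state, n), n) for n in range(2, len(state) + 1)]

def h_largest_out_of_place(state):
    """The slide's heuristic: the size of the largest pancake still out of
    place.  Some flip must eventually place it, and that flip moves at least
    that many pancakes, so the estimate never exceeds the true cost."""
    goal = tuple(sorted(state))          # smallest pancake on top
    wrong = [p for p, g in zip(state, goal) if p != g]
    return max(wrong) if wrong else 0
```

Here h((2, 1, 3, 4)) = 2: pancake 2 is the largest one out of position.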

Counterfeit Coin Problem
- Twelve coins
- One is counterfeit: maybe heavier, maybe lighter
- Objective: which coin is the phony, and is it heavier or lighter?
- Max three weighings

Coins
- State = coin possibilities
- Action = weighing two subsets of coins
- Heuristic?
- What is being relaxed?

Traveling Salesman Problem
What can be relaxed? A path is:
1) a graph
2) degree 2 at every vertex (except the ends, which have degree 1)
3) connected
Kruskal's algorithm: greedily add cheapest useful edges

Traveling Salesman Problem
What can be relaxed? Relax the degree constraint (assume we can teleport back to past nodes on the path) → minimum spanning tree
Kruskal's algorithm: O(n²) (greedily add cheapest useful edges)
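A minimal sketch of this relaxation (my own code, using a simple union-find): dropping the degree-2 constraint leaves "connected and spanning", whose optimal solution is the minimum spanning tree. Deleting any one edge from a tour yields a spanning tree, so the MST cost never exceeds the optimal tour cost, making it an admissible heuristic.

```python
def mst_cost(n, edges):
    """Kruskal's algorithm: greedily add the cheapest 'useful' edges.
    edges is a list of (cost, u, v) with vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, used = 0, 0
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # useful: joins two separate components
            parent[ru] = rv
            total += cost
            used += 1
            if used == n - 1:
                break
    return total
```

On four cities at the corners of a unit square, the MST costs 3 while the cheapest tour costs 4, so the relaxation gives a valid lower bound.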

Traveling Salesman Problem
What can be relaxed? Relax the connectedness constraint → cheapest degree-2 graph
Optimal assignment: O(n³)

Automated Generation of Relaxed Problems
- Need to reason about search problems
- Represent search problems in a formal language

Planning
"I have a plan - a plan that cannot possibly fail." - Inspector Clouseau

Classical Planning
Given:
- a logical description of the initial situation,
- a logical description of the goal conditions, and
- a logical description of a set of possible actions,
Find:
- a sequence of actions (a plan of actions) that brings us from the initial situation to a situation in which the goal conditions hold.
© D. Weld, D. Fox

Example: Blocks World
[Figure: initial state (C on A; B on the table) and goal state (A on B on C)]

Planning Input: State Variables/Propositions
Types: block --- a, b, c
(on-table a) (on-table b) (on-table c)
(clear a) (clear b) (clear c)
(arm-empty)
(holding a) (holding b) (holding c)
(on a b) (on a c) (on b a) (on b c) (on c a) (on c b)
No. of state variables = 16
No. of states = 2^16
No. of reachable states = ?

Planning Input: Actions
- pickup a b, pickup a c, …
- place a b, place a c, …
- pickup-table a, pickup-table b, …
- place-table a, place-table b, …
Total: 6 + 6 + 3 + 3 = 18 "ground" actions
Total: 4 action schemata

Planning Input: Actions (contd)

:action pickup ?b1 ?b2
  :precondition (on ?b1 ?b2) (clear ?b1) (arm-empty)
  :effect (holding ?b1) (not (on ?b1 ?b2)) (clear ?b2) (not (arm-empty))

:action pickup-table ?b
  :precondition (on-table ?b) (clear ?b) (arm-empty)
  :effect (holding ?b) (not (on-table ?b)) (not (arm-empty))

Planning Input: Initial State
(on-table a) (on-table b) (arm-empty) (clear c) (clear b) (on c a)
All other propositions are false: not mentioned → assumed false ("closed world assumption")

Planning Input: Goal
(on-table c) AND (on b c) AND (on a b)
Is this a state? In planning, a goal is a set of states: like the goal test in problem-solving search, but specified declaratively (in logic) rather than with code.

Specifying a Planning Problem
- Description of initial state of the world: a set of propositions
- Description of the goal: e.g., a logical conjunction; any world satisfying the conjunction is a goal
- Description of available actions

Forward State-Space Search
- Initial state: set of positive ground literals (CWA: literals not appearing are false)
- Actions: applicable if preconditions are satisfied; add positive effect literals, remove negative effect literals
- Goal test: does the state logically satisfy the goal?
- Step cost: typically 1
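Under these STRIPS semantics, a state is just a set of true ground literals and progression is set arithmetic. A minimal sketch follows; the tuple encoding of literals and the `pre`/`add`/`del` field names are my own convention, and the example action grounds the pickup-table schema from the slides for block b.

```python
def applicable(state, action):
    """An action applies when all its preconditions hold in the state
    (closed-world assumption: anything not in the set is false)."""
    return action['pre'] <= state

def apply_action(state, action):
    # Remove negative-effect literals, then add positive-effect literals.
    return (state - action['del']) | action['add']

# pickup-table ?b grounded for block b, mirroring the schema on the slides:
pickup_table_b = {
    'pre': {('on-table', 'b'), ('clear', 'b'), ('arm-empty',)},
    'add': {('holding', 'b')},
    'del': {('on-table', 'b'), ('arm-empty',)},
}

# The initial state from the blocks-world slides:
s0 = frozenset({('on-table', 'a'), ('on-table', 'b'), ('arm-empty',),
                ('clear', 'c'), ('clear', 'b'), ('on', 'c', 'a')})
```

Applying `pickup_table_b` to `s0` yields a state where (holding b) is true and (on-table b) and (arm-empty) are no longer true.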

Heuristics for State-Space Search
- Count the number of false goal propositions in the current state. Admissible? NO
- Subgoal independence assumption:
  - cost of solving the conjunction is the sum of the costs of solving each subgoal independently
  - optimistic: ignores negative interactions
  - pessimistic: ignores redundancy
  - Admissible? No
  - Can you make this admissible?
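The goal-count heuristic from the first bullet is one line of code. The blocks-world scenario below (tuple-encoded literals, my own convention) illustrates why it is inadmissible: a single place-style action of the kind in the slides' schemata can achieve two goal propositions at once, so the count of 2 exceeds the true cost of 1.

```python
def goal_count(state, goal):
    """Count the goal propositions still false in the current state."""
    return len(goal - state)

# Inadmissibility in the blocks world: from "holding b, c clear", one
# (place b c)-style action achieves BOTH (on b c) and (arm-empty), so the
# true cost is 1 while the goal count estimates 2.
state = frozenset({('holding', 'b'), ('clear', 'c')})
goal = frozenset({('on', 'b', 'c'), ('arm-empty',)})
```

Here goal_count(state, goal) is 2, overestimating the one-action solution.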

Heuristics for State-Space Search (contd)
- Delete all preconditions from actions, solve the easy relaxed problem, and use the solution length. Admissible? YES

:action pickup-table ?b
  :precondition (and (on-table ?b) (clear ?b) (arm-empty))
  :effect (and (holding ?b) (not (on-table ?b)) (not (arm-empty)))

Heuristics for the Eight Puzzle
What can we relax?
[Figure: start and goal configurations]

Importance of Heuristics
h1 = number of tiles in the wrong place
[Table: nodes expanded at depth d by IDS, A*(h1), and A*(h2)]

Importance of Heuristics
h1 = number of tiles in the wrong place
h2 = Σ distances of tiles from their correct locations
[Table: nodes expanded at depth d by IDS, A*(h1), and A*(h2)]
Good heuristics decrease the effective branching factor.

Combining Admissible Heuristics
- Can always take the max
- Could add several heuristic values, but adding doesn’t preserve admissibility in general
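A one-liner makes the first bullet concrete: each admissible h_i satisfies h_i(s) ≤ h*(s), so their pointwise max does too, and it dominates every individual h_i. A sketch with an illustrative name:

```python
def max_heuristic(*heuristics):
    """Pointwise maximum of admissible heuristics: still admissible, and at
    least as informed as any single one.  (Summing them is NOT safe in
    general: two heuristics may count the same moves twice.)"""
    return lambda state: max(h(state) for h in heuristics)
```

For example, `max_heuristic(misplaced_tiles, manhattan)` would always return the larger, hence tighter, of the two estimates.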

Performance of IDA* on the 15 Puzzle
- Random 15-puzzle instances were first solved optimally using IDA* with the Manhattan distance heuristic (Korf, 1985).
- Optimal solution lengths average 53 moves.
- 400 million nodes generated on average.
- Average solution time is about 50 seconds on current machines.

Limitation of Manhattan Distance
- To solve a 24-puzzle instance, IDA* with Manhattan distance would take about 65,000 years on average.
- Manhattan distance assumes that each tile moves independently; in fact, tiles interfere with each other.
- Accounting for these interactions is the key to more accurate heuristic functions.

Example: Linear Conflict
[Figure: tiles 1 and 3 in their goal row but in reversed order]
Manhattan distance is 2+2 = 4 moves, but the linear conflict adds 2 additional moves.

Linear Conflict Heuristic
- Hansson, Mayer, and Yung, 1991
- Given two tiles in their goal row, but reversed in position, additional vertical moves can be added to the Manhattan distance.
- Still not accurate enough to solve the 24-Puzzle; we can generalize this idea further.
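A simplified sketch of Manhattan distance plus a linear-conflict bonus, under my own encoding (board as a flat tuple, blank = 0, tile t's goal cell is index t−1). Note the caveat: this pairwise count is a simplification; the published heuristic resolves overlapping conflicts more carefully, since naively charging every reversed pair can overestimate when three or more tiles conflict in one line.

```python
def manhattan(board, n):
    """Sum of each tile's horizontal + vertical distance from its goal cell."""
    dist = 0
    for idx, tile in enumerate(board):
        if tile:                                  # skip the blank
            goal = tile - 1
            dist += abs(idx // n - goal // n) + abs(idx % n - goal % n)
    return dist

def linear_conflict_bonus(board, n):
    """Add 2 moves for each pair of tiles sitting in their goal row (or
    column) in reversed order: one must step aside to let the other pass."""
    extra = 0
    for r in range(n):                            # conflicts within each row
        row = [t for t in board[r * n:(r + 1) * n] if t and (t - 1) // n == r]
        extra += 2 * sum(row[i] > row[j]
                         for i in range(len(row)) for j in range(i + 1, len(row)))
    for c in range(n):                            # conflicts within each column
        col = [board[r * n + c] for r in range(n)
               if board[r * n + c] and (board[r * n + c] - 1) % n == c]
        extra += 2 * sum(col[i] > col[j]
                         for i in range(len(col)) for j in range(i + 1, len(col)))
    return extra
```

For the 3x3 board (1, 2, 3, 4, 5, 6, 8, 7, 0), tiles 7 and 8 are reversed in their goal row: Manhattan gives 2 and the conflict adds 2.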

Pattern Database Heuristics
- Culberson and Schaeffer, 1996
- A pattern database is a complete set of configurations of a chosen subset of tiles (the pattern), with the associated number of moves needed to solve each.
- E.g., a 7-tile pattern database for the Fifteen Puzzle contains 519 million entries.

Heuristics from Pattern Databases
The number of moves stored for the pattern configuration matching the current state is a lower bound on the total number of moves needed to solve this particular state.

Precomputing Pattern Databases
- The entire database is computed with one backward breadth-first search from the goal.
- All non-pattern tiles are indistinguishable, but all tile moves are counted.
- The first time each state is encountered, the total number of moves made so far is stored.
- Once computed, the same table is used for all problems with the same goal state.
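A toy version for the Eight Puzzle, under my own encoding (cells 0..8; abstract state = (blank cell, cells of the pattern tiles); pattern tiles {1, 2}; goal = tiles in order with the blank last). Because sliding moves are their own inverses, a breadth-first search outward from the goal enumerates exactly the backward distances the slide describes:

```python
from collections import deque

def build_pdb(n, pattern, goal_pos):
    """Breadth-first search from the goal over abstract states.

    An abstract state is (blank_cell, tuple of pattern-tile cells); all other
    tiles are indistinguishable.  Every move counts one, so each stored value
    is an admissible lower bound for any concrete state that matches.
    goal_pos maps tile -> goal cell (tile 0 is the blank)."""
    start = (goal_pos[0], tuple(goal_pos[t] for t in pattern))
    pdb = {start: 0}
    queue = deque([start])
    while queue:
        blank, tiles = queue.popleft()
        for step in (-1, 1, -n, n):
            cell = blank + step
            if cell < 0 or cell >= n * n:
                continue
            if step in (-1, 1) and cell // n != blank // n:
                continue                      # no wrap-around between rows
            # The piece at `cell` slides into the blank.
            nxt = (cell, tuple(blank if p == cell else p for p in tiles))
            if nxt not in pdb:
                pdb[nxt] = pdb[(blank, tiles)] + 1
                queue.append(nxt)
    return pdb
```

With pattern tiles 1 and 2 on the 3x3 board, the table has 9·8·7 = 504 entries; looking up a board's blank cell and pattern-tile cells yields the heuristic value.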

Combining Multiple Databases
- 31 moves needed to solve red tiles
- 22 moves needed to solve blue tiles
- Overall heuristic is the maximum: 31 moves

Drawbacks of Standard Pattern DBs
- Since we can only take the max, there are diminishing returns on additional DBs
- Would like to be able to add values
© Daniel S. Weld. Adapted from Richard Korf's presentation.

Additive Pattern Databases
- Culberson and Schaeffer counted all moves needed to correctly position the pattern tiles.
- In contrast, we could count only moves of the pattern tiles, ignoring non-pattern moves.
- If no tile belongs to more than one pattern, then we can add their heuristic values.
- Manhattan distance is a special case of this, where each pattern contains a single tile.
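The change from a standard pattern database is small: charge 1 only when a pattern tile moves and 0 otherwise, which calls for a 0/1-BFS (deque) rather than plain BFS. A sketch under my own toy encoding (cells 0..n²−1; abstract state = (blank cell, cells of the pattern tiles); tracking the blank enlarges the table but keeps the values valid lower bounds):

```python
from collections import deque

def build_additive_pdb(n, pattern, goal_pos):
    """0/1-BFS from the goal: moves of pattern tiles cost 1, moves of other
    (indistinguishable) tiles cost 0.  Values from databases built over
    DISJOINT patterns may then be ADDED and remain a lower bound, since no
    real move is counted by two tables.  goal_pos maps tile -> goal cell."""
    start = (goal_pos[0], tuple(goal_pos[t] for t in pattern))
    dist = {start: 0}
    dq = deque([start])
    while dq:
        state = dq.popleft()
        blank, tiles = state
        for step in (-1, 1, -n, n):
            cell = blank + step
            if cell < 0 or cell >= n * n:
                continue
            if step in (-1, 1) and cell // n != blank // n:
                continue                       # no wrap-around between rows
            moved = cell in tiles              # did a pattern tile just move?
            nxt = (cell, tuple(blank if p == cell else p for p in tiles))
            nd = dist[state] + (1 if moved else 0)
            if nxt not in dist or nd < dist[nxt]:
                dist[nxt] = nd
                (dq.append if moved else dq.appendleft)(nxt)
    return dist
```

Because non-pattern moves are free, the additive values never exceed the standard (all-moves-counted) values, which is exactly what makes summing over disjoint patterns safe.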

Example Additive Databases
[Figure: the tiles partitioned into a 7-tile pattern and an 8-tile pattern]
The 7-tile database contains 58 million entries; the 8-tile database contains 519 million entries.

Computing the Heuristic
- 20 moves needed to solve red tiles
- 25 moves needed to solve blue tiles
- Overall heuristic is the sum: 20 + 25 = 45 moves

Performance
- 15 Puzzle: 2000x speedup vs Manhattan distance; IDA* with the two DBs shown previously solves 15-Puzzles optimally in 30 milliseconds.
- 24 Puzzle: 12-million-fold speedup vs Manhattan; IDA* can solve random instances in 2 days. Requires the 4 DBs shown; each DB has 128 million entries. Without PDBs: 65,000 years.