Combining Front-to-End Perimeter Search and Pattern Databases CMPUT 652 Eddie Rafols.

Motivation
(From Russell, 1992)

Allocating Memory
- More memory for Open/Closed lists
- Caching
- Perimeter search
- Pattern databases

Perimeter Search
Generate a perimeter around the Goal. Any path to the Goal must pass through a perimeter node B_i, and we know the optimal path from each B_i to the Goal. The stopping condition for IDA* is now: if A is a perimeter node and g(Start, A) + g*(A, Goal) <= Bound, stop.
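The stopping condition above can be sketched in code. This is a minimal illustration, not the talk's implementation: `successors`, `h`, and the `perimeter` table (mapping each perimeter node B_i to its known g*(B_i, Goal)) are hypothetical stand-ins.

```python
import math

def ida_star_perimeter(start, successors, h, perimeter):
    """IDA* that terminates early at a perimeter node B_i once
    g(Start, B_i) + g*(B_i, Goal) fits within the current bound."""
    def dfs(node, g, bound, path):
        # Stopping condition from the slide: a perimeter node whose
        # known optimal distance to Goal completes a path within Bound.
        if node in perimeter and g + perimeter[node] <= bound:
            return g + perimeter[node], None
        f = g + h(node)
        if f > bound:
            return None, f            # prune; report next candidate bound
        next_bound = math.inf
        for succ, cost in successors(node):
            if succ in path:          # avoid trivial cycles
                continue
            found, nb = dfs(succ, g + cost, bound, path | {succ})
            if found is not None:
                return found, None
            next_bound = min(next_bound, nb)
        return None, next_bound

    bound = h(start)
    while True:                       # standard IDA* deepening loop
        found, next_bound = dfs(start, 0, bound, {start})
        if found is not None:
            return found              # optimal solution cost
        if next_bound == math.inf:
            return None               # no solution exists
        bound = next_bound
```

For example, on a chain A-B-C-D-Goal with unit edges and a radius-1 perimeter {D: 1, Goal: 0}, the search returns cost 4 without ever expanding Goal itself.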

Perimeter Search
Traditional search: O(b^d). Perimeter search: O(b^r + b^(d-r)). Large potential savings!
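A back-of-envelope calculation makes the size of the savings concrete. The values b = 3, d = 20, r = 10 are illustrative only, not figures from the talk.

```python
# Compare worst-case node counts for traditional vs. perimeter search.
b, d, r = 3, 20, 10                  # branching factor, depth, perimeter radius
traditional = b ** d                 # O(b^d) nodes
perimeter = b ** r + b ** (d - r)    # O(b^r + b^(d-r)) nodes
print(traditional, perimeter)        # 3486784401 118098
```

Splitting the depth in half turns one exponential of depth d into two of depth d/2, a reduction of more than four orders of magnitude in this example.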

Perimeter Search
Can be used to improve our heuristic (Kaindl & Kainz, 1997):
- the Add method
- the Max method

Pattern Databases
Provide us with a consistent* estimate of the distance from any given state to the goal.
(*This point will become relevant in a few slides.)

Approach
- Generate a pattern database to provide a heuristic.
- Use Kaindl & Kainz's techniques (Add, Max) to improve on the heuristic values.
- Determine how perimeter search and PDBs can most effectively be combined, via empirical testing.
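The first step, building a pattern database, is usually a backward breadth-first search over the abstract space. A minimal sketch, assuming unit-cost moves and a hypothetical `abstract_successors` move generator:

```python
from collections import deque

def build_pdb(abstract_goal, abstract_successors):
    """Backward breadth-first search from the abstract goal.
    The table maps every reachable abstract state to its exact
    abstract goal distance, which serves as an admissible and
    consistent heuristic for the original space."""
    pdb = {abstract_goal: 0}
    queue = deque([abstract_goal])
    while queue:
        state = queue.popleft()
        for succ in abstract_successors(state):
            if succ not in pdb:                 # first visit = shortest distance
                pdb[succ] = pdb[state] + 1
                queue.append(succ)
    return pdb
```

For instance, for a 3-pancake puzzle (states are permutations, moves are prefix reversals), the table covers all 6 permutations, with the goal at distance 0 and the full reversal at distance 1.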

A Digression…
Among other things, the Max method requires:
- h(Start, A), where A is a search node
- h(Start, B_i), where B_i is a perimeter node
We are not explicitly given this information in a PDB.

A Digression...
Recall: alternate PDB lookups. If we are dealing with a state space where distances are symmetric and 'tile'-independent, we can use this technique.

A Digression...
Example: the Pancake Problem.
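One way such an alternate lookup can be realized for permutation puzzles like the pancake problem is to consult the PDB at the inverse permutation. This is a sketch of the idea only, assuming 0-based tiles and a PDB keyed by state tuples:

```python
def alternate_lookup(state, pdb):
    """When distances are symmetric and independent of tile labels,
    d(s, goal) equals d(s_inverse, goal), so the inverse permutation
    gives a second, 'alternate' lookup in the same PDB."""
    inverse = [0] * len(state)
    for position, tile in enumerate(state):
        inverse[tile] = position      # invert the permutation
    return pdb.get(tuple(inverse), 0)
```

Taking the maximum of the regular and alternate lookups yields a stronger estimate, but, as the next slide notes, mixing the two mappings can break consistency.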

A Digression...
However, this technique may provide inconsistent heuristics.
(Figure: two different abstraction mappings, φ1 and φ2, with φ1 ≠ φ2.)

A Digression...
Kaindl's proof of the Max method relies on a consistent heuristic... but we can still use our pattern database. We just have to use it correctly.

A Digression...
Distances are symmetric, therefore:
- h*(Start, A) = h*(A, Start)
- h*(Start, B_i) = h*(B_i, Start)
We can map the Start state to the Goal state. In this case, when we do alternate lookups on A and B_i, we are using the same mapping! Our heuristic is now consistent!
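The mapping argument above can be sketched as follows: relabel every state by the single permutation sigma that carries Start onto the (identity) Goal, then do the ordinary PDB lookup. The function name and 0-based encoding are assumptions for illustration:

```python
def lookup_towards_start(state, start, pdb):
    """h(Start, X) via the goal-based PDB: since distances are
    symmetric, d(Start, X) = d(X, Start) = d(sigma(X), Goal), where
    sigma relabels Start into the identity Goal.  Using the SAME
    sigma for the search node A and every perimeter node B_i is what
    keeps the resulting heuristic consistent."""
    sigma = {tile: pos for pos, tile in enumerate(start)}   # sigma(Start) = identity
    relabelled = tuple(sigma[tile] for tile in state)
    return pdb.get(relabelled, 0)
```

Looking up Start itself returns 0, as expected, and every other state is measured under the one shared mapping.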

A Digression...
(Figure: the same mapping φ applied in each alternate lookup.)

A Third Heuristic?
Since we have the mechanisms in place, why not use an alternate lookup to get h'(Goal, A)? It turns out that this is exactly the same lookup as h(A, Goal).

Combining the Heuristics
The Add method lets us adjust our normal PDB estimate:
- h'_1(A, Goal) = h(A, Goal) + Δ
The Max method gives us another heuristic:
- h'_2(A, Goal) = min_i( h(B_i, Start) + g*(B_i, Goal) ) − h(A, Start)
Our final heuristic:
- H(A, Goal) = max( h'_1(A, Goal), h'_2(A, Goal) )
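The combination on this slide is straightforward to express in code. The lookup functions `h_goal` and `h_start`, the `perimeter` table of (h(B_i, Start), g*(B_i, Goal)) pairs, and the parameter names are all hypothetical stand-ins:

```python
def combined_heuristic(a, h_goal, h_start, perimeter, delta):
    """Final heuristic from the slide:
        h1 = h(A, Goal) + Delta                                (Add method)
        h2 = min_i[h(B_i, Start) + g*(B_i, Goal)] - h(A, Start) (Max method)
        H  = max(h1, h2)
    h_goal(x) ~ h(x, Goal); h_start(x) ~ h(x, Start)."""
    h1 = h_goal(a) + delta
    h2 = min(hb + gb for hb, gb in perimeter) - h_start(a)
    return max(h1, h2)
```

Note that the min over the perimeter is a constant for a given Start, so it can be computed once per search rather than per node.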

Hypotheses
- All memory used on the perimeter: expect poor performance.
- All memory used on the PDB: expect good performance, but not the best possible.
- Small perimeters combined with large PDBs should outperform large perimeters with small PDBs.

Method
Do a binary search in the memory space:
- Test 'pure' perimeter search and 'pure' PDB search.
- Give half the memory to the winner, then compare whether a PDB or a perimeter would be a more effective use of the remaining memory.
- Repeat until perimeter search becomes the more effective use of memory.
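The procedure above can be sketched as a loop over memory splits. This is a loose interpretation of the slide, with a hypothetical `benchmark(pdb_mem, perim_mem)` that runs the solver and returns nodes expanded (lower is better):

```python
def allocate_memory(total_mem, benchmark):
    """Repeatedly award half of the remaining memory to whichever
    use (PDB or perimeter) yields the cheaper search, halving the
    stake each round, in the spirit of the slide's binary search."""
    pdb_mem, perim_mem, remaining = 0, 0, total_mem
    while remaining > 1:
        half = remaining // 2
        # Compare spending the next chunk on the PDB vs. the perimeter.
        if benchmark(pdb_mem + half, perim_mem) <= benchmark(pdb_mem, perim_mem + half):
            pdb_mem += half
        else:
            perim_mem += half
        remaining -= half
    return pdb_mem, perim_mem
```

With a benchmark that always favors the PDB, essentially all memory ends up allocated to it, matching the outcome the Discussion slides report.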

Results

Discussion
Discouraging results: using "extra" space for a PDB seems to provide better results across the board.

Discussion
Adding a perimeter does not appear to have a significant effect.

Discussion
Empirically, the Add method is always returning Δ = 0. A directed strategy for perimeter creation is likely needed for this method to have any effect.

Discussion
Further experiments show that, given a fixed PDB, increasing the perimeter size yields only a negligible performance increase.

An Idea
Is the heuristic effectively causing paths to perimeter nodes to be pruned? If so, performance is only being improved along a narrow search path. Can we generate the perimeter 'intelligently' to make it more useful?

Questions?