Dovetail Killer? Implementing Jonathan’s New Idea (Tarek Sherif)

The Algorithm
Given two or more PDBs, the algorithm passes through each pair of elements, in order. For each pair, the hybrid takes:
- The higher of the two values.
- A record of the original PDB from which the chosen value came.
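The pairwise pass described above can be sketched in Python. This is an illustrative rendering, not the original implementation: the function and variable names are mine, and each PDB is assumed to be a flat, equal-length array of heuristic values indexed by the perfect hash.

```python
def build_max_hybrid(pdb_a, pdb_b):
    """For each index, keep the larger value and remember which PDB it came from."""
    assert len(pdb_a) == len(pdb_b)
    values = []   # the kept heuristic values
    sources = []  # 0 if the kept value came from pdb_a, 1 if from pdb_b
    for va, vb in zip(pdb_a, pdb_b):
        if va >= vb:
            values.append(va)
            sources.append(0)
        else:
            values.append(vb)
            sources.append(1)
    return values, sources
```

The source array is what lets the lookup later decide whether a value is trustworthy under a given abstraction.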

The Algorithm
The search requires:
- The hybrid PDB.
- The domain abstractions from the original PDBs.
To access the hybrid for state N, N is hashed into the hybrid using all of the given domain abstractions.

The Algorithm
If the domain abstraction used to hash to a value is the one from the PDB that value came from, take that value; otherwise assume it is 0. Then h(N) = the max of all taken values.
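Under the same illustrative conventions (a hash function per abstraction mapping a state to an index, and a source array recording which PDB each hybrid entry came from), the lookup rule above might look like:

```python
def hybrid_h(state, hybrid_values, hybrid_sources, hash_fns):
    """h(N): hash N with every abstraction; count a value only when it
    came from the PDB whose abstraction produced the index, otherwise
    treat that lookup as 0 (a 'failed' lookup)."""
    h = 0
    for i, hash_fn in enumerate(hash_fns):
        idx = hash_fn(state)
        if hybrid_sources[idx] == i:  # value belongs to abstraction i's PDB
            h = max(h, hybrid_values[idx])
        # a mismatched source contributes 0 and so never raises h
    return h
```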

Background
IDA* on the 8-puzzle was used to test the algorithm, propagating h-values with Mero’s algorithm. Propagation is used only at the fringe of the search, since that is where pruning might occur.
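As a rough illustration of the kind of propagation meant here, Mero-style rules raise a parent's h-value from its children and vice versa; the sketch below is my own rendering under the assumption of unit edge costs (as in the 8-puzzle), not the talk's implementation.

```python
def propagate(h_parent, child_hs, cost=1):
    """One propagation step in the style of Mero's rules (sketch):
    each child's h may be raised to h(parent) - cost, and the parent's
    h to min(child h) + cost. Unit edge costs assumed."""
    # Raise the children from the parent: a child can be no closer than
    # the parent minus one move.
    child_hs = [max(hc, h_parent - cost) for hc in child_hs]
    # Raise the parent from its best child: the parent must pass
    # through some child.
    h_parent = max(h_parent, min(child_hs) + cost)
    return h_parent, child_hs
```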

Background
The Perfect Hash (assuming only one token class in the domain abstraction):
- p0 = position of the blank.
- p1 = position of the first constant if it is less than the position of the blank; that position minus 1 otherwise.
- p2 = position of the second constant if it is less than the positions of both the blank and the first constant; that position minus 1 if it is higher than just one of them, minus 2 if it is higher than both.
- Etc.

Background
Hash = p0 + 9*p1 + 72*p2 + …
E.g.
Domain abstraction: 0 1 2 x 4 x x x x
State: x 0 x x 1 x 4 2 x
p0 = 1
p1 = 4 - 1 = 3 (because the position of tile 1 is higher than the blank’s)
p2 = 7 - 2 = 5 (because the position of tile 2 is higher than those of tiles 1 and 0)
p3 = 6 - 2 = 4 (because the position of tile 4 is higher than those of tiles 1 and 0)
Hash = 1 + (9 * 3) + (72 * 5) + (504 * 4) = 2404
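The hash can be written out directly. The code below is a sketch that reproduces the worked example; the state encoding (a list with 'x' for abstracted tiles) and the token tuple are my reading of the slide.

```python
def perfect_hash(state, tokens):
    """Perfect hash from the slides: p0 is the blank's position, and each
    later constant's position is reduced by the number of earlier-listed
    pieces sitting at smaller positions. `tokens` lists the blank first,
    then the constants in order."""
    positions = [state.index(t) for t in tokens]
    n = len(state)
    hash_val, multiplier = 0, 1
    for i, p in enumerate(positions):
        # How many earlier-listed pieces occupy smaller positions?
        offset = sum(1 for q in positions[:i] if q < p)
        hash_val += (p - offset) * multiplier
        multiplier *= n - i  # 9, then 9*8 = 72, then 9*8*7 = 504, ...
    return hash_val

# The worked example: blank at position 1, tile 1 at 4, tile 2 at 7, tile 4 at 6.
state = ['x', 0, 'x', 'x', 1, 'x', 4, 2, 'x']
print(perfect_hash(state, (0, 1, 2, 4)))  # 2404
```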

Background
[Table: example hash values under two domain abstractions (Domain 1 and Domain 2); the entries did not survive the transcript.]

Experiments
Four domain abstractions with four constants each:
- Φ1: x x x x
- Φ2: 0 x x x x
- Φ3: 0 x 2 x 4 x 6 x 8
- Φ4: 0 1 x 3 x 5 x 7 x
100 random start states were generated and saved to file.

Results
[Table: per-abstraction results with columns “Average Nodes Generated”, “Max Nodes/Iteration”, “Min Nodes/Iteration”, and “Average no. of lookups on final iteration”; the values did not survive the transcript.]

Hybrids
Three hybrids were created:
- Φ1 + Φ2
- Φ1 + Φ3
- Φ3 + Φ4

Drum roll…
[Table: hybrid results with columns “Average Nodes Generated”, “Max Nodes/Iteration”, “Min Nodes/Iteration”, “Average Successful Lookups”, and “Average Failed Lookups”; the values did not survive the transcript.]

Wow… that sucks. Why?
Possibilities:
- Are the values being lost randomly? The lost values are likely to be closer to the goal, since it is the low values being removed. This may lead to the algorithm working well at first and then getting completely lost in areas nearer the goal.
- The hashing function may have patterns that cause it to work poorly with this algorithm.

Wow… that sucks.
IDA* can’t do much backchecking, so areas with many lost h-values could be very problematic.

Possible Solutions
- Try it with A*.
- Rotate the PDBs before combining them, to try to “line them up” (e.g., line up the goal states).
- A weird one: take the minimum of the two PDB entries into the hybrid. Then there are no failed lookups (the min is always admissible for both).

Possible Solutions
The search will still take the max of these mins.
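The min variant is even simpler to sketch (same illustrative conventions as before: flat arrays and one hash function per abstraction; names are mine).

```python
def build_min_hybrid(pdb_a, pdb_b):
    """Keep the smaller entry at every index. The result is admissible
    under either abstraction, so no lookup can 'fail'."""
    return [min(va, vb) for va, vb in zip(pdb_a, pdb_b)]

def min_hybrid_h(state, hybrid, hash_fns):
    # The search still takes the max of these mins across abstractions.
    return max(hybrid[f(state)] for f in hash_fns)
```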

Min Hybrid: Results
[Table: min-hybrid results with columns “Average Nodes Generated”, “Max Nodes/Iteration”, “Min Nodes/Iteration”, and “Average no. of lookups on final iteration”; the values did not survive the transcript.]

Things to come…
- Try it with A*.
- Line up the PDBs.
- Possibly try some combinations of Min and Max Hybrids.
- Combine more than 2 PDBs.
- Check where the Max Hybrid is having the most problems.
- More PDBs, more hybrids, etc.