Non-Conservative Cost Bound Increases in IDA*
Doug Demyen

Outline ► Introduction (Recap of IDA*) ► Advantages/Disadvantages of IDA* ► Alternate approach ► Advantages/Disadvantages ► Example: Traveling Salesman Problem ► TSP Representation ► Creating the Pattern Database ► Methods for Increasing the Depth Bound ► Experiment Setup and Parameters ► Preliminary Results ► Conclusion

IDA* Recap ► Search algorithm used for finding an optimal solution to a problem ► Slower than A*, but requires only O(bd) memory, whereas A* requires O(b^d)  where b is the branching factor of the space and d is the depth of the goal ► Usually used when there is insufficient memory to run A*

IDA* Recap (Cont’d) ► For a node N, define:  g(N) = the distance from the start state to N  h(N) = an admissible estimate of the distance from N to the goal state (never greater than the true distance)  f(N) = g(N) + h(N) ► Define also a depth bound Ө for IDA* ► When the algorithm begins, Ө = h(Start)

IDA* Recap (Cont’d) ► Does a depth-first search on all nodes N where f(N) ≤ Ө ► If the goal is not found in this iteration, updates Ө to be the minimum f(N) of the nodes N that were generated but not expanded, and searches again ► That is, Ө_{i+1} := min { f(N) | f(N) > Ө_i, N ∈ Succ(N′) for some N′ with f(N′) ≤ Ө_i }
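This iteration scheme can be sketched in Python (an illustrative sketch, not from the slides; `successors`, `h`, and `is_goal` are hypothetical callables):

```python
def ida_star(start, successors, h, is_goal):
    """Minimal IDA* sketch: depth-first search bounded by f = g + h;
    the bound rises to the smallest pruned f-value each iteration."""
    bound = h(start)
    while True:
        next_bound = float("inf")

        def dfs(node, g, path):
            nonlocal next_bound
            f = g + h(node)
            if f > bound:                    # prune; remember smallest overflow
                next_bound = min(next_bound, f)
                return None
            if is_goal(node):
                return g
            for child, cost in successors(node):
                if child not in path:        # avoid cycles on the current path
                    found = dfs(child, g + cost, path | {child})
                    if found is not None:
                        return found
            return None

        result = dfs(start, 0, {start})
        if result is not None:
            return result                    # first solution found is optimal
        if next_bound == float("inf"):
            return None                      # space exhausted, no solution
        bound = next_bound                   # conservative (minimal) increase
```

Because the bound grows by the minimum possible amount, the first goal reached is guaranteed to be on an optimal path.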

Advantages ► Won’t search any deeper than the goal:  first path found to goal is optimal  avoids expanding any extra “levels” ► Best method when a great number of nodes share the same f-value (especially in exponential state spaces)

Disadvantages ► In the worst case, each node has a different f-value – the number of iterations will be O(n)  where n is the number of nodes in the space ► This is disastrous ► In this case, we want to update Ө to admit more than one new f-value per iteration

Dangers of this Approach ► Dangers of incrementing Ө in this way stem from the possibility that in the last iteration, Ө > g(Goal):  The first time the goal is found, it might not be by an optimal path  Searching deeper in an exponential space, one could expand more nodes beyond the goal’s depth than on all the levels leading up to it

Converging to Optimal ► Although the first path to the goal might not be optimal, we can find the optimal path:  when we find a path to the goal, Ө := g(Goal)-1  continue searching in the current iteration for other paths to the goal, updating Ө similarly  when all nodes N with f(N) ≤ Ө have been expanded, the last (shortest) path to the goal must be optimal
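The convergence idea above can be sketched by combining a non-conservative bound growth function with branch-and-bound tightening once a goal is found (a hypothetical sketch; the `grow` function and graph interface are assumptions, and integer edge costs are assumed for the Ө := g(Goal) − 1 step):

```python
def ida_star_nonconservative(start, successors, h, is_goal, grow):
    """IDA* with non-conservative bound increases: `grow` may overshoot the
    minimal next bound; after a goal is found, the bound is tightened to
    g(Goal) - 1 and the iteration continues, so the last (shortest) goal
    path found is optimal."""
    bound = h(start)
    best = float("inf")
    while True:
        next_min = float("inf")

        def dfs(node, g, path):
            nonlocal next_min, best, bound
            f = g + h(node)
            if f > bound:
                next_min = min(next_min, f)
                return
            if is_goal(node) and g < best:
                best = g
                bound = g - 1            # keep searching only for shorter paths
                return
            for child, cost in successors(node):
                if child not in path:
                    dfs(child, g + cost, path | {child})

        dfs(start, 0, {start})
        if best < float("inf"):
            return best                  # all f <= bound expanded: best is optimal
        if next_min == float("inf"):
            return None
        bound = grow(bound, next_min)    # non-conservative increase
```

For example, `grow = lambda b, m: 2 * m` doubles the minimal next bound, so the first goal found may be suboptimal, but the tightening step still returns the optimal cost.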

The Traveling Salesman Problem ► One problem on which IDA* has classically performed poorly is the TSP ► Involves a number of cities with a distance (or cost) between each pair ► In the non-fully-connected problem, distances between unconnected cities can be thought of as infinite ► The cost of traveling between two cities can be the same in both directions (symmetrical) or different (asymmetrical)

The TSP (Cont’d) ► Want to visit each of the cities and return to the starting city while incurring as little cost as possible

TSP Representation ► Distances (or costs) between cities are represented in a matrix, as below ► In the symmetrical TSP the matrix is symmetrical

From\To   a    b    c    d
a         -    4    6    7
b         5    -    3    8
c         7    2    -    9
d         3    6    7    -

TSP Representation (Cont’d) ► The state of the agent is defined as a two-tuple of a set and an atom: ({a, c}, b) ► Representing the visited cities and the current location, respectively ► If we consider a to be the starting city:  The start state is ({}, a)  The goal state is ({a, b, c, d}, a)
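This state representation can be sketched in Python (a hypothetical encoding using the example matrix; names like `tsp_successors` are illustrative):

```python
# TSP state: (frozenset of visited cities, current city).
DIST = {                                 # asymmetric example matrix
    'a': {'b': 4, 'c': 6, 'd': 7},
    'b': {'a': 5, 'c': 3, 'd': 8},
    'c': {'a': 7, 'b': 2, 'd': 9},
    'd': {'a': 3, 'b': 6, 'c': 7},
}
CITIES = set(DIST)
START = (frozenset(), 'a')               # ({}, a)

def tsp_successors(state, start='a'):
    """Yield ((visited', city'), cost) pairs: moving marks the current city
    as visited; the start city may only be re-entered to close the tour
    once every other city has been visited."""
    visited, city = state
    after = visited | {city}
    for nxt, cost in DIST[city].items():
        if nxt not in after:
            yield (after, nxt), cost
        elif nxt == start and after == CITIES:
            yield (after, nxt), cost     # closing move back to the start

def is_goal(state, start='a'):
    # Goal from the slides: ({a, b, c, d}, a)
    return state == (frozenset(CITIES), start)
```

Using frozensets keeps states hashable, so they can be stored in the visited structures of a search or used as pattern-database keys.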

Building PDBs with TSPs ► Similarly to other problems, abstract cities within a TSP to the same constant  Ex: Φ: {a, b, c, d} → {a, x, x, d} ► When traveling from one abstract city to another, take the distance to be the minimum over all matrix entries whose row city maps to the same constant as the origin and whose column city maps to the same constant as the destination

Example

Original matrix:
From\To   a    b    c    d
a         -    4    6    7
b         5    -    3    8
c         7    2    -    9
d         3    6    7    -

Abstracted (b, c → x):
From\To   a    x    d
a         -    4    7
x         5    2    8
d         3    6    -
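The min-rule for abstracted distances can be sketched as follows (a hypothetical helper; `PHI` encodes the example abstraction Φ: {a, b, c, d} → {a, x, x, d}):

```python
DIST = {                                 # asymmetric example matrix
    'a': {'b': 4, 'c': 6, 'd': 7},
    'b': {'a': 5, 'c': 3, 'd': 8},
    'c': {'a': 7, 'b': 2, 'd': 9},
    'd': {'a': 3, 'b': 6, 'c': 7},
}
PHI = {'a': 'a', 'b': 'x', 'c': 'x', 'd': 'd'}

def abstract_matrix(dist, phi):
    """Build the abstracted cost matrix: the cost from abstract city u to
    abstract city v is the minimum entry over all rows whose city maps to
    u and columns whose city maps to v."""
    abstract_cities = set(phi.values())
    out = {u: {} for u in abstract_cities}
    for u in abstract_cities:
        for v in abstract_cities:
            costs = [dist[i][j]
                     for i in dist if phi[i] == u
                     for j in dist[i] if phi[j] == v]
            if costs:                    # no u -> u entry unless one exists
                out[u][v] = min(costs)
    return out
```

Note that the abstract city x gets a self-distance (min of b→c and c→b), since two distinct concrete cities map to it.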

Creating the PDB ► Moves are unidirectional, not invertible ► Easiest to enumerate the state space in the forward direction; then, when the goal is reached, for each node N on the path from Start  Goal:  h(N) := min {h(N), g(Goal) – g(N)}  where initially h(N) = ∞ for all nodes N ► For this example, goal = ({a, x, x, d}, a)
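The fill rule can be sketched as follows (a hypothetical helper; it assumes the intended update is g(Goal) − g(N), i.e. the remaining path cost from N to the goal along the found path):

```python
import math

def fill_pdb_along_path(h, path, edge_costs):
    """Once a path Start -> Goal is found in the abstract space, assign
    each node N on it h(N) := min(h(N), g(Goal) - g(N)); h defaults to
    infinity for nodes not yet seen."""
    g_values = [0]
    for cost in edge_costs:              # prefix sums give g(N) for each node
        g_values.append(g_values[-1] + cost)
    g_goal = g_values[-1]
    for node, g in zip(path, g_values):
        h[node] = min(h.get(node, math.inf), g_goal - g)
    return h
```

Taking the min across all paths ensures h stores the cheapest known cost-to-goal, keeping the resulting heuristic admissible for the abstract space.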

Increasing the Depth Bound ► Several alternative methods have already been created for updating the depth-bound:  DFS* - double the depth bound each iteration  IDA*_CR - classify pruned nodes into “buckets” and increase the depth bound to include enough buckets containing a predefined number of nodes  RIDA* - uses regression to set the depth bound so the estimated increase in nodes expanded next iteration is constant
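The IDA*_CR idea above can be sketched as follows (the bucket width and placement are assumed details for illustration, not the exact scheme from the literature):

```python
from collections import Counter

def ida_cr_bound(pruned_f, bucket_width, r):
    """IDA*_CR-style bound update sketch: group pruned f-values into
    buckets and raise the bound just far enough that the included buckets
    cover at least r pruned nodes."""
    buckets = Counter(int(f // bucket_width) for f in pruned_f)
    covered = 0
    for b in sorted(buckets):
        covered += buckets[b]
        if covered >= r:
            return (b + 1) * bucket_width   # upper edge of this bucket
    return max(pruned_f)                    # fewer than r pruned nodes total
```

Larger r (or wider buckets) gives more aggressive, less conservative bound growth.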

More Methods ► I am testing DFS* and IDA*_CR, along with a number of other methods:  Multiply the IDA* depth bound by some constant (for example, 1.5)  Increase the depth bound to include a percentage of the “fringe” nodes (for example, 50% = median, 100% = maximum)  Increase the depth bound to include a constant number of the fringe nodes  More?
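The additional bound-update methods listed above can be sketched as small functions over the pruned fringe f-values (hypothetical helpers; names and defaults are illustrative):

```python
import math

def fringe_percentile_bound(fringe_f, fraction):
    """Raise the bound to include the given fraction of pruned fringe
    f-values (0.5 = median, 1.0 = maximum)."""
    ordered = sorted(fringe_f)
    k = max(1, math.ceil(fraction * len(ordered)))
    return ordered[k - 1]

def fringe_kth_bound(fringe_f, k):
    """Raise the bound to include a constant number k of fringe nodes."""
    ordered = sorted(fringe_f)
    return ordered[min(k, len(ordered)) - 1]

def multiplied_bound(bound, fringe_f, factor=1.5):
    """Multiply the old bound by a constant, but never fall below the
    smallest pruned f-value, so every iteration makes progress."""
    return max(factor * bound, min(fringe_f))
```

Each returns the new Ө for the next iteration given the f-values pruned in the current one.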

Experiment Setup ► Currently using the 10-city TSP ► Use an abstraction for the space (example: Φ({abcdefghij}) = {VVWWXXYYZZ}) ► Populate a distance matrix randomly (either symmetrical or asymmetrical) ► Enumerate the space to populate the pattern database

Experiment Setup (Cont’d) ► Run IDA* using each of the different depth bound updating techniques ► For each technique, record:  length of the first solution  time elapsed and nodes expanded in reaching it  time elapsed and nodes expanded in reaching the optimal solution  time elapsed and nodes expanded by the end of the algorithm

Variables to Manipulate ► I will try symmetrical and asymmetrical TSP ► Several different abstractions for the PDB ► Different parameters for methods (for example, include 8 fringe nodes) ► Possibly different upper bounds on inter-city distances

Preliminary Results ► Table comparing each method (Standard IDA*, Ө_min + 50%, Median, Maximum, DFS*, 5th Least on Fringe, IDA*_CR (r = 5)) on the first solution, optimal solution, and final totals, each measured by length, time (s), and # nodes [numeric entries not preserved in the transcript]

Results so far ► Results taken from 40 runs of asymmetric 10-city TSP with a PDB using the domain abstraction: Φ({abcdefghij}) = {VVWWXXYYZZ} (paired) ► DFS*, Ө_min + 50%, and the maximum fringe f-value produce very similar results: long first solution paths found very quickly ► Interestingly, setting the depth bound to the 5th lowest fringe f-value always finds the optimal path first, faster than IDA* ► Other techniques form a middle ground in speed and initial solution path length

Conclusion ► In a state space like the TSP, non-conservative depth bound increments perform much better than standard IDA* ► Despite the “trade-off” between speed and initial solution length, in my experiments, non-conservative methods still find the optimal solution more than 100 times faster than standard IDA* ► More to come...

References ► B. W. Wah and Y. Shang, “A Comparative Study of IDA*-Style Searches,” Proc. 6th Int’l Conference on Tools with Artificial Intelligence, IEEE, Nov. 1994 ► R. E. Korf, “Space-Efficient Search Algorithms,” ACM Computing Surveys (CSUR), Sept. 1995

Questions?