CSCE 580: IDA* and Memory-Bounded Search Algorithms
Andrew Smith, Johnny Flowers

What we’re covering…
1. Introduction
2. Korf’s analysis of IDA*
3. Russell’s criticism of IDA*
4. Russell’s solutions to memory-bounded search

Introduction
Two types of search algorithms:
- Brute force (breadth-first, depth-first, etc.)
- Heuristic (A*, heuristic depth-first, etc.)
Measure of optimality: the Fifteen Puzzle!

Korf’s Analysis
A few definitions…
- Node branching factor (b): the number of new states generated by applying a single operator to a given state, averaged over all states in the problem space.
- Edge branching factor (e): the average number of different operators applicable to a given state (i.e., how many edges leave a node).
- Depth (d): the length of the shortest sequence of operators that maps the initial state to a goal state.

Brute Force Algorithm Analysis: Breadth-first search
Expands all nodes outward from the initial state until a goal state is reached.
Pros:
- Always finds the shortest path to the goal state.
Cons:
- Requires O(b^d) time.
- SPACE! O(b^d) nodes must be stored.
- Most problem spaces exhaust memory far before a goal is reached (in 1985, anyway).
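The strategy above can be sketched in a few lines; the `graph` successor table below is a hypothetical toy state space used only for illustration:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand all nodes at depth n before any node at depth n+1.
    Finds a shortest path, but the frontier holds O(b^d) nodes."""
    frontier = deque([[start]])   # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical toy state space.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'],
         'D': ['F'], 'E': ['F'], 'F': []}
print(breadth_first_search('A', 'F', graph.__getitem__))  # → ['A', 'B', 'D', 'F']
```

The memory cost the slide warns about is visible here: `frontier` and `visited` together retain every generated node.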

Brute Force Algorithm Analysis: Depth-first search
Expands a path to depth d before expanding any other path, until a goal state is reached.
Pros:
- Requires little space: only the current path from the initial node must be stored, O(d).
Cons:
- Does not typically find the shortest path.
- Takes O(e^d) time!
- If a depth cutoff is not set, the algorithm may never terminate.
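A minimal sketch of depth-first search with a depth cutoff, again on a hypothetical toy graph. Note how it returns the first path it completes, not the shortest one:

```python
def depth_limited_dfs(state, goal, successors, limit, path=None):
    """Follow one path down to the depth cutoff before trying another.
    Space is O(d) (just the current path), but the first solution
    found need not be the shortest one."""
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        if nxt not in path:  # avoid cycles along the current path
            found = depth_limited_dfs(nxt, goal, successors,
                                      limit - 1, path + [nxt])
            if found:
                return found
    return None

# Hypothetical toy graph: A-C-F is shorter, but DFS commits to branch B first.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['F'], 'D': ['F'], 'F': []}
print(depth_limited_dfs('A', 'F', graph.__getitem__, limit=3))  # → ['A', 'B', 'D', 'F']
```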

Brute Force Algorithm Analysis: Depth-first iterative-deepening
Perform a depth-first search to depth 1, then depth 2, … all the way to depth d.
Pros:
- Optimal time, O(b^d) (proof below).
- Optimal space, O(d), since it performs depth-first search and never searches deeper than d.
- Always finds the shortest path.
Cons:
- Wasted computation at depths not containing the goal; proven not to affect asymptotic performance.
- Must explore all possible paths to a given depth.
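DFID is just the depth-limited search repeated with a growing cutoff; a sketch, reusing the same hypothetical toy graph, where it recovers the shortest path that plain depth-first search missed:

```python
def iterative_deepening(start, goal, successors, max_depth=20):
    """Run depth-limited DFS at cutoffs 0, 1, 2, ...; the first hit is a
    shortest path, and space stays O(d) as in depth-first search."""
    def dls(state, path, limit):
        if state == goal:
            return path
        if limit == 0:
            return None
        for nxt in successors(state):
            if nxt not in path:
                found = dls(nxt, path + [nxt], limit - 1)
                if found:
                    return found
        return None

    for depth in range(max_depth + 1):
        result = dls(start, [start], depth)
        if result:
            return result
    return None

# Same hypothetical toy graph as before.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['F'], 'D': ['F'], 'F': []}
print(iterative_deepening('A', 'F', graph.__getitem__))  # → ['A', 'C', 'F']
```

The shallow levels are re-searched on every iteration, which is the "wasted computation" listed above; since level d dominates the total node count, the waste does not change the asymptotic O(b^d) time.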

Brute Force Algorithm Analysis
[Chart: branching factor vs. constant coefficient as search depth → infinity]

Increasing Optimality with Bi-Directional Search
DFID with bi-directional search:
- Depth-first search up to depth k from the start node; two depth-first searches from the goal node, to depths k and k+1.
Performance, for a solution of length d:
- Space: O(b^(d/2))
- Time: O(b^(d/2))

IDA*
Like depth-first iterative-deepening, except iterating on increasing values of total cost rather than increasing depths: IDA* sets bounds on the heuristic cost of a path, instead of its depth.
A* always finds a cheapest solution if the heuristic is admissible (Korf, Lemma 6.2); this extends to monotone admissible functions, and applies to IDA* as well (Korf, Lemma 6.3).
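The cost-bounded iteration can be sketched as follows; the weighted graph and heuristic table are hypothetical, chosen so that h is admissible and monotone:

```python
def ida_star(start, goal, successors, h):
    """Iterative deepening on f = g + h: each pass is a depth-first search
    bounded by a cost threshold, which then grows to the smallest f-value
    that exceeded the previous bound. Space is O(d), as in DFID."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f            # report the smallest overflowing f-value
        if node == goal:
            return path
        minimum = float('inf')
        for nxt, cost in successors(node):
            if nxt not in path:
                result = search(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float('inf'):
            return None
        bound = result

# Hypothetical weighted graph with an admissible, monotone heuristic.
edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 3)], 'D': []}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 0}
print(ida_star('A', 'D', edges.__getitem__, h.__getitem__))  # → ['A', 'B', 'C', 'D']
```

Because every node within a cost bound is expanded before the bound grows, the first solution returned is a cheapest one, matching the proof cited at the end of these slides.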

IDA*
IDA* is optimal in terms of solution cost, time, and space among admissible best-first searches on a tree with an admissible monotone heuristic (Korf, Theorem 6.4).
- IDA* expands the same number of nodes, asymptotically, as A*.
- A* is proven to be optimal in nodes expanded.

IDA* vs. A*
Fifteen Puzzle with the Manhattan distance heuristic: IDA* generates more nodes than A*, but runs faster.
[Table: Initial State | Estimate | Actual | Total Nodes; values garbled in transcription, one total reading ,369,596,778]
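The Manhattan distance heuristic used in these experiments is simple to state in code; a sketch using the 3x3 (8-puzzle) analogue for brevity, with states as flat tuples and 0 for the blank:

```python
def manhattan(state, goal):
    """Sum over tiles of |row difference| + |column difference| between a
    tile's position in `state` and in `goal` (blank, 0, excluded).
    Admissible, since each move shifts exactly one tile one step."""
    side = int(len(state) ** 0.5)
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // side - goal_idx // side) \
               + abs(idx % side - goal_idx % side)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # tiles 7 and 8 each one column off
print(manhattan(state, goal))        # → 2
```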

Other conclusions…
Iterative deepening is also optimal for two-player games:
- Can search deeper in the tree in optimal time.
- Can be used to order nodes so that alpha-beta cutoff is more efficient; only possible with ID.

Russell’s Criticism
A* must store all nodes in an open list:
- A good implementation of the Fifteen Puzzle will run out of memory (on a 64 MB machine; less of an issue now).
Memory-bounded variants were developed. Problems:
- Ensuring an optimal solution.
- Avoiding re-expansion of nodes (wasted computation).

Russell’s Criticism
In worst-case scenarios (and for large problems), IDA* is sub-optimal compared to A*:
- Worst case: every node has a different f-cost.
- If A* examines k nodes, O(k), then IDA* examines O(k^2) nodes.
- An unacceptable slowdown for large k, evident in real-world problems such as the Traveling Salesman Problem.
IDA* retains no path information between iterations.

Russell’s Solutions to Memory-Bounded Searches
MA*: once a preset memory limit is reached, the algorithm prunes the open list by highest f-cost.
SMA* improves upon MA* by:
1. Using a more efficient data structure for the open list (a binary tree), sorted by f-cost and depth.
2. Maintaining only two f-cost quantities (instead of four with MA*).
3. Pruning one node at a time (the worst f-cost).
4. Retaining backed-up f-costs for pruned paths.
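A greatly simplified sketch of the pruning idea only: best-first search whose open list is capped, dropping the highest-f entries when the cap is exceeded. Real SMA* also backs up pruned f-costs into parents so pruned paths can be regenerated; that machinery, and MA*'s exact data structures, are omitted here. The graph and heuristic are the same hypothetical ones used earlier:

```python
import heapq

def memory_bounded_best_first(start, goal, successors, h, max_open=2):
    """A*-style best-first search with a capped open list (MA*-style
    pruning by highest f-cost; no f-cost backup, so it is only a sketch)."""
    open_list = [(h(start), 0, [start])]   # (f, g, path)
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, cost in successors(node):
            if nxt not in path:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h(nxt), g2, path + [nxt]))
        if len(open_list) > max_open:
            # prune the worst (highest-f) entries, as MA* does
            open_list = sorted(open_list)[:max_open]  # sorted list is a valid heap
    return None, float('inf')

edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 3)], 'D': []}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 0}
print(memory_bounded_best_first('A', 'D', edges.__getitem__, h.__getitem__))
```

On this small example the pruned entries never lie on the optimal path, so the search still returns the cheapest solution; without f-cost backup, however, pruning can in general discard the only route to the goal, which is exactly the problem SMA* addresses.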

SMA* Algorithm
[Worked example: SMA* when memory can only hold 10 nodes]

Properties of SMA*
- Maintains the f-cost of the best path (a lower bound); the best lower-bound node is always expanded.
- Guaranteed to return an optimal solution, provided MAX is big enough to hold the shortest path.
- Behaves identically to A* if MAX > number of nodes generated.

IE Algorithm
IE: all but the current best path and its sibling nodes are pruned away. Otherwise it is similar to SMA*, until the bound is exceeded, and very similar to best-first search as well.

IE Algorithm Example
[Diagram: labels are f-cost / bound]

Russell’s Performance Tests
Used a “perturbed 8-puzzle” as opposed to Korf’s 15-puzzle test:
- Small perturbations on the Manhattan-distance heuristic, to ensure each node has a different f-cost.
- Run on a Macintosh PowerBook 140 with 4 MB of RAM.
SMA* vs. A* vs. IE vs. IDA*

Russell’s Performance Results

Breadth-First Heuristic Search
Storing all open and closed nodes, as in A*, allows:
- Reconstruction of the optimal solution path.
- Detection of duplicate node expansions.
Sacrificing one of these reduces the required memory. Variants of A* such as DFIDA* and RBFS give up duplicate detection; in doing so, they convert graph search into tree search, and for complex problems in which a given state may be reached through many paths, they perform poorly.

Breadth-First Heuristic Search Second strategy: maintain duplicate detection, but give up traceback solution reconstruction
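A sketch of that second strategy, assuming unit edge costs and a known upper bound on solution cost: expand in depth layers, prune nodes whose f = g + h exceeds the bound, and keep only recent layers for duplicate detection. It returns only the solution depth; the full algorithm (Zhou's) reconstructs the path by divide and conquer, which is omitted here. The graph and heuristic below are hypothetical:

```python
def bfhs_cost(start, goal, successors, h, upper_bound):
    """Layered breadth-first search with f-cost pruning. Keeping just the
    previous and current layers suffices for duplicate detection in
    undirected spaces; directed spaces may need more layers retained."""
    layer, g = {start}, 0
    previous = set()
    while layer:
        if goal in layer:
            return g
        nxt_layer = set()
        for node in layer:
            for nxt in successors(node):
                # prune duplicates and nodes whose f-cost exceeds the bound
                if nxt not in layer and nxt not in previous \
                        and g + 1 + h(nxt) <= upper_bound:
                    nxt_layer.add(nxt)
        previous, layer, g = layer, nxt_layer, g + 1
    return None

# Hypothetical unit-cost graph with an admissible heuristic.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['F'], 'D': ['F'], 'F': []}
h = {'A': 2, 'B': 2, 'C': 1, 'D': 1, 'F': 0}
print(bfhs_cost('A', 'F', graph.__getitem__, h.__getitem__, upper_bound=2))  # → 2
```

With the bound set to the optimal cost, branch B is pruned immediately (1 + h(B) = 3 > 2), so memory holds at most a layer or two at a time instead of the whole closed list.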

Proofs
ID optimality proof (Korf):
To see that this is optimal, we present a simple adversary argument. The number of nodes at depth d is b^d. Assume there exists an algorithm that examines fewer than b^d nodes. Then there must exist at least one node at depth d which is not examined by this algorithm. Since we have no additional information, an adversary could place the only solution at that node, and hence the proposed algorithm would fail. Therefore any brute-force algorithm must take at least c·b^d time, for some constant c.

Proofs
IDA* optimal-solution proof (Korf):
Since IDA* always expands all nodes at a given cost before expanding any nodes at a greater cost, the first solution it finds will be a solution of least cost.

References
Korf, Richard E. Depth-First Iterative-Deepening: An Optimal Admissible Tree Search.
Russell, Stuart. Efficient Memory-Bounded Search Methods.
Zhou, Rong. A Breadth-First Approach to Memory-Efficient Graph Search.