Iterative Deepening A* (IDA*) Search: Rough Algorithm from LISP


Rough algorithm (pseudocode derived from the Lisp implementation below):

function IDA(problem)
    root <- start-node(problem)
    f-limit <- fcost(root)
    solution <- NIL
    loop
        (solution, f-limit) <- DFS-Contour(root, f-limit)
        if not null(solution) then return solution
        if f-limit = infinity then return NIL
    end loop
end IDA

function DFS-Contour(node, f-limit) returns (solution, next-f-limit)
    next-f <- infinity
    if fcost(node) > f-limit then
        return (NIL, fcost(node))
    else if goal(node) then
        return (node, f-limit)
    else
        for each successor s of node do
            (solution, new-f) <- DFS-Contour(s, f-limit)
            if not null(solution) then return (solution, f-limit)
            next-f <- min(next-f, new-f)
        end for
        return (NIL, next-f)
end DFS-Contour

;;; ida.lisp
;;;; Iterative Deepening A* (IDA*) Search

(defun ida* (problem)
  "Iterative Deepening A* Search."
  (setf (problem-iterative? problem) t)
  (let* ((root (create-start-node problem))
         (f-limit (node-f-cost root))
         (solution nil))
    (loop (multiple-value-setq (solution f-limit)
            (DFS-contour root problem f-limit))
          (dprint "DFS-contour returned" solution "at" f-limit)
          (if (not (null solution)) (RETURN solution))
          (if (= f-limit infinity) (RETURN nil)))))

(defun DFS-contour (node problem f-limit)
  "Return a solution and a new f-cost limit."
  (let ((next-f infinity))
    (cond ((> (node-f-cost node) f-limit)
           (values nil (node-f-cost node)))
          ((goal-test problem (node-state node))
           (values node f-limit))
          (t (for each s in (expand node problem) do
               (multiple-value-bind (solution new-f)
                   (DFS-contour s problem f-limit)
                 (if (not (null solution))
                     (RETURN-FROM DFS-contour (values solution f-limit)))
                 (setq next-f (min next-f new-f))))
             (values nil next-f)))))
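The same contour scheme can be sketched in Python. This is a minimal, self-contained sketch, not the Lisp code above translated line for line: the graph, heuristic table, and function names (`ida_star`, `dfs_contour`) are illustrative assumptions, and on success the second return value carries the solution cost rather than an f-limit.

```python
import math

def ida_star(start, goal, neighbors, h):
    """Return (path, cost) on success, or (None, math.inf) on failure."""
    f_limit = h(start)
    while True:
        solution, next_f = dfs_contour(start, goal, neighbors, h,
                                       g=0, f_limit=f_limit, path=[start])
        if solution is not None:
            return solution, next_f        # next_f holds the solution cost g
        if next_f == math.inf:             # no contour left to explore
            return None, math.inf
        f_limit = next_f                   # widen to the next f-cost contour

def dfs_contour(node, goal, neighbors, h, g, f_limit, path):
    f = g + h(node)
    if f > f_limit:
        return None, f                     # prune; report f for the next contour
    if node == goal:
        return list(path), g               # copy the path; g is the solution cost
    next_f = math.inf
    for succ, step_cost in neighbors(node):
        if succ in path:                   # avoid trivial cycles
            continue
        path.append(succ)
        solution, new_f = dfs_contour(succ, goal, neighbors, h,
                                      g + step_cost, f_limit, path)
        path.pop()
        if solution is not None:
            return solution, new_f
        next_f = min(next_f, new_f)        # smallest f-cost that exceeded the limit
    return None, next_f

# Illustrative example: a tiny weighted graph with an admissible heuristic.
graph = {'A': [('B', 1), ('C', 3)], 'B': [('D', 2)], 'C': [('D', 1)], 'D': []}
h_table = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
path, cost = ida_star('A', 'D', lambda n: graph[n], lambda n: h_table[n])
print(path, cost)  # ['A', 'B', 'D'] 3
```

On this graph the first contour uses f-limit = h(A) = 2, prunes at D (f = 3) and C (f = 4), raises the limit to 3, and finds A-B-D at cost 3 on the second pass.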

Algorithm        Complete       Optimal        Time            Space
Greedy BFS       YES if finite  NO             exp             exp
A*               YES            YES if admis.  exp             exp
IDA*             YES            YES if admis.  exp             O(d) / O(1)*
DF B&B           YES if finite  YES            exp             O(d)
Global Beam      NO             NO             exp             O(kd)
S.A. Hillclimb   NO             NO             O(bd)           O(d) / O(1)+
Sim. Annealing   NO             NO             O(bd)           O(d)
Local Beam       NO             NO             O(kbd)          O(k)
Genetic          NO             NO             O(k x numiter)  O(k)

* IDA* uses O(d) space during each DFS pass, but keeps only the f-limit between iterations.
+ O(1) if we don't save the path to the solution.
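The gap between the exp and O(d) entries in the Space column is easy to make concrete. The following back-of-the-envelope arithmetic assumes a uniform tree; the values b = 2 and d = 20 are illustrative choices, not figures from the slides.

```python
# Space comparison for a uniform tree with branching factor b, depth d.
# A* can hold on the order of b^d nodes in memory in the worst case,
# while IDA*'s depth-first pass keeps only the current path of length d.
b, d = 2, 20
a_star_worst = b ** d    # exponential: ~1 million nodes at b=2, d=20
ida_star_stack = d       # linear: one root-to-leaf path
print(a_star_worst, ida_star_stack)
```

The price IDA* pays for this is time: it re-expands the shallower contours on every iteration, which is why its Time entry stays exponential.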