A* and D* Search Kevin Tantisevi

Outline Overview of Search Techniques A* Search D* Search

Search in Path Planning
Find a path between two locations in an unknown, partially known, or known environment
Search performance
– Completeness
– Optimality (with respect to operating cost)
– Space complexity
– Time complexity

Search
Uninformed search
– Uses no information obtained from the environment
– Blind search: BFS (wavefront), DFS
Informed search
– Uses an evaluation function
– More efficient
– Heuristic search: A*, D*, etc.

Uninformed Search Graph search from A to N using BFS

Informed Search: A* Notation
n → node/state
c(n1, n2) → the length of the edge connecting n1 and n2
b(n1) = n2 → backpointer from node n1 to node n2

Informed Search: A*
Evaluation function: f(n) = g(n) + h(n)
Operating cost function, g(n)
– Actual operating cost already traversed from the start to n
Heuristic function, h(n)
– An estimate used to pick the most promising node to traverse next
– Admissible → never overestimates the actual remaining path cost (e.g., straight-line distance to the goal)

A*: Algorithm
The search maintains two lists of nodes:
1) Open list (O): stores nodes awaiting expansion
2) Closed list (C): stores nodes that have already been explored
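A minimal Python sketch of this loop follows. It assumes the caller supplies a neighbors(n) function for the graph, cost(n1, n2) for the edge lengths c(n1, n2), and an admissible heuristic h(n); none of these helpers are defined on the slides, so this is an illustrative sketch rather than the presenter's implementation.

import heapq

def a_star(start, goal, neighbors, cost, h):
    open_list = [(h(start), start)]         # open list O: nodes awaiting expansion, ordered by f = g + h
    g = {start: 0}                          # g(n): operating cost already traversed from the start
    back = {}                               # backpointers b(n)
    closed = set()                          # closed list C: nodes already explored
    while open_list:
        _, n = heapq.heappop(open_list)
        if n in closed:
            continue                        # stale entry; a cheaper copy of n was expanded earlier
        if n == goal:
            return g, back                  # the path is read off the backpointers (see the result slide)
        closed.add(n)
        for m in neighbors(n):
            g_new = g[n] + cost(n, m)
            if m not in g or g_new < g[m]:  # cheaper route to m found: record it and (re)open m
                g[m] = g_new
                back[m] = n
                closed.discard(m)           # re-open m if it had already been explored
                heapq.heappush(open_list, (g_new + h(m), m))
    return g, None                          # open list exhausted: the goal is unreachable

The sketch returns the g-costs and the backpointers; the actual path is recovered from the backpointers, as shown after the worked example below.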

A*: Example (1/6)
Heuristics: A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0
Legend: edge labels give the operating cost

A*: Example (2/6) Heuristics A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0

A*: Example (3/6) Heuristics A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0 Since the cost of path A → B is smaller than that of A → E → B, the f-cost of B in the open list need not be updated

A*: Example (4/6) Heuristics A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0

A*: Example (5/6) Heuristics A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0

A*: Example (6/6) Heuristics A = 14, B = 10, C = 8, D = 6, E = 8, F = 7, G = 6, H = 8, I = 5, J = 2, K = 2, L = 6, M = 2, N = 0 Since the path cost to N via M is greater than that via J, the optimal path to N is the one reached from J

A*: Example Result The path is generated by following the back-pointer attribute from the goal node back to the start node
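A small helper, continuing the sketch above, that performs exactly this backpointer walk:

def extract_path(goal, back):
    # Follow the back-pointer attribute from the goal to the start, then reverse the result.
    path = [goal]
    while path[-1] in back:
        path.append(back[path[-1]])
    return list(reversed(path))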

A*: Performance Analysis
Complete, provided the branching factor is finite and every arc cost is greater than some positive constant ε
Optimal in terms of path cost
Memory inefficient → IDA*
Search space grows exponentially with the length of the solution
How can we use it in a partially known, dynamic environment?
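Since the slide points to IDA* as the memory-efficient alternative, here is a hedged sketch of iterative-deepening A* under the same assumed neighbors/cost/h helpers; it replaces the large open list with repeated depth-first searches bounded by a growing f-cost threshold.

def ida_star(start, goal, neighbors, cost, h):
    # Repeated depth-first searches bounded by an f-cost threshold; memory is O(solution depth).
    def search(n, g, bound, path):
        f = g + h(n)
        if f > bound:
            return f, None                      # exceeded the current threshold: report the overshoot
        if n == goal:
            return f, list(path)
        next_bound = float("inf")
        for m in neighbors(n):
            if m in path:
                continue                        # avoid cycles along the current path
            path.append(m)
            t, found = search(m, g + cost(n, m), bound, path)
            path.pop()
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None                 # smallest f value that exceeded the threshold
    bound = h(start)
    while True:
        bound, found = search(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):
            return None                         # no path exists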

A* Replanner – Unknown Map
Optimal (re-runs A* from scratch whenever a map discrepancy is discovered)
Inefficient and impractical in expansive environments where the goal is far from the start and little map information exists (Stentz 1994)
How can we do better in a partially known, dynamic environment?

D* Search (Stentz 1994)
Stands for "Dynamic A* Search"
Dynamic: arc cost parameters can change during the problem-solving process, i.e., replanning happens online
Functionally equivalent to the A* replanner
Initially plans using Dijkstra's algorithm and intelligently caches intermediate data for speedy replanning

D* Algorithm
X, Y → states of the robot
b(X) = Y → backpointer of state X to the next state Y
c(X, Y) → arc cost of the path from X to Y
t(X) → tag of state X (NEW, OPEN, or CLOSED)
h(G, X), abbreviated h(X) → path cost function: sum of the arc costs from X to the goal state G

D* Algorithm
PROCESS-STATE()
– Computes the optimal path to the goal
– Initially, set h(G) = 0 and insert G into the OPEN list
– Repeatedly called until the robot's state X is removed from the OPEN list
MODIFY-COST()
– Called immediately once the robot detects an error in the arc cost function (i.e., discovers a new obstacle)
– Changes the arc cost function and enters the affected states on the OPEN list

D* Algorithm
k(G, X) → the priority (key) of state X on the OPEN list
LOWER state
– k(G, X) = h(G, X)
– Propagates information about path cost reductions (e.g., due to a reduced arc cost or a new path to the goal) to its neighbors
– For each neighbor Y of X, if h(Y) > h(X) + c(X, Y) then:
  - Set h(Y) := h(X) + c(X, Y)
  - Set b(Y) := X
  - Insert Y into the OPEN list with k(Y) = h(Y) so that it can propagate the cost change to its neighbors

D* Algorithm
RAISE state
– k(G, X) < h(G, X)
– Propagates information about path cost increases (e.g., due to an increased arc cost) to its neighbors
– For each neighbor Y of a RAISE state X:
  - If t(Y) = NEW or (b(Y) = X and h(Y) ≠ h(X) + c(X, Y)), then set b(Y) := X and insert Y into the OPEN list with k(Y) = h(X) + c(X, Y)
  - Else if b(Y) ≠ X and h(Y) > h(X) + c(X, Y), then insert X into the OPEN list with k(X) = h(X)
  - Else if b(Y) ≠ X and h(X) > h(Y) + c(X, Y), then insert Y into the OPEN list with k(Y) = h(Y)
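To make the bookkeeping concrete, the following Python sketch implements PROCESS-STATE (with the LOWER and RAISE rules above) and a minimal MODIFY-COST. It uses the slide notation h, k, b, t, and c, but the interface (succ(X) returning neighbors, cost(X, Y) returning the current arc cost) is an assumption for illustration; this is a simplified reading of Stentz (1994), not the full optimized algorithm.

NEW, OPEN, CLOSED = "NEW", "OPEN", "CLOSED"

class DStarSketch:
    def __init__(self, succ, cost):
        self.succ = succ      # succ(X) -> iterable of neighboring states (assumed interface)
        self.cost = cost      # cost(X, Y) -> current arc cost c(X, Y) (assumed interface)
        self.h = {}           # h(X): path cost estimate from X to the goal
        self.k = {}           # k(X): key (priority) of X on the OPEN list
        self.b = {}           # b(X): backpointer from X toward the goal
        self.t = {}           # t(X): NEW, OPEN, or CLOSED
        self.open = set()     # the OPEN list

    def tag(self, X):
        return self.t.get(X, NEW)

    def insert(self, X, h_new):
        # INSERT: the key is the smallest h value X has held since it was last placed on OPEN
        if self.tag(X) == NEW:
            self.k[X] = h_new
        elif self.tag(X) == OPEN:
            self.k[X] = min(self.k[X], h_new)
        else:  # CLOSED
            self.k[X] = min(self.h[X], h_new)
        self.h[X] = h_new
        self.t[X] = OPEN
        self.open.add(X)

    def modify_cost(self, X):
        # MODIFY-COST: the caller has already changed its cost function for an arc touching X;
        # re-open X so the change propagates on subsequent PROCESS-STATE calls.
        if self.tag(X) == CLOSED:
            self.insert(X, self.h[X])

    def process_state(self):
        # PROCESS-STATE: expand the minimum-key state, applying the LOWER / RAISE rules above.
        if not self.open:
            return -1
        X = min(self.open, key=lambda s: self.k[s])
        k_old = self.k[X]
        self.open.discard(X)
        self.t[X] = CLOSED
        c, h, b = self.cost, self.h, self.b
        if k_old < h[X]:                      # RAISE state: first try to reduce h(X) via a neighbor
            for Y in self.succ(X):
                if self.tag(Y) != NEW and h[Y] <= k_old and h[X] > h[Y] + c(Y, X):
                    b[X] = Y
                    h[X] = h[Y] + c(Y, X)
        if k_old == h[X]:                     # LOWER state: propagate cost reductions to neighbors
            for Y in self.succ(X):
                if (self.tag(Y) == NEW
                        or (b.get(Y) == X and h[Y] != h[X] + c(X, Y))
                        or (b.get(Y) != X and h[Y] > h[X] + c(X, Y))):
                    b[Y] = X
                    self.insert(Y, h[X] + c(X, Y))
        else:                                 # still a RAISE state: propagate cost increases
            for Y in self.succ(X):
                if self.tag(Y) == NEW or (b.get(Y) == X and h[Y] != h[X] + c(X, Y)):
                    b[Y] = X
                    self.insert(Y, h[X] + c(X, Y))
                elif b.get(Y) != X and h[Y] > h[X] + c(X, Y):
                    self.insert(X, h[X])      # X may later offer Y a cheaper route; re-open X
                elif (b.get(Y) != X and h[X] > h[Y] + c(X, Y)
                        and self.tag(Y) == CLOSED and h[Y] > k_old):
                    self.insert(Y, h[Y])      # Y might offer X a cheaper route; re-examine Y
        return min((self.k[s] for s in self.open), default=-1)

In use, the goal is seeded with insert(G, 0), process_state() is called until the robot's current state is CLOSED, the robot follows the backpointers b one step at a time, and whenever a sensed arc cost disagrees with the map the caller updates its cost function and calls modify_cost() on the affected state before replanning.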

D* Example (1/16)
The robot moves in 8 directions; in the figure, cells X2 through X9 surround the robot's cell X1.
The arc cost values c(X, Y) are small for clear cells and prohibitively large for obstacle cells:
Horizontal/vertical traversal: free cell (e.g. c(X1,X2)) = 1; obstacle cell (e.g. c(X1,X8)) = 10000
Diagonal traversal: free cell (e.g. c(X1,X3)) = 1.4; obstacle cell (e.g. c(X1,X9)) =
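A plausible cost function for this grid, as a sketch: the obstacle set and the penalty applied to obstacle moves beyond the stated horizontal/vertical case of 10000 are assumptions, not values given on the slide.

DIAG = 1.4      # diagonal step cost between free cells
STEP = 1.0      # horizontal/vertical step cost between free cells
BIG = 10000     # prohibitively large cost for moves into or out of obstacle cells (assumption beyond the h/v case)

def make_cost(obstacles):
    # Return c(X, Y) for an 8-connected grid; states are (row, col) tuples.
    def c(X, Y):
        if X in obstacles or Y in obstacles:
            return BIG
        (r1, c1), (r2, c2) = X, Y
        return DIAG if (r1 != r2 and c1 != c2) else STEP
    return c

def neighbors8(X):
    # The 8 cells surrounding X (grid-boundary checks omitted in this sketch).
    r, c = X
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]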

D* Example (2/16) through (16/16): step-by-step grid figures showing the D* planning and replanning process (figures not reproduced in this transcript).

D* Performance
Optimal
Complete
More efficient than the A* replanner in expansive and complex environments
– Local changes in the world do not impact the path much
– Most costs to the goal remain the same
– It avoids the high computational costs of backtracking

Applications of D*
Planetary exploration (Mars)
Search and rescue
Financial markets
Artificial intelligence: graph search problems

Focussed D*
Introduces a focussing heuristic g(X, R): an estimate of the path cost from the robot's location R to X
f(X, R) = k(X) + g(X, R) is the estimated cost of a path from R through X to the goal G
Instead of k(X), the OPEN list is sorted by biased function values f_B(X, R_i) = f(X, R_i) + d(R_i, R_0), where d(R_i, R_0) is the accrued bias function
– Focussed D* assumes that the robot generally moves only a little after it discovers discrepancies
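A tiny sketch of the biased key computation under the notation above; g_focus (the focussing heuristic g) and d (the accrued bias function) are assumed callables maintained elsewhere.

def biased_key(X, R_i, R_0, k, g_focus, d):
    # f_B(X, R_i) = f(X, R_i) + d(R_i, R_0), where f(X, R_i) = k(X) + g(X, R_i)
    f = k[X] + g_focus(X, R_i)    # estimated cost of a path from R_i through X to the goal
    return f + d(R_i, R_0)        # accrued bias keeps keys from different robot positions comparable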

Performance of Focussed D*
Offline time = CPU time required to compute the initial path from the goal to the robot
Online time = CPU time required for all replanning
FD*F = Focussed D* with full initialization
FD*M = Focussed D* with minimal initialization

Source Citation
Atkar, P. N. and V. S. Lee Jr. (2001). "D* Algorithm." Presentation slides, Sensor-Based Robot Motion Planning, CMU, October 2001.
Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
Sensor-based robot motion planning lecture (2002).
Stentz, A. (1994). "Optimal and efficient path planning for partially-known environments." Proceedings of the IEEE International Conference on Robotics and Automation, May 1994.
Stentz, A. (1995). "The focussed D* algorithm for real-time replanning." Proceedings of the International Joint Conference on Artificial Intelligence, August 1995.