Properties of Heuristics that Guarantee A* Finds Optimal Paths
Robert Holte

Best-first Search
- Open list: nodes reached but not yet expanded
- Closed list: nodes that have been expanded
- Choose the lowest-cost node on the Open list
- Add it to Closed; add its successors to Open
- Stop when the Goal is first removed from Open
- Dijkstra: cost f(N) = g(N) = distance from start
- A*: cost f(N) = g(N) + h(N)
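A minimal sketch of this loop in Python (the graph representation, the successors callback, and the node names are assumptions, not from the talk). With the default h = 0 the cost is f(N) = g(N), i.e. Dijkstra; with a heuristic it is A*. Note that this simple version never re-opens Closed nodes, which is exactly the behaviour the next slides show can go wrong.

```python
import heapq

def best_first_search(start, goal, successors, h=lambda n: 0):
    """Best-first search with cost f(N) = g(N) + h(N).

    successors(n) yields (neighbor, edge_cost) pairs.
    With h == 0 this is Dijkstra's algorithm; with a heuristic it is A*.
    (Illustrative sketch only.)
    """
    open_list = [(h(start), 0, start, [start])]     # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # lowest-cost node on Open
        if node == goal:                             # stop when Goal first leaves Open
            return g, path
        if node in closed:                           # closed nodes are never re-opened here
            continue
        closed.add(node)                             # expanded: move it to Closed
        for nxt, cost in successors(node):
            heapq.heappush(open_list, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```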

A* must re-open closed nodes
OPEN: (S,70)   CLOSED: (empty)
[Example graph figure: start S, nodes A, B, C, D, goal G, with edge costs, heuristic values h = 20, 60, 70, 110, and an f = 110 subtree]

A* must re-open closed nodes
OPEN: (A,120), (B,40)   CLOSED: (S,70)

A* must re-open closed nodes
OPEN: (A,120), (C,110)   CLOSED: (S,70), (B,40)

A* must re-open closed nodes
OPEN: (A,120), (D,110)   CLOSED: (S,70), (B,40), (C,110)

A* must re-open closed nodes
OPEN: (A,120), (G,140), (subtree with f=110)   CLOSED: (S,70), (B,40), (C,110), (D,110)

A* must re-open closed nodes
OPEN: (A,120), (G,140)   CLOSED: (S,70), (B,40), (C,110), (D,110), …

A* must re-open closed nodes
OPEN: (G,140), (C,90)   CLOSED: (S,70), (B,40), (C,110), (D,110), …, (A,120)

Today's Question
When a node is first removed from Open, under what conditions are we guaranteed that this path to the node is optimal?
- Dijkstra: all edge weights are non-negative
- A*: the heuristic must have certain properties

Optimal Path to the goal is the first off the Open list
- S-N-G is optimal; N is on Open
- The other path to the goal on Open, of cost P, is suboptimal
- g*(N) + h*(N) < P  ⇒  h*(N) < P - g*(N)
[Figure: start S, node N, goal G; g*(N), h*(N), and a suboptimal path of cost P]

Admissible Heuristic
- Require lower cost than P:
  g*(N) + h(N) < P  ⇔  h(N) < P - g*(N)  ⇐  h(N) ≤ h*(N)   (because h*(N) < P - g*(N))
- A heuristic is admissible if h(N) ≤ h*(N) for all N.
- Admissible ⇒ first path to the goal off Open is optimal
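As a quick illustration (not from the talk), a one-line admissibility check in Python, assuming the heuristic h and the true goal distances h_star are given as dictionaries keyed by node:

```python
def is_admissible(h, h_star):
    """True iff h(N) <= h*(N) for every node N (dict representation assumed)."""
    return all(h[n] <= h_star[n] for n in h_star)

# e.g. is_admissible({"S": 60, "A": 110}, {"S": 70, "A": 120}) -> True  (made-up values)
```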

Optimal Path to X is the first off the Open list, for all X
- S-N-X is optimal; N is on Open
- The other path to X on Open, of cost P, is suboptimal
- g*(N) + c(N,X) < P  ⇒  c(N,X) < P - g*(N)
[Figure: start S, node N, node X; g*(N), c(N,X), and a suboptimal path of cost P]

Consistent Heuristic
- Require lower cost than P + h(X):
  g*(N) + h(N) < P + h(X)  ⇔  h(N) - h(X) < P - g*(N)  ⇐  h(N) - h(X) ≤ c(N,X)   (because c(N,X) < P - g*(N))
- A heuristic is consistent if h(N) ≤ c(N,X) + h(X) for all X and all N.
- Consistent ⇒ first path to X off Open is optimal, for all X
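Analogously, a sketch of a consistency check, assuming c is a dictionary of optimal costs c(N, X) keyed by node pairs (the representation is an assumption, not from the talk):

```python
def is_consistent(h, c):
    """True iff h(N) <= c(N, X) + h(X) for every pair (N, X) with a known cost."""
    return all(h[n] <= cost + h[x] for (n, x), cost in c.items())
```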

Transforming heuristics into edge weights
Aim: replace the given edge weights and heuristic values with a set of edge weights (and NO heuristic) so that Dijkstra costs on the new graph are identical to A* costs on the given graph + heuristic.
[Figure: path S → A → B with edge costs a and b; A* costs: h(S) at S, a + h(A) at A, a + b + h(B) at B]

Transformation - goal
[Figure, top: S → A → B with edge costs a, b; A* costs h(S), a + h(A), a + b + h(B)]
[Figure, bottom: S → A → B with edge costs still to be determined (?, ?); required Dijkstra costs h(S), a + h(A), a + b + h(B)]

Transformation (1)
[Figure: in the transformed graph, the start S is given cost h(S); the two edge costs are still to be determined. Required Dijkstra costs: h(S), a + h(A), a + b + h(B)]

Transformation (2)
[Figure: edge S → A is given weight a + h(A) - h(S); edge A → B still to be determined. Required Dijkstra costs: h(S), a + h(A), a + b + h(B)]

Transformation (3)
[Figure: edge S → A has weight a + h(A) - h(S); edge A → B has weight b + h(B) - h(A). Dijkstra costs: h(S), a + h(A), a + b + h(B)]

Transformation - general
- The order in which nodes come off the Open list using Dijkstra on the transformed graph is identical to the order using A* on the original graph + heuristic.
- An edge N → X of cost c(N,X), used with the heuristic, is transformed into an edge N → X of cost c(N,X) - h(N) + h(X), used with no heuristic.
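A small Python sketch of this reweighting, with a sanity check on the S → A → B chain from the slides; the concrete numbers for a, b, and h are made up for illustration:

```python
def transform_weights(edges, h):
    """Reweight every edge (n, x): c'(n, x) = c(n, x) - h(n) + h(x).
    edges: dict {(n, x): cost}; h: dict {node: heuristic} (assumed representation)."""
    return {(n, x): c - h[n] + h[x] for (n, x), c in edges.items()}

# S -> A -> B with a = 3, b = 4 and a consistent heuristic h (made-up values):
edges = {("S", "A"): 3, ("A", "B"): 4}
h = {"S": 6, "A": 4, "B": 0}
new = transform_weights(edges, h)   # {("S","A"): 1, ("A","B"): 0} -- all non-negative

# A* cost of B on the original graph is a + b + h(B) = 7; Dijkstra cost of B on the
# transformed graph, with S given initial cost h(S), is the same:
assert h["S"] + new[("S", "A")] + new[("A", "B")] == 3 + 4 + h["B"] == 7
```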

Local Consistency
- If edge weights are non-negative, the first path to any node Z that Dijkstra takes off Open is an optimal path to Z.
- Non-negative edge weights require: for all N and all successors X of N,
  0 ≤ c(N,X) - h(N) + h(X)  ⇔  h(N) ≤ c(N,X) + h(X)
- A heuristic is locally consistent if h(N) ≤ c(N,X) + h(X) for all N and all successors X of N.
- Locally consistent ⇒ consistent
[Figure: transformed edge N → X with weight c(N,X) - h(N) + h(X)]
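In code, local consistency is exactly the statement that every transformed edge weight is non-negative; a minimal check under the same assumed dictionary representation as above:

```python
def is_locally_consistent(edges, h):
    """True iff h(N) <= c(N, X) + h(X) for every edge (N, X),
    i.e. every transformed weight c(N, X) - h(N) + h(X) is >= 0."""
    return all(c - h[n] + h[x] >= 0 for (n, x), c in edges.items())
```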

Monotonicity
- With Dijkstra and non-negative edge weights, cost cannot decrease along a path, since it is just the sum of the edge weights along the path.
- Because A* with a consistent heuristic is equivalent to Dijkstra with non-negative edge weights, it follows that A* costs along a path can never decrease if the heuristic is consistent.
[Figure: path S → A → B; A* cost f(S) ≤ f(A) ≤ f(B)]

Admissibility ⇏ Monotonicity
Along the path S-A-C in our example, the f-values are not monotonically non-decreasing.
(Example graph from the "A* must re-open closed nodes" slides.)

Enforced monotonicity
- Monotonicity can be enforced along a path by using the parent's f-value whenever it is greater than the child's f-value.
- This is valid if h is admissible, because the f-values on a path never overestimate the path's true length.
- But this does not solve the problem of having to re-open closed nodes in our example.
[Figure: a child with f = 4 uses the value f = 8 from its parent]
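A sketch of this rule (often called "pathmax", though the talk does not use that name) in Python, with made-up numbers matching the slide's figure:

```python
def enforced_f(g_child, h_child, f_parent):
    """Enforce monotone f along a path: use the parent's f-value
    whenever it exceeds the child's own g + h."""
    return max(g_child + h_child, f_parent)

# The child's own f would be 4, but it inherits f = 8 from its parent:
assert enforced_f(g_child=1, h_child=3, f_parent=8) == 8
```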

Summary of definitions
- An admissible heuristic never overestimates the distance to the goal.
- A consistent heuristic obeys a kind of triangle inequality.
- With a locally consistent heuristic, h does not decrease faster than g increases.
- Monotonicity: costs along a path never decrease.

Summary of Positive Results
- Consistent ⇒ locally consistent
- Consistent ⇒ monotonicity
- Consistent ⇒ admissible
- Consistent ⇒ first path to X off Open is optimal, for all X
- Admissible ⇒ first path to the Goal off Open is optimal (correctness of the A* stopping condition)

Summary of Negative Results
- Admissible ⇏ monotonicity
- Admissible ⇏ consistent
- Admissible ⇏ first path to X off Open is optimal, for all X