Heuristic Search Andrea Danyluk September 16, 2013

Announcements
- Programming Assignment 1: Search – in progress.
- Programming Assignment 0: Tutorial – pick up your graded assignment, if you haven’t already.
- Paper responses – turn in before you leave.
- This lecture contains extra (hidden) slides that you might find interesting/helpful.

Today
- A bit more on A*
- Heuristics
- “Robust Bidirectional Search via Heuristic Improvement”

A* Search
- A* orders nodes by the sum f(n) = g(n) + h(n)
  - g(n) is the backward cost (from the start node to n)
  - h(n) is the forward cost (an estimate from n to the closest goal)
- Optimality requirement for tree search (avoids cycles, but keeps no other “explored” data structure): the heuristic must be admissible
  - It never overestimates the cost to the goal
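
The f = g + h ordering above can be sketched in code. This is a generic outline, not from the lecture; the function names and the explored-cost table are illustrative:

```python
import heapq

def a_star(start, is_goal, successors, h):
    """Minimal A* with an explored-cost table.

    successors(state) returns (step_cost, next_state) pairs and
    h(state) estimates the remaining cost; both are caller-supplied.
    Returns the cost of the cheapest solution found, or None.
    """
    frontier = [(h(start), 0, start)]   # (f, g, state), ordered by f = g + h
    best_g = {start: 0}                 # cheapest known g for each state
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if is_goal(state):
            return g
        if g > best_g.get(state, float("inf")):
            continue                    # stale queue entry; skip it
        for cost, nxt in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))
    return None
```

For example, `a_star(0, lambda s: s == 7, lambda s: [(1, s + 1), (1, s - 1)], lambda s: abs(7 - s))` walks the number line with a consistent heuristic and returns 7.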

A* Graph Search Gone Wrong
[Figure: a small graph over nodes a, b, c, d and goal g, with heuristic values h = 0, 1, 2, 4, and 1 annotated on its nodes.]
Is the heuristic admissible? What happens? [Adapted from CS 188 UC Berkeley]

A* Search
- A* orders nodes by the sum f(n) = g(n) + h(n)
  - g(n) is the backward cost (from the start node to n)
  - h(n) is the forward cost (an estimate from n to the closest goal)
- Optimality requirement for graph search (keeps track of “explored” states): the heuristic must be consistent
  - If n′ is a successor of n generated by action a, then h(n) ≤ c(n, a, n′) + h(n′)
  - Equivalently: if an action has cost c, taking it can cause a drop in heuristic of at most c
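
The consistency inequality can be checked mechanically on a small explicit graph. The graph and heuristic values below are made up for illustration:

```python
def is_consistent(h, edges):
    """Check h(n) <= c(n, a, n') + h(n') for every listed edge.

    edges holds (state, cost, successor) triples; h maps each state
    to its heuristic value.
    """
    return all(h[n] <= c + h[n2] for n, c, n2 in edges)

# Hypothetical 4-state chain s -> a -> b -> g, with goal g.
edges = [("s", 1, "a"), ("a", 2, "b"), ("b", 1, "g")]
h_good = {"s": 4, "a": 3, "b": 1, "g": 0}  # consistent everywhere
h_bad = {"s": 4, "a": 3, "b": 0, "g": 0}   # admissible here, but the
                                           # a -> b step drops h by 3 > cost 2
```

`h_bad` shows that admissibility alone is not enough: every value underestimates the true cost to g, yet the inequality fails on the edge a → b.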

Lemma. If h is consistent, then the values of f(n) along any path are nondecreasing.
Proof: Let n′ be a successor of n generated by action a. Then g(n′) = g(n) + c(n, a, n′), so
f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n),
by the definition of consistency.

Theorem. When A* selects a node for expansion (and marks it “explored”), the optimal path to that node has been found.
Proof: Assume the contrary: some node n′ is selected for expansion, but the optimal path to n′ has not been found. The frontier separates the explored region from the rest of the graph, so some node p on the optimal path to n′ is currently on the frontier. Since n′ was selected for expansion rather than p, f(n′) ≤ f(p). Let g*(n′) be the cost of the optimal path to n′ (the one through p), and f*(n′) = g*(n′) + h(n′). If the path found is suboptimal, then g(n′) > g*(n′). Because the heuristic is consistent, f values are nondecreasing along the optimal path (by the Lemma), so f*(n′) ≥ f(p). But h(n′) is the same no matter how n′ is reached, so f(n′) = g(n′) + h(n′) > g*(n′) + h(n′) = f*(n′) ≥ f(p). Hence f(n′) > f(p), a CONTRADICTION.

Search Demos

Heuristics
- The closer h is to the true cost to the goal, the better A* will perform.
- Better heuristic functions can take more time to compute: we trade heuristic computation time for search time.
- How do we design admissible/consistent heuristics? How do we make them efficient to compute?

Generating Heuristics
Calculate the cost to the goal for a relaxed version of the problem.
- Eight-puzzle (or any sliding-tile puzzle):
  - Count the moves of tiles without taking into account any tiles in the way (Manhattan distance).
  - Count the number of tiles out of place, as if you could lift each tile and place it directly, without following the rules for movement.
- Path-planning problems: use distance “as the crow flies”.
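
Both sliding-tile relaxations above are easy to sketch directly; here `state` and `goal` are assumed to be flat tuples with 0 standing for the blank:

```python
def misplaced_tiles(state, goal):
    """Tiles-out-of-place relaxation: pretend each tile can be lifted
    straight to its goal cell.  The blank (0) is not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """Manhattan-distance relaxation: slide tiles through one another,
    summing each tile's horizontal + vertical distance to its goal."""
    goal_pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal_pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```

Both functions are admissible for the eight-puzzle, and Manhattan distance dominates the misplaced-tile count since every out-of-place tile is at distance at least 1.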

Fifteen Puzzle
- Random 15-puzzle instances were first solved optimally by IDA* using Manhattan distance (Korf, 1985).
- Optimal solution lengths average 53 moves; 400 million nodes are generated on average.
- The Twenty-Four Puzzle is a significantly harder problem. How do we solve it?
[This slide and much of the following adapted from Rich Korf]
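
IDA* itself is short. This is a generic sketch, not Korf's implementation: a depth-first search bounded by f = g + h, where the bound grows to the smallest f value that overflowed the previous pass:

```python
FOUND = object()  # sentinel distinguishing "goal reached" from an f-bound

def ida_star(start, is_goal, successors, h):
    """Iterative-deepening A*: bounded DFS, restarting with the
    smallest f = g + h value that exceeded the last bound."""
    result = {}

    def search(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f                      # report how far we overshot
        if is_goal(state):
            result["cost"] = g
            return FOUND
        minimum = float("inf")
        for cost, nxt in successors(state):
            if nxt in path:               # avoid cycles on the current path
                continue
            t = search(nxt, g + cost, bound, path | {nxt})
            if t is FOUND:
                return FOUND
            minimum = min(minimum, t)
        return minimum

    bound = h(start)
    while True:
        t = search(start, 0, bound, frozenset([start]))
        if t is FOUND:
            return result["cost"]
        if t == float("inf"):
            return None                   # search space exhausted
        bound = t
```

Because the depth-first pass stores only the current path, memory stays linear in the solution depth, which is what made 15-puzzle instances tractable.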

Limitations of Manhattan Distance
- In fact, tiles interfere with each other.
- We want to account for these interactions as efficiently as possible.

Linear Conflict
[Figure sequence: tiles 3 and 1 sit in their shared goal row, but in reversed order. One tile must step out of the row to let the other pass, then step back in.]
Linear Conflict adds two moves to Manhattan Distance (Hansson, Mayer, and Yung, 1992).
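
One way to compute the extra moves for a single row is sketched below; `row_conflict_penalty` is a hypothetical name, and the blank is assumed not to appear in `goal_row`:

```python
def row_conflict_penalty(row, goal_row):
    """Extra moves from linear conflicts within one row.

    Tiles that are already in their goal row but whose goal columns
    are out of order block each other.  The fewest tiles that must
    step out of the row is the tile count minus the longest increasing
    subsequence (LIS) of goal columns; each detour costs 2 extra moves.
    """
    goal_col = {tile: c for c, tile in enumerate(goal_row)}
    cols = [goal_col[t] for t in row if t in goal_col]
    if not cols:
        return 0
    lis = [1] * len(cols)                 # O(n^2) LIS; puzzle rows are tiny
    for i in range(len(cols)):
        for j in range(i):
            if cols[j] < cols[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return 2 * (len(cols) - max(lis))
```

On the slide's example, a row holding (3, 1) with goal order (1, 3) yields a penalty of 2; a fully reversed three-tile row yields 4, since only two tiles need to leave the row.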

More Complex Tile Interactions
[Figure: a position whose Manhattan distance is 20, but which requires 28 moves, counting all tile moves.]

Pattern Database Heuristics (Culberson and Schaeffer, 1996)
- Precompute the number of moves needed to place a chosen subset (“pattern”) of the tiles, for every possible configuration of those tiles.
- A pattern database is a complete set of such positions, with the associated number of moves.
- E.g., an 8-tile pattern database for the Fifteen Puzzle contains 519 million entries.

Additive Pattern Databases If no tile belongs to more than one pattern, then we can add their heuristic values. Manhattan distance is a special case of this, where each pattern contains a single tile.
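
Given disjoint patterns, the combined heuristic is just a sum of table lookups. In the sketch below the databases are tiny made-up dicts standing in for real precomputed tables:

```python
def additive_pdb_heuristic(state, patterns, databases):
    """Sum the lookups of disjoint pattern databases.

    state: tuple where state[i] is the tile in cell i (0 = blank).
    patterns: list of pairwise-disjoint tile sets, blank excluded.
    databases: parallel list of dicts keyed by the cell positions of a
    pattern's tiles (in sorted tile order), valued by the precomputed
    number of moves of those tiles alone.
    """
    pos = {tile: i for i, tile in enumerate(state)}
    return sum(db[tuple(pos[t] for t in sorted(tiles))]
               for tiles, db in zip(patterns, databases))

# Hypothetical 2x2 board: tiles {1, 2} and {3} form disjoint patterns,
# and the stored move counts (3 and 2) are invented for illustration.
toy_dbs = [{(1, 2): 3}, {(0,): 2}]
```

Admissibility of the sum relies on each database counting only moves of its own pattern's tiles, so no physical move is charged twice.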

Example Additive Databases
- A 7-tile database contains 58 million entries; an 8-tile database contains 519 million entries.
- Here we count only the moves of the tiles in each pattern.

Computing the Heuristic
- 20 moves needed to solve the red tiles
- 25 moves needed to solve the blue tiles
- Overall heuristic is the sum: 20 + 25 = 45 moves

Heuristic Design Exercises (do these at home!)

Heuristic Design Exercises
Jug-A holds 4 gallons and Jug-B holds 3 gallons; both are initially full. Goal: measure out exactly 1 gallon. Possible actions:
- Fill Jug-A
- Fill Jug-B
- Empty Jug-A onto the ground
- Empty Jug-B onto the ground
- Pour Jug-A into Jug-B (until either Jug-A is empty or Jug-B is full)
- Pour Jug-B into Jug-A (until either Jug-B is empty or Jug-A is full)
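
A brute-force breadth-first search gives a ground-truth optimal solution length to check any proposed heuristic against. This sketch assumes a state is a pair (gallons in Jug-A, gallons in Jug-B):

```python
from collections import deque

def jug_solve(cap_a=4, cap_b=3, start=(4, 3), goal_amount=1):
    """BFS over (gallons in A, gallons in B) states; returns the
    fewest actions until either jug holds exactly goal_amount."""
    def successors(s):
        a, b = s
        yield (cap_a, b)                  # fill Jug-A
        yield (a, cap_b)                  # fill Jug-B
        yield (0, b)                      # empty Jug-A onto the ground
        yield (a, 0)                      # empty Jug-B onto the ground
        pour = min(a, cap_b - b)          # pour Jug-A into Jug-B
        yield (a - pour, b + pour)
        pour = min(b, cap_a - a)          # pour Jug-B into Jug-A
        yield (a + pour, b - pour)

    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        s, depth = frontier.popleft()
        if goal_amount in s:
            return depth
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None
```

From (4, 3) this finds a 2-action plan: empty Jug-B, then pour Jug-A into Jug-B, leaving exactly 1 gallon in Jug-A.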