Heuristic Search Thank you to Michael L. Littman, Princeton University, for sharing these slides.

AI in the News “Computers double their performance every 18 months. So the danger is real that they could develop intelligence and take over the world.” Eminent physicist Stephen Hawking, advocating genetic engineering as a way to stay ahead of artificial intelligence. [Newsweek, 9/17/01]

Blind Search Last time we discussed BFS and DFS and talked a bit about how to choose the right algorithm for a given search problem. But, if we know about the problem we are solving, we can be even cleverer…

Revised Template
fringe = {(s0, f(s0))};   /* initial cost */
markvisited(s0);
while (1) {
  if empty(fringe), return failure;
  (s, c) = removemincost(fringe);
  if G(s), return s;
  foreach s' in N(s) {
    if s' in fringe, reduce its cost if f(s') is smaller;
    else if unvisited(s') { fringe U= {(s', f(s'))}; markvisited(s'); }
  }
}
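Read as a loop, this template is just a priority queue keyed on f. Below is a minimal Python sketch of it, assuming, as the slide does, that f is a function of the state alone; best_first_search, successors, and is_goal are illustrative names, not from the slides.

```python
import heapq, itertools

def best_first_search(s0, is_goal, successors, f):
    # Generic best-first template; f = h gives greedy search, f = g + h gives A.
    counter = itertools.count()            # tie-breaker so the heap never compares states
    fringe = [(f(s0), next(counter), s0)]
    best = {s0: f(s0)}                     # best f seen per state; doubles as "visited"
    while fringe:
        cost, _, s = heapq.heappop(fringe)
        if cost > best.get(s, float("inf")):
            continue                       # stale entry, a cheaper copy was pushed later
        if is_goal(s):
            return s
        for s2 in successors(s):
            c2 = f(s2)
            if c2 < best.get(s2, float("inf")):
                best[s2] = c2              # "reduce cost": push a cheaper copy
                heapq.heappush(fringe, (c2, next(counter), s2))
    return None                            # empty fringe means failure
```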

Cost as True Distance [grid diagrams omitted: maze cells labeled with their true distances (1–5) to the goal G]

Some Notation
t(s): minimum (true) distance from s to the goal
f(s): estimated cost used during the search
g(s): steps from the start state to s
h(s): estimated distance from s to the goal (the heuristic)

Compare to Optimal Recall that b is the branching factor and d is the depth of the goal (d = t(s0)). Using true distances as costs in the search algorithm (f(s) = t(s)), how long is the path that gets discovered? How many states get visited during the search?

Greedy True distance would be ideal. Hard to achieve. What if we use some function h(s) that approximates t(s)? f(s) = h(s): expand closest node first.

Approximate Distances We have already seen that if the approximation is perfect, the search is perfect. What if the costs are within ±1 of the true distance, i.e. |h(s) - t(s)| ≤ 1?

Problem with Greedy How does greedy do on the four evaluation criteria? [diagram omitted: h values along a path on which greedy wanders]

Algorithm A Discourage wandering off: f(s) = g(s) + h(s). In words: the estimate of total path cost is the cost so far plus the estimated completion cost. Search along the most promising path (not just the most promising node).

A Behaves Better Only wanders a little. [diagram omitted: f values along the explored paths]

A Theorem If h(s) ≤ t(s) + k (an overestimate bound of k), then the path found by A is no more than k longer than optimal.

A Proof f(goal) is the length of the found path. Every node s on an optimal path has f(s) = g(s) + h(s) ≤ g(s) + t(s) + k = optimal path length + k.
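Filling in the step the slide leaves implicit (a sketch, not verbatim from the slides): A always removes a minimum-f node from the fringe, and until it terminates some node s on an optimal path remains on the fringe with g(s) equal to the optimal cost of reaching s. So when the goal is removed,

\[
\text{found path length} \;=\; f(\text{goal}) \;\le\; f(s) \;=\; g(s) + h(s) \;\le\; g(s) + t(s) + k \;=\; \text{optimal path length} + k .
\]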

How to Ensure Optimality? Let k = 0! That is, the heuristic h(s) must always underestimate the distance to the goal (be optimistic). Such an h(s) is called an “admissible heuristic”. A with an admissible heuristic is known as A*. (Trumpets sound!)
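A minimal Python sketch of A* over an explicit graph follows; the name astar, the (state, step_cost) neighbor interface, and returning the path are illustrative choices, not from the slides.

```python
import heapq, itertools

def astar(start, is_goal, neighbors, h):
    # A*: best-first search with f = g + h and an admissible (never overestimating) h.
    # neighbors(s) yields (next_state, step_cost) pairs; returns (cost, path) or None.
    counter = itertools.count()
    fringe = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, _, g, s, path = heapq.heappop(fringe)
        if g > best_g.get(s, float("inf")):
            continue                          # stale entry
        if is_goal(s):
            return g, path
        for s2, cost in neighbors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(fringe, (g2 + h(s2), next(counter), g2, s2, path + [s2]))
    return None
```

On a grid maze, for instance, neighbors could enumerate the legal adjacent cells with step cost 1, and h could be any admissible distance estimate such as the relaxation-based ones discussed below.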

A* Example [grid diagrams omitted: successive snapshots of cells labeled with f values as A* expands toward the goal G]

Time Complexity A* is optimally efficient for a given heuristic function: no other complete search algorithm would expand fewer nodes. Even a perfect evaluation function could be O(b^d), though. When? Still, the more accurate the heuristic, the better!

Simple Maze [maze diagrams omitted: grids with start S, goal G, and cells labeled with heuristic values]

Relaxation in Maze
A move from (x, y) to (x', y') is illegal if
- |x - x'| > 1, or
- |y - y'| > 1, or
- (x', y') contains a wall.
Otherwise, it is legal.
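One natural relaxation, analogous to the 8-puzzle case below, drops the wall condition; that specific choice is my reading of the slide, not stated on it. With walls ignored, each move can shrink both coordinate gaps by one, so the optimal relaxed cost is the larger coordinate difference, and that value is an admissible heuristic:

```python
def relaxed_maze_h(x, y, gx, gy):
    # Optimal cost to reach the goal (gx, gy) once the wall rule is dropped:
    # every move may reduce |x - gx| and |y - gy| by one each.
    return max(abs(x - gx), abs(y - gy))
```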

Relaxations Are Admissible Why does this work? Any legal path in the full problem is still legal in the relaxation. Therefore, the optimal solution to the relaxation must be no longer than the optimal solution to the full problem.
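The same argument in symbols (a sketch, with h_relaxed(s) denoting the optimal cost of s in the relaxed problem):

\[
h_{\text{relaxed}}(s) \;\le\; \text{optimal cost of } s \text{ in the full problem} \;=\; t(s),
\]

so a heuristic obtained by relaxation never overestimates, which is exactly admissibility.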

Relaxation in 8-puzzle
A tile move from (x, y) to (x', y') is illegal if
- |x - x'| > 1 or |y - y'| > 1, or
- (|x - x'| ≠ 0 and |y - y'| ≠ 0), or
- (x', y') contains a tile.
Otherwise, it is legal.

Two 8-puzzle Heuristics
h1(s): total tiles out of place
h2(s): total Manhattan distance
Note: h1(s) ≤ h2(s), so the latter leads to more efficient search. Both are easy to compute and provide useful guidance.
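A sketch of both heuristics, assuming a state is a length-9 tuple in row-major order with 0 for the blank; the goal layout and representation are assumptions made here for illustration, not fixed by the slides.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)    # assumed goal layout; 0 is the blank

def h1(state):
    # Number of tiles out of place (the blank is not counted).
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    # Total Manhattan distance of the tiles from their goal positions.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```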

Knapsack Example
Optimize total value, subject to a budget of $10B.

Market   Cost ($B)   Value
NY       6           8
LA       5           8
Dallas   3           5
Atl      3           5
Bos      3           4

Knapsack Heuristic State: which markets to include or exclude (some still undecided). Heuristic: consider including markets in order of value/cost; if the cost goes over budget, count the value of a “fractional” purchase of the market that breaks the budget. This is the fractional relaxation.
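A sketch of that bound on the undecided markets, using the table above; the function name and the (cost, value) layout are illustrative.

```python
def fractional_bound(remaining_budget, undecided):
    # Optimistic bound on additional value: greedily buy markets by value/cost,
    # taking a fraction of the first market that would break the budget.
    bound = 0.0
    for cost, value in sorted(undecided, key=lambda m: m[1] / m[0], reverse=True):
        if cost <= remaining_budget:
            remaining_budget -= cost
            bound += value
        else:
            bound += value * remaining_budget / cost    # fractional purchase
            break
    return bound

markets = [(6, 8), (5, 8), (3, 5), (3, 5), (3, 4)]      # (cost, value) from the table
print(fractional_bound(10, markets))                    # optimistic bound, ~16.4
```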

Memory Bounded Just as iterative deepening gives a more memory-efficient version of BFS, we can define IDA* as a more memory-efficient version of A*. Just use DFS with a cutoff on f values, and repeat with a larger cutoff until a solution is found.
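A compact sketch of IDA*: depth-first search cut off at the current f bound, with the bound then raised to the smallest f value that exceeded it. The names and the cycle check along the current path are illustrative choices, not from the slides.

```python
def ida_star(start, is_goal, neighbors, h):
    # IDA*: repeated depth-first searches with an increasing cutoff on f = g + h.
    def dfs(s, g, bound, path):
        f = g + h(s)
        if f > bound:
            return f, None                    # report the f value that broke the cutoff
        if is_goal(s):
            return f, path
        smallest = float("inf")
        for s2, cost in neighbors(s):
            if s2 in path:                    # avoid cycling along the current path
                continue
            t, found = dfs(s2, g + cost, bound, path + [s2])
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):
            return None                       # no solution
```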

What to Learn The A* algorithm: its definition and behavior (finds optimal). How to create admissible heuristics via relaxation.

Homework 2 (partial)
1. Consider the heuristic for Rush Hour of counting the cars blocking the ice cream truck and adding one. (a) Show this is a relaxation by giving conditions for an illegal move and showing what was eliminated. (b) For the board on the next page, show an optimal sequence of boards en route to the goal. Label each board with the f value from the heuristic.
2. Describe an improved heuristic for Rush Hour. (a) Explain why it is admissible. (b) Is it a relaxation? (c) Label the boards from 1b with the f values from your heuristic.

Rush Hour Example