CS 188: Artificial Intelligence Fall 2009 Lecture 3: A* Search 9/3/2009 Dan Klein – UC Berkeley Multiple slides from Stuart Russell or Andrew Moore.



Announcements
- Projects:
  - Project 1 (Search) is out, due next Monday 9/14
  - You don't need to submit answers to the project's discussion questions
  - Use Python 2.5 (on EECS instructional machines); the 2.6 interpreter is backwards compatible, so it is also OK
  - 5 slip days for projects; up to two per deadline
  - Try pair programming, not divide-and-conquer
- Newsgroup:
  - WebNews seems to want your original password (inst is aware); alternatively, use Thunderbird, etc.

Today
- A* Search
- Heuristic Design

Recap: Search
- Search problem:
  - States (configurations of the world)
  - Successor function: a function from states to lists of (state, action, cost) triples; drawn as a graph
  - Start state and goal test
- Search tree:
  - Nodes: represent plans for reaching states
  - Plans have costs (sum of action costs)
- Search algorithm:
  - Systematically builds a search tree
  - Chooses an ordering of the fringe (unexplored nodes)

Example: Pancake Problem Cost: Number of pancakes flipped

Example: Pancake Problem State space graph with costs as weights

General Tree Search
- Action: flip top two (cost: 2); action: flip all four (cost: 4)
- Path to reach goal: flip four, then flip three
- Total cost: 7

Uniform Cost Search
- Strategy: expand the node with the lowest path cost
- The good: UCS is complete and optimal!
- The bad:
  - Explores options in every "direction"
  - No information about goal location
[Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 expanding uniformly from Start toward Goal]
[demo: contours UCS]

Example: Heuristic Function
- Heuristic h(x): the size of the largest pancake that is still out of place
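This heuristic fits in a few lines of Python. The sketch below is illustrative, not course code; in particular, representing a stack as a tuple of pancake sizes from top to bottom is an assumption made here for concreteness.

```python
def pancake_h(state):
    """Heuristic from the slide: the size of the largest pancake still out of place.

    state: tuple of pancake sizes from the top of the stack to the bottom
    (this representation is an assumption for illustration).
    The goal is the fully sorted stack, largest on the bottom.
    """
    goal = tuple(sorted(state))
    # Compare each position against the sorted goal stack.
    out_of_place = [p for p, g in zip(state, goal) if p != g]
    return max(out_of_place, default=0)  # 0 when the stack is already sorted
```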

Best First (Greedy)
- Strategy: expand the node that you think is closest to a goal state
- Heuristic: estimate of distance to the nearest goal for each state
- A common case: best-first takes you straight to the (wrong) goal
- Worst case: like a badly-guided DFS
[demo: contours greedy]

Example: Heuristic Function h(x)

Combining UCS and Greedy
- Uniform-cost orders by path cost, or backward cost g(n)
- Best-first orders by goal proximity, or forward cost h(n)
- A* Search orders by the sum: f(n) = g(n) + h(n)
[Figure: example graph with edge costs and per-node h values; example: Teg Grenager]
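The f(n) = g(n) + h(n) ordering is just a priority-queue tree search. The sketch below is a minimal illustration, not the project's required implementation; the `successors`, `is_goal`, and `h` callback signatures are assumptions made here.

```python
import heapq
import itertools

def a_star(start, successors, is_goal, h):
    """A* tree search: expand nodes in order of f(n) = g(n) + h(n).

    successors(state) yields (next_state, step_cost) pairs.
    h(state) estimates the remaining cost to the nearest goal.
    Returns (path, cost) for a goal, or None if the fringe empties.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares paths
    fringe = [(h(start), 0, next(counter), [start])]  # (f, g, tie, path)
    while fringe:
        f, g, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):  # only stop when we DEQUEUE a goal
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h(nxt), g2, next(counter), path + [nxt]))
    return None
```

With h = 0 for every state, this is exactly uniform cost search; with g dropped from the ordering, it would be greedy best-first.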

When should A* terminate?
- Should we stop when we enqueue a goal? No: only stop when we dequeue a goal
[Figure: graph with nodes S, A, B, G and heuristic values h = 3, 2, 1, 0]

Is A* Optimal?
- What went wrong? The actual cost of the bad goal < the estimated cost of the good goal
- We need estimates to be less than actual costs!
[Figure: S→A cost 1, A→G cost 3, S→G cost 5, with h(S) = 7, h(A) = 6, h(G) = 0 — the overestimate h(A) = 6 makes A* return the cost-5 path instead of the cost-4 path]

Admissible Heuristics
- A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal
- Examples: [figure: heuristic values 4 and 15]
- Coming up with admissible heuristics is most of what's involved in using A* in practice.

Optimality of A*: Blocking
Notation:
- g(n) = cost to node n
- h(n) = estimated cost from n to the nearest goal (heuristic)
- f(n) = g(n) + h(n) = estimated total cost via n
- G*: a lowest-cost goal node
- G: another goal node

Optimality of A*: Blocking
Proof:
- What could go wrong? We'd have to pop a suboptimal goal G off the fringe before G*
- This can't happen:
  - Imagine a suboptimal goal G is on the queue
  - Some node n which is a subpath of G* must also be on the fringe (why?)
  - n will be popped before G, since f(n) ≤ g(G*) < g(G) = f(G) by admissibility

Properties of A*
[Figure: fringe growth for Uniform-Cost vs A*]

UCS vs A* Contours
- Uniform-cost expands equally in all directions
- A* expands mainly toward the goal, but does hedge its bets to ensure optimality
[demo: contours UCS / A*]

Creating Admissible Heuristics
- Most of the work in solving hard search problems optimally is in coming up with admissible heuristics
- Often, admissible heuristics are solutions to relaxed problems, where new actions are available
- Inadmissible heuristics are often useful too (why?)

Example: 8 Puzzle
- What are the states? How many states?
- What are the actions?
- What states can I reach from the start state?
- What should the costs be?

8 Puzzle I
- Heuristic: number of tiles misplaced
- Why is it admissible?
- h(start) = 8
- This is a relaxed-problem heuristic
[Table: average nodes expanded when the optimal path has length 4 / 8 / 12 steps — UCS: 112 / 6,300 / 3.6 x 10^6; TILES expands far fewer]
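The misplaced-tiles heuristic can be sketched as follows. This is an illustration, not the project's code; the row-major tuple encoding, and the test's use of the standard textbook start/goal configuration (blank in the goal's top-left corner), are assumptions.

```python
def misplaced_tiles(state, goal):
    """Number of tiles out of place (the blank, encoded 0, is not counted).

    state, goal: tuples of length 9 listing tiles row by row, 0 = blank
    (this encoding is an assumption for illustration).
    Admissible: every misplaced tile needs at least one move.
    """
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```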

8 Puzzle II
- What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
- Heuristic: total Manhattan distance
- Why admissible?
- h(start) = 3 + 1 + 2 + … = 18
[Table: average nodes expanded when the optimal path has length 4 / 8 / 12 steps — MANHATTAN expands fewer nodes than TILES]
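A sketch of the Manhattan-distance heuristic, under the same assumed row-major tuple encoding as above (illustrative, not course code; the test again assumes the standard textbook start state and a goal with the blank in the top-left corner, for which the per-tile distances sum to 18):

```python
def manhattan(state, goal):
    """Total Manhattan distance of each tile from its goal cell (blank excluded).

    state, goal: tuples of length 9, row-major, 0 = blank
    (this encoding is an assumption for illustration).
    """
    # Precompute each tile's goal (row, col).
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue  # the blank doesn't count
        r, c = divmod(i, 3)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total
```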

8 Puzzle III
- How about using the actual cost as a heuristic?
  - Would it be admissible?
  - Would we save on nodes expanded?
  - What's wrong with it?
- With A*: there is a trade-off between quality of estimate and work per node!

Trivial Heuristics, Dominance
- Dominance: h_a ≥ h_c if ∀n: h_a(n) ≥ h_c(n)
- Heuristics form a semi-lattice:
  - Max of admissible heuristics is admissible
- Trivial heuristics:
  - Bottom of the lattice is the zero heuristic (what does this give us?)
  - Top of the lattice is the exact heuristic
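The "max of admissible heuristics is admissible" point can be sketched as a tiny combinator (illustrative; the `max_heuristic` name is an assumption made here):

```python
def max_heuristic(*heuristics):
    """Pointwise max of heuristics.

    If every h_i is admissible (never overestimates the true cost),
    their pointwise max still never overestimates, and the result
    dominates each individual h_i.
    """
    return lambda state: max(h(state) for h in heuristics)
```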

Other A* Applications
- Pathing / routing problems
- Resource planning problems
- Robot motion planning
- Language analysis
- Machine translation
- Speech recognition
- …
[demo: plan tiny UCS / A*]

Tree Search: Extra Work!
- Failure to detect repeated states can cause exponentially more work. Why?

Graph Search
- In BFS, for example, we shouldn't bother expanding the circled nodes (why?)
[Figure: search tree over states S, a, b, c, d, e, f, h, p, q, r, G with repeated states circled]

Graph Search
- Idea: never expand a state twice
- How to implement:
  - Tree search + list of expanded states (closed list)
  - Expand the search tree node-by-node, but…
  - Before expanding a node, check to make sure its state is new
  - Python trick: store the closed list as a set, not a list
- Can graph search wreck completeness? Why/why not?
- How about optimality?
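The closed-set idea can be sketched as a graph-search variant of A* (a minimal illustration under the same assumed `successors`/`is_goal`/`h` signatures as before, not the project's required code):

```python
import heapq
import itertools

def a_star_graph(start, successors, is_goal, h):
    """A* graph search: never expand a state twice.

    The closed list is stored as a set, so membership checks are O(1).
    Optimal when h is consistent: the first time a state is popped, its
    g value is already optimal, so later copies can be safely skipped.
    """
    closed = set()
    counter = itertools.count()  # tie-breaker so the heap never compares paths
    fringe = [(h(start), 0, next(counter), start, [start])]
    while fringe:
        f, g, _, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, g
        if state in closed:
            continue  # already expanded this state with a better-or-equal g
        closed.add(state)
        for nxt, cost in successors(state):
            if nxt not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(nxt), g2, next(counter), nxt, path + [nxt]))
    return None
```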

Graph Search
- Very simple fix: never expand a state twice
- Can this wreck completeness? Optimality?

Optimality of A* Graph Search
Proof:
- New possible problem: nodes on the path to G* that would have been in the queue aren't, because some worse n' for the same state as some n was dequeued and expanded first (disaster!)
- Take the highest such n in the tree
- Let p be the ancestor of n which was on the queue when n' was expanded
- f(p) ≤ f(n) by consistency (f never decreases along a path)
- f(n) < f(n') because n' is suboptimal
- So p would have been expanded before n'
- So n would have been expanded before n', too
- Contradiction!

Consistency
- Wait, how do we know parents have better f-values than their successors?
- Couldn't we pop some node n, and find its child n' to have a lower f value?
- YES!
- What can we require to prevent these inversions?
- Consistency: h(n) ≤ cost(n, n') + h(n')
- Real cost must always exceed reduction in heuristic
[Figure: nodes A, B, G with an edge cost of 3 and heuristic values illustrating an f inversion]
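On a small finite problem, the consistency condition h(n) ≤ cost(n, n') + h(n') can be checked by brute force over all edges. This helper is purely illustrative (all names and signatures are assumptions made here, not course code):

```python
def is_consistent(h, states, successors, goal_states):
    """Brute-force consistency check on a finite problem.

    Verifies h(goal) = 0 for every goal, and h(n) <= cost(n, n') + h(n')
    for every edge n -> n'. successors(s) yields (s2, cost) pairs.
    """
    if any(h(g) != 0 for g in goal_states):
        return False
    return all(h(s) <= cost + h(s2)
               for s in states
               for s2, cost in successors(s))
```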

Optimality
- Tree search:
  - A* is optimal if the heuristic is admissible (and non-negative)
  - UCS is a special case (h = 0)
- Graph search:
  - A* is optimal if the heuristic is consistent
  - UCS is optimal (h = 0 is consistent)
- Consistency implies admissibility
- In general, natural admissible heuristics tend to be consistent

Summary: A*
- A* uses both backward costs and (estimates of) forward costs
- A* is optimal with admissible heuristics
- Heuristic design is key: often use relaxed problems

Mazeworld Demos

Graph Search, Reconsidered
- Idea: never expand a state twice
- How to implement:
  - Tree search + list of expanded states (closed list)
  - Expand the search tree node-by-node, but…
  - Before expanding a node, check to make sure its state is new
  - Python trick: store the closed list as a set, not a list
- Can graph search wreck completeness? Why/why not?
- How about optimality?

Optimality of A* Graph Search
- Consider what A* does: expands nodes in increasing total f value (f-contours)
- Proof idea: optimal goals have lower f value, so get expanded first
- We're making a stronger assumption than in the last proof… What?

Optimality of A* Graph Search
- Consider what A* does: expands nodes in increasing total f value (f-contours)
- Reminder: f(n) = g(n) + h(n) = cost to n + heuristic
- Proof idea: the optimal goal(s) have the lowest f value, so it must get expanded first
- There's a problem with this argument. What are we assuming is true?
[Figure: nested contours f ≤ 1, f ≤ 2, f ≤ 3]

Course Scheduling
- From the university's perspective:
  - Set of courses {c_1, c_2, …, c_n}
  - Set of rooms/times {r_1, r_2, …, r_n}
  - Each pairing (c_k, r_m) has a cost w_km
- What's the best assignment of courses to rooms?
- States: list of pairings
- Actions: add a legal pairing
- Costs: cost of the new pairing
- Admissible heuristics?