CS 188: Artificial Intelligence Spring 2006


CS 188: Artificial Intelligence Spring 2006 Lecture 4: A* Search (and Friends) 1/26/2006 Dan Klein – UC Berkeley Many slides from either Stuart Russell or Andrew Moore

Announcements
Final exam issues.
Sections, starting 1/30:
101 (M 10am): B0051 Hildebrand
104 (M 4pm): 002 Evans

Questions Why are we reviewing DFS / BFS, etc? Is uniform-cost search Dijkstra’s algorithm?

Today: A* Search, Heuristic Design, Local Search

Problem Graphs vs Search Trees
Each NODE in the search tree is an entire PATH in the problem graph. We almost always construct both on demand – and we construct as little as possible.
(Figure: a small problem graph from start S to goal G, shown side by side with the search tree it induces.)

Uniform Cost Problems
Remember: UCS explores increasing cost contours.
The good: UCS is complete and optimal!
The bad: it explores options in every “direction” and has no information about goal location.
(Figure: concentric contours c ≤ 1, c ≤ 2, c ≤ 3 expanding from Start, with the Goal off to one side.)

Best-First Search
(Figure: the same problem graph, annotated with heuristic values h(n) at each node, from h = 0 at the goal up to h = 12; best-first search repeatedly expands the fringe node with the lowest h.)

Best-First Search
A common case: best-first takes you straight to the (wrong) goal.
Worst case: like a badly-guided DFS. Can explore everything; can get stuck in loops if there is no cycle checking.
Like DFS in completeness (complete on finite state spaces with cycle checking).

Combining Best-First and UCS
Uniform-cost orders by path cost, or backward cost g(n).
Best-first orders by goal proximity, or forward cost h(n).
What happens with each ordering?
A* orders by the sum: f(n) = g(n) + h(n)
(Figure: a chain S – A – B – C – G with step costs on the edges and heuristic values h = 4, 3, 2, 1, 0.)

When should A* terminate?
Should we stop when we enqueue a goal? No: only stop when we dequeue a goal.
(Figure: a small graph in which a goal is enqueued along a more expensive path before the cheaper path to it has been fully explored.)
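The dequeue-only stopping rule can be made concrete with a small Python A* sketch (an illustration only, not the course's project code; the `successors` and `h` callables are assumed interfaces invented for this example):

```python
import heapq

def astar(start, goal_test, successors, h):
    """A* search: expand nodes in order of f(n) = g(n) + h(n).

    `successors(state)` yields (next_state, step_cost) pairs.
    Note: we only stop when a goal is *dequeued*, not when it is
    first enqueued -- a cheaper path to it may still be waiting.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')
```

On a toy graph where a direct edge S→G (cost 5) enqueues the goal before the cheaper route S→A→G (cost 4), the dequeue rule still returns the cheaper path.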

Is A* Optimal?
(Figure: S reaches the goal G directly with cost 5, or via A with cost 1 + 3 = 4; with a heuristic that overestimates at A, A* returns the worse direct path.)
What went wrong? Actual bad goal cost > estimated good goal cost. We need estimates to be less than actual costs!

Admissible Heuristics
A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal.
E.g. Euclidean distance on a map problem.
What is f(n) for a goal node?
Coming up with admissible heuristics is most of what’s involved in using A* in practice.

Optimality of A*: Blocking
Proof: what can go wrong? We would have to pop a suboptimal goal off the fringe queue.
Imagine a suboptimal goal G’ is on the queue. Consider any unexpanded (fringe) node n on a shortest path to the optimal goal G. Since h is admissible, f(n) ≤ f(G) < f(G’), so n will be popped before G’ …
This proof assumed tree search! Where?

Optimality of A*: Contours Consider what A* does: Expands nodes in increasing total f value (f-contours) Optimal goals have lower f value, so get expanded first Holds for graph search as well, but we made a different assumption. What?

Consistency
Wait, how do we know we expand in increasing f value? Couldn’t we pop some node n, and find its child n’ to have higher f value? YES. What do we need to do to fix this?
Consistency: real cost always exceeds the reduction in heuristic, i.e. h(n) − h(n’) ≤ c(n, n’) for every edge (n, n’).
(Figure: a counterexample in which h drops by more than the edge cost, so f decreases along the path.)

UCS vs A* Contours
Uniform-cost expands in all directions. A* expands mainly toward the goal, but does hedge its bets to ensure optimality.
(Figure: roughly circular UCS contours centered on Start, vs. A* contours stretched toward the Goal.)

Properties of A*

              Complete  Optimal  Time      Space
UCS (= BFS*)  Y         Y        O(b^s)*   O(b^s)*
A*            Y         Y        O(b^a)    O(b^a)

*Assume all costs are 1. Assume one goal, and that non-goals have h(n) = g*(G) − a.
(Figure: UCS explores s tiers of the search tree; A* explores only a tiers.)

Admissible Heuristics Most of the work is in coming up with admissible heuristics Good news: usually admissible heuristics are also consistent More good news: inadmissible heuristics are often quite effective (especially when you have no choice)

8-Puzzle I
Heuristic: number of tiles misplaced? Why is it admissible? h(start) = 8.
This is a relaxed-problem heuristic.

Average nodes expanded when the optimal path has length:
        …4 steps  …8 steps  …12 steps
ID      112       6,300     3.6 x 10^6
TILES   13        39        227

8-Puzzle II
What if we had an easier 8-puzzle where any tile could slide any one direction at any time? Heuristic: total Manhattan distance. Why admissible? h(start) = 3 + 1 + 2 + … = 18.

Average nodes expanded when the optimal path has length:
           …4 steps  …8 steps  …12 steps
TILES      13        39        227
MANHATTAN  12        25        73
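The two 8-puzzle heuristics above can be written out directly (a sketch; the state encoding assumed here, a tuple of 9 tiles read row by row with 0 as the blank, is an illustrative choice):

```python
def misplaced_tiles(state, goal):
    """Count tiles not in their goal position (ignores the blank, 0)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """Sum of Manhattan distances of each tile from its goal square."""
    pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not count
        j = pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```

Both are admissible relaxed-problem heuristics, and Manhattan distance dominates misplaced-tiles, which matches the node counts in the table.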

8-Puzzle III How about using the actual cost as a heuristic? Would it be admissible? Would we save on nodes? What’s wrong with it? With A*, trade-off between quality of estimate and work per node!

Trivial Heuristics, Dominance Heuristics form a (semi-)lattice: Max of admissible heuristics is admissible Trivial heuristics Bottom of lattice is the zero heuristic (what does this give us?) Top of lattice is the exact heuristic
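The "max of admissible heuristics is admissible" fact from the lattice slide is one line of code (a sketch; `h1` and `h2` stand for any two admissible heuristics):

```python
def h_max(h1, h2):
    """If h1 and h2 are both admissible, their pointwise max is too,
    and it dominates (is at least as informed as) each one alone."""
    return lambda state: max(h1(state), h2(state))
```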

Course Scheduling From the university’s perspective: Set of courses {c1, c2, … cn} Set of room / times {r1, r2, … rn} Each pairing (ck, rm) has a cost wkm What’s the best assignment of courses to rooms? States: list of pairings Actions: add a legal pairing Costs: cost of the new pairing Admissible heuristics? (Who can think of a cs170 answer to this problem?)

Other A* Applications Machine translation Statistical parsing Speech recognition Robot motion planning (next class) Routing problems (see homework!) Planning problems (see homework!) …

Summary: A* A* uses both backward costs and (estimates of) forward costs A* is optimal with admissible heuristics Heuristic design is key: often use relaxed problems

Local Search Methods Queue-based algorithms keep fallback options (backtracking) Local search: improve what you have until you can’t make it better Generally much more efficient (but incomplete)

Types of Problems
Planning problems: we want a path to a solution (examples?). Usually want an optimal path. Incremental formulations.
Identification problems: we actually just want to know what the goal is (examples?). Usually want an optimal goal. Complete-state formulations. Iterative improvement algorithms.

Example: N-Queens Start wherever, move queens to reduce conflicts Almost always solves large n-queens nearly instantly How is this different from best-first search?

Hill Climbing
Simple, general idea: start wherever; always choose the best neighbor; if no neighbors have better scores than the current state, quit.
Why can this be a terrible idea? Complete? Optimal? What’s good about it?
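"Start wherever, always choose the best neighbor" can be sketched on the n-queens example from two slides back (an illustrative sketch, not the course's code; the representation assumed here is one queen per column, with a neighbor moving a single queen within its column):

```python
import random

def conflicts(cols):
    """Number of attacking queen pairs; one queen per column, cols[c] = row."""
    n = len(cols)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if cols[a] == cols[b] or abs(cols[a] - cols[b]) == b - a)

def hill_climb(n, max_steps=10_000, rng=random):
    """Steepest-ascent hill climbing: start at a random placement and
    repeatedly move to the best neighbor; stop at a solution or when
    no neighbor improves (a local minimum)."""
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        cur = conflicts(cols)
        if cur == 0:
            return cols
        best, best_move = cur, None
        for c in range(n):
            old = cols[c]
            for r in range(n):
                if r == old:
                    continue
                cols[c] = r
                v = conflicts(cols)
                if v < best:
                    best, best_move = v, (c, r)
            cols[c] = old
        if best_move is None:  # local minimum: hill climbing gives up here
            return None
        c, r = best_move
        cols[c] = r
    return None
```

A single run often gets stuck in a local minimum, which is exactly the "terrible idea" failure mode; with random restarts it solves 8-queens almost immediately.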

Hill Climbing Diagram Random restarts? Random sideways steps?

Simulated Annealing Idea: Escape local maxima by allowing downhill moves But make them rarer as time goes on

Simulated Annealing
Theoretical guarantee: if T is decreased slowly enough, the stationary distribution concentrates on optimal states, so the algorithm will converge to an optimal state! Is this an interesting guarantee?
Sounds like magic, but reality is reality: the more downhill steps you need to escape a local optimum, the less likely you are to ever make them all in a row.
People think hard about ridge operators which let you jump around the space in better ways.
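"Allow downhill moves, but make them rarer as time goes on" is usually implemented with the Metropolis acceptance rule exp(−ΔE/T) and a decaying temperature. A minimal sketch (the function names and the geometric cooling schedule are illustrative choices, not prescribed by the slides):

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=1.0, cooling=0.995,
                        steps=20_000, rng=random):
    """Minimize `energy`: always accept improving moves, and accept a
    worsening move of size dE with probability exp(-dE / T); T decays
    geometrically, so bad moves become rarer over time."""
    best = cur = state
    t = t0
    for _ in range(steps):
        nxt = neighbor(cur, rng)
        d = energy(nxt) - energy(cur)
        if d <= 0 or rng.random() < math.exp(-d / t):
            cur = nxt
            if energy(cur) < energy(best):
                best = cur
        t *= cooling
    return best
```

For example, minimizing (x − 3)² with Gaussian proposal moves settles near x = 3 even though early iterations wander uphill freely.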

Beam Search Like greedy search, but keep K states at all times: Variables: beam size, encourage diversity? The best choice in MANY practical settings Complete? Optimal? Why do we still need optimal methods? Greedy Search Beam Search

Genetic Algorithms Genetic algorithms use a natural selection metaphor Like beam search (selection), but also have pairwise crossover operators, with optional mutation Probably the most misunderstood, misapplied (and even maligned) technique around!

Example: N-Queens Why does crossover make sense here? When wouldn’t it make sense? What would mutation be? What would a good fitness function be?

Continuous Problems Placing airports in Romania States: (x1,y1,x2,y2,x3,y3) Cost: sum of squared distances to closest city

Gradient Methods
How to deal with continuous (and therefore infinite) state spaces?
Discretization: bucket ranges of values, e.g. force integral coordinates.
Continuous optimization, e.g. gradient ascent.
More on this next class…
Image from vias.org
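For the airport-placement example two slides back, a gradient step is easy to write out: under the sum-of-squared-distances cost, each airport feels a pull toward the cities currently closest to it (a sketch under that cost model; the learning rate and data layout are arbitrary choices for illustration):

```python
def airport_step(airports, cities, lr=0.1):
    """One gradient-descent step on f = sum over cities of the squared
    distance to the closest airport. For the cities currently assigned
    to airport a, the gradient contribution is 2 * (a - city)."""
    grads = [[0.0, 0.0] for _ in airports]
    for cx, cy in cities:
        # assign the city to its nearest airport
        k = min(range(len(airports)),
                key=lambda i: (airports[i][0] - cx) ** 2
                            + (airports[i][1] - cy) ** 2)
        ax, ay = airports[k]
        grads[k][0] += 2 * (ax - cx)
        grads[k][1] += 2 * (ay - cy)
    # move each airport against its gradient
    return [(ax - lr * gx, ay - lr * gy)
            for (ax, ay), (gx, gy) in zip(airports, grads)]
```

Iterating this drives each airport to the centroid of its city cluster, a local optimum of the continuous cost.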

Next Class Robot motion planning!