CS 343H: Artificial Intelligence

CS 343H: Artificial Intelligence Week 2b: Informed Search

Announcements
- Are you registered in the course?
- Reading responses: good overall.
- PS1 posted, due in 2 weeks. This one will take more time.
- Next Tuesday: in-class review and PS1 help. Come prepared with questions on PS1!

Quiz: review. Which are true about DFS? (b is the branching factor, m is the depth of the search tree.)
- At any given time during the search, the number of nodes on the fringe can be no larger than b·m.
- At any given time during the search, the number of nodes on the fringe can be as large as b^m.
- The number of nodes expanded throughout the entire search can be no larger than b·m.
- The number of nodes expanded throughout the entire search can be as large as b^m.

Quiz: review. Which are true about BFS? (b is the branching factor, s is the depth of the shallowest solution.)
- At any given time during the search, the number of nodes on the fringe can be no larger than b·s.
- At any given time during the search, the number of nodes on the fringe can be as large as b^s.
- The number of nodes considered throughout the entire search can be no larger than b·s.
- The number of nodes considered throughout the entire search can be as large as b^s.

Quiz: search execution Run DFS, BFS, and UCS:

Today: informed search — heuristics, greedy search, A* search, graph search.

Recap: Search
- Search problem: states (configurations of the world); actions and costs; successor function: a function from states to lists of (state, action, cost) triples (world dynamics); start state and goal test.
- Search tree: nodes represent plans for reaching states; plans have costs (sum of action costs).
- Search algorithm: systematically builds a search tree; chooses an ordering of the fringe (unexplored nodes); optimal if it finds least-cost plans.
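To make this formulation concrete, here is a minimal Python sketch of the interface a search problem might expose. The class and method names (SearchProblem, get_start_state, is_goal_state, get_successors) are illustrative placeholders, not the course's project API.

```python
class SearchProblem:
    """Minimal search-problem interface sketch: states, a successor
    function, a start state, and a goal test (names are hypothetical)."""

    def get_start_state(self):
        """Return the start state."""
        raise NotImplementedError

    def is_goal_state(self, state):
        """Return True if `state` satisfies the goal test."""
        raise NotImplementedError

    def get_successors(self, state):
        """Return a list of (next_state, action, step_cost) triples."""
        raise NotImplementedError
```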

Example: Pancake Problem Cost: Number of pancakes flipped

Example: Pancake Problem — state space graph with costs as weights. (Figure: graph of stack configurations with flip costs on the edges.)

General Tree Search. Example expansions: flip top two (cost 2); flip all four (cost 4). Path to reach goal: flip four, then flip three. Total cost: 7.

The one queue
- All these search algorithms are the same except for fringe strategies.
- Conceptually, all fringes are priority queues (i.e., collections of nodes with attached priorities).
- Practically, for DFS and BFS, you can avoid the log(n) overhead of an actual priority queue by using stacks and queues.
- You can even code one implementation that takes a variable queuing object (see the sketch below).
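As a rough illustration of that last point, the sketch below writes the search loop once and lets the fringe object (plus an optional priority function) determine the algorithm: a stack gives DFS, a FIFO queue gives BFS, and a priority queue gives UCS, greedy, or A* depending on the priority. It assumes the hypothetical SearchProblem interface sketched earlier; all names are illustrative.

```python
import heapq
from collections import deque


class Stack:                                   # LIFO fringe -> DFS
    def __init__(self):
        self.data = []
    def push(self, item, priority=None):
        self.data.append(item)
    def pop(self):
        return self.data.pop()
    def is_empty(self):
        return not self.data


class Queue:                                   # FIFO fringe -> BFS
    def __init__(self):
        self.data = deque()
    def push(self, item, priority=None):
        self.data.append(item)
    def pop(self):
        return self.data.popleft()
    def is_empty(self):
        return not self.data


class PriorityQueue:                           # priority fringe -> UCS / greedy / A*
    def __init__(self):
        self.heap = []
        self.count = 0                         # tie-breaker so items are never compared
    def push(self, item, priority):
        heapq.heappush(self.heap, (priority, self.count, item))
        self.count += 1
    def pop(self):
        return heapq.heappop(self.heap)[2]
    def is_empty(self):
        return not self.heap


def tree_search(problem, fringe, priority_fn=lambda state, g: g):
    """One search loop; the fringe strategy decides which algorithm it is."""
    start = problem.get_start_state()
    fringe.push((start, [], 0), priority_fn(start, 0))
    while not fringe.is_empty():
        state, plan, cost = fringe.pop()
        if problem.is_goal_state(state):
            return plan
        for nxt, action, step in problem.get_successors(state):
            fringe.push((nxt, plan + [action], cost + step),
                        priority_fn(nxt, cost + step))
    return None
```

For example, tree_search(problem, Stack()) behaves like DFS, while tree_search(problem, PriorityQueue()) with the default priority (the path cost g) is UCS.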

Recall: Uniform Cost Search
- Strategy: expand the node with the lowest path cost.
- The good: UCS is complete and optimal!
- The bad: explores options in every “direction”; no information about goal location. (Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading from Start toward Goal.) [demo: contours UCS]

Example: Uniform cost search

Another example

Informed search: main idea

Search heuristic. A heuristic is: a function that estimates how close a state is to a goal, designed for a particular search problem. (Figure: example states with heuristic values 10, 5, 11.2.)

Example: Heuristic Function h(x)

Example: Heuristic Function. Heuristic h(x): the largest pancake that is still out of place. (Figure: example stacks with h values 4, 3, 2.)
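A small Python sketch of this heuristic, under an assumed (hypothetical) encoding where a stack is a tuple of pancake sizes listed bottom to top and the goal is the fully sorted stack:

```python
def pancake_heuristic(stack):
    """h(x): the largest pancake that is still out of place.

    `stack` lists pancake sizes from bottom to top; in the goal, the pancake
    of size k sits at bottom-up index len(stack) - k.
    """
    n = len(stack)
    out_of_place = [size for i, size in enumerate(stack) if size != n - i]
    return max(out_of_place) if out_of_place else 0
```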

How to use the heuristic? What about following the “arrow” of the heuristic?.... Greedy search

Example: Heuristic Function h(x)

Best First / Greedy Search Expand the node that seems closest… What can go wrong?

Greedy search
- Strategy: expand the node that you think is closest to a goal state.
- Heuristic: estimate of distance to the nearest goal for each state.
- A common case: best-first takes you straight to the (wrong) goal.
- Worst case: like a badly-guided DFS. (Figure: search-tree fan-outs with branching factor b.)
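In the terms of the earlier one-queue sketch, greedy search is just a priority-queue fringe ordered by h alone. A hypothetical usage sketch, assuming the tree_search and PriorityQueue defined earlier and some heuristic function:

```python
def greedy_search(problem, heuristic):
    """Greedy best-first: order the fringe by the heuristic only.

    Reuses the hypothetical tree_search / PriorityQueue from the earlier sketch.
    """
    return tree_search(problem, PriorityQueue(),
                       priority_fn=lambda state, g: heuristic(state))
```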

Example 1: greedy search

Example 2: greedy search

Quiz: greedy search Which solution would greedy search find if run on this graph?

Enter: A* search

Combining UCS and Greedy
- Uniform-cost orders by path cost, or backward cost g(n).
- Greedy orders by goal proximity, or forward cost h(n).
- A* search orders by the sum: f(n) = g(n) + h(n).
(Figure: example graph with edge costs and h values at each node; example by Teg Grenager.)

When should A* terminate? Should we stop when we enqueue a goal? No: only stop when we dequeue a goal. (Figure: small example graph with edge costs and h values.)
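The sketch below makes the termination rule explicit: the goal test happens when a node is dequeued, not when it is enqueued. It assumes the hypothetical SearchProblem interface from earlier; this is an illustrative sketch, not the course's reference implementation.

```python
import heapq
import itertools


def a_star_tree_search(problem, heuristic):
    tie = itertools.count()                    # tie-breaker: never compare states
    start = problem.get_start_state()
    fringe = [(heuristic(start), next(tie), 0, start, [])]   # (f, tie, g, state, plan)
    while fringe:
        f, _, g, state, plan = heapq.heappop(fringe)
        if problem.is_goal_state(state):       # stop only when a goal is DEQUEUED
            return plan, g
        for nxt, action, step in problem.get_successors(state):
            g2 = g + step
            heapq.heappush(fringe,
                           (g2 + heuristic(nxt), next(tie), g2, nxt, plan + [action]))
    return None
```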

Quiz: A* Tree search

Quiz: A* Tree search. When running A*, the first node expanded is S (like all search algorithms). After expanding S, two nodes will be on the fringe: S->A and S->D. Fill in:
- g(S->A) = ____, h(S->A) = ____, f(S->A) = ____
- g(S->D) = ____, h(S->D) = ____, f(S->D) = ____
Which node will be expanded next?

Is A* Optimal? (Figure: small example graph with edge costs 1, 3, and 5.) What went wrong? Actual bad goal cost < estimated good goal cost. We need estimates to be less than actual costs!

Idea: admissibility. Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe. Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.

Admissible Heuristics. A heuristic h is admissible (optimistic) if 0 <= h(n) <= h*(n) for every node n, where h*(n) is the true cost to a nearest goal. Coming up with admissible heuristics is most of what's involved in using A* in practice. (Figure: examples with heuristic values 15 and 4.)
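For small problems whose state graph can be written out explicitly, you can check admissibility directly against the true cost-to-go h*. The sketch below assumes a hypothetical input format (an edge map from each state to its (successor, cost) pairs, plus a set of goal states) and computes h* by running Dijkstra's algorithm backward from the goals:

```python
import heapq
import itertools


def true_costs_to_goal(edges, goals):
    """h*(s): cheapest cost from each state to a nearest goal (backward Dijkstra)."""
    reverse = {}
    for s, succs in edges.items():
        for t, cost in succs:
            reverse.setdefault(t, []).append((s, cost))
    tie = itertools.count()
    dist = {}
    fringe = [(0, next(tie), g) for g in goals]
    heapq.heapify(fringe)
    while fringe:
        d, _, s = heapq.heappop(fringe)
        if s in dist:
            continue
        dist[s] = d
        for prev, cost in reverse.get(s, []):
            if prev not in dist:
                heapq.heappush(fringe, (d + cost, next(tie), prev))
    return dist


def is_admissible(h, edges, goals):
    """True if 0 <= h(s) <= h*(s) for every state that can reach a goal."""
    hstar = true_costs_to_goal(edges, goals)
    return all(0 <= h(s) <= hstar[s] for s in hstar)
```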

Optimality of A* (tree search). Notation: g(n) = cost to node n; h(n) = estimated cost from n to the nearest goal (heuristic); f(n) = g(n) + h(n) = estimated total cost via n; A: a lowest-cost goal node; B: another goal node. (Figure: fringe containing A and B.) Claim: A will exit the fringe before B.

Optimality of A* (continued). Imagine B is on the fringe. Some ancestor n of A must be on the fringe too (maybe n is A). Claim: n will be expanded before B.
Step 1: f(n) <= f(A).
  f(n) = g(n) + h(n)   // by definition
  f(n) <= g(A)         // by admissibility of h
  g(A) = f(A)          // because h = 0 at a goal

Optimality of A* (continued). B is on the fringe; some ancestor n of A is on the fringe too (maybe n is A).
Step 1: f(n) <= f(A).
Step 2: f(A) < f(B).
  g(A) < g(B)          // B is suboptimal
  f(A) < f(B)          // h = 0 at goals

Optimality of A* (continued).
Step 1: f(n) <= f(A).
Step 2: f(A) < f(B).
Step 3: n will expand before B.
  f(n) <= f(A) < f(B)  // from above
  f(n) < f(B)

Optimality of A* (conclusion). From Steps 1–3, n expands before B; therefore all ancestors of A expand before B, so A expands before B. Claim proved: A will exit the fringe before B.

Properties of A*. (Figure: fringe growth of Uniform-Cost vs. A*, each with branching factor b.)

UCS vs A* Contours. Uniform-cost expands equally in all directions; A* expands mainly toward the goal, but does hedge its bets to ensure optimality. (Figure: expansion contours from Start to Goal for each.)

Recall: greedy

Uniform cost search

A*

Example: UCS vs. A*

Quiz: A* tree search

Quiz

A* applications: pathing / routing problems, video games, resource planning problems, robot motion planning, language analysis, machine translation, speech recognition, …

Creating Admissible Heuristics. Most of the work in solving hard search problems optimally is in coming up with admissible heuristics. Often, admissible heuristics are solutions to relaxed problems, where new actions are available. Inadmissible heuristics are often useful too (why?). (Figure: examples with heuristic values 15 and 366.)

Example: 8 Puzzle What are the states? How many states? What are the actions? What states can I reach from the start state? What should the costs be?

8 Puzzle I. Heuristic: number of tiles misplaced. Why is it admissible? h(start) = 8. This is a relaxed-problem heuristic.
Average nodes expanded when the optimal path has length 4 / 8 / 12 steps:
  UCS:   112 / 6,300 / 3.6 x 10^6
  TILES:  13 /    39 / 227

8 Puzzle II. What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles? Heuristic: total Manhattan distance. Why admissible? h(start) = 3 + 1 + 2 + … = 18.
Average nodes expanded when the optimal path has length 4 / 8 / 12 steps:
  TILES:     13 / 39 / 227
  MANHATTAN: 12 / 25 /  73
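Both 8-puzzle heuristics are short to write down. The sketch below assumes a hypothetical state encoding: a 9-tuple read row by row, 0 for the blank, with goal layout (1, 2, ..., 8, 0); adjust the goal to match whatever convention your code uses.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 is the blank


def misplaced_tiles(state, goal=GOAL):
    """Number of tiles (not counting the blank) that are out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)


def manhattan(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal position."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```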

Quiz 1. Consider the problem of Pacman eating all dots in a maze, where every movement action has a cost of 1. Select all heuristics that are admissible, if any:
- the total number of dots left
- the distance to the closest dot
- the distance to the furthest dot
- the distance to the closest dot plus the distance to the furthest dot
- none of the above

Quiz 2. Consider the pancake-flipping problem described earlier in the lecture. Recall that we can put the spatula anywhere in the stack and flip the group of pancakes above the spatula. Rather than having the cost equal the number of flipped pancakes, let the cost be 1 for every flip, regardless of how many pancakes are flipped. Select all heuristics that are admissible, if any:
- 1 if at least one pancake is out of place, 0 otherwise
- the number of pancakes that are out of place
- none of the above

8 Puzzle III How about using the actual cost as a heuristic? Would it be admissible? Would we save on nodes expanded? What’s wrong with it? With A*: a trade-off between quality of estimate and work per node!

Tree Search: Extra Work! Failure to detect repeated states can cause exponentially more work. Why? (Figure: a state graph and its corresponding search tree.)

Graph Search. In BFS, for example, we shouldn't bother expanding the circled nodes (why?). (Figure: search tree with repeated states circled.)

Graph Search
- Idea: never expand a state twice.
- How to implement: tree search + a set of expanded states (the “closed set”). Expand the search tree node by node, but before expanding a node, check that its state is new; if it is not new, skip it.
- Important: store the closed set as a set, not a list.
- Can graph search wreck completeness? Why/why not? How about optimality?
- Warning: the 3rd-edition textbook has a more complex, but also correct, variant.
(A minimal sketch follows.)
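A minimal sketch of graph search as described above, reusing the hypothetical fringe objects and SearchProblem interface from the earlier sketches (illustrative only):

```python
def graph_search(problem, fringe, priority_fn=lambda state, g: g):
    """Tree search plus a closed SET of already-expanded states."""
    closed = set()
    start = problem.get_start_state()
    fringe.push((start, [], 0), priority_fn(start, 0))
    while not fringe.is_empty():
        state, plan, cost = fringe.pop()
        if problem.is_goal_state(state):
            return plan
        if state in closed:                    # already expanded: skip it
            continue
        closed.add(state)
        for nxt, action, step in problem.get_successors(state):
            fringe.push((nxt, plan + [action], cost + step),
                        priority_fn(nxt, cost + step))
    return None
```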

A* Graph Search Gone Wrong? (Figure: a state space graph and the corresponding search tree with f = g + h values S (0+2), A (1+4), B (1+1), C (2+1), C (3+1), G (5+0), G (6+0); the closed set blocks the cheaper route to C, so graph search returns the suboptimal goal.)

Consistency of Heuristics. Admissibility: heuristic cost <= actual cost to the goal, e.g. h(A) <= actual cost from A to G. (Figure: A connected to C with cost 1 and C to G with cost 3, with h(A) = 4.)

Consistency of Heuristics. Consistency is stronger than admissibility. Definition: heuristic cost <= actual cost per arc, i.e. h(A) - h(C) <= cost(A to C). Consequences: the f value along a path never decreases, and A* graph search is optimal. (Figure: arc from A to C with cost 1 and example h values 4, 2, 1.)
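For an explicit state graph you can check consistency arc by arc. A sketch under the same hypothetical edge-map format used in the earlier admissibility check (the h(goal) = 0 condition is included since it is part of the usual definition):

```python
def is_consistent(h, edges, goals):
    """Check h(a) - h(b) <= cost(a, b) on every arc, and h(g) = 0 at goals."""
    arcs_ok = all(h(a) - h(b) <= cost
                  for a, succs in edges.items()
                  for b, cost in succs)
    goals_ok = all(h(g) == 0 for g in goals)
    return arcs_ok and goals_ok
```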

Optimality
- Tree search: A* is optimal if the heuristic is admissible (and non-negative); UCS is a special case (h = 0).
- Graph search: A* is optimal if the heuristic is consistent; UCS is optimal (h = 0 is consistent).
- Consistency implies admissibility. In general, most natural admissible heuristics tend to be consistent, especially if they come from relaxed problems.

Quiz: consistency Run A* graph search

Trivial Heuristics, Dominance
- Dominance: h_a ≥ h_c if h_a(n) ≥ h_c(n) for every node n.
- Heuristics form a semi-lattice: the max of admissible heuristics is admissible.
- Trivial heuristics: the bottom of the lattice is the zero heuristic (what does this give us?); the top of the lattice is the exact heuristic.
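One practical consequence of the semi-lattice structure is that admissible heuristics can be combined with max. A tiny sketch (the example function names come from the earlier hypothetical 8-puzzle code):

```python
def max_heuristic(*heuristics):
    """Max of admissible heuristics: still admissible, and dominates each one."""
    return lambda state: max(h(state) for h in heuristics)


# e.g. combined = max_heuristic(misplaced_tiles, manhattan)
```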

Quiz
- True or false: “If h1 is admissible and h2 is admissible, then 0.5*h1 + 0.5*h2 is admissible.”
- True or false: “Both the max and the min of admissible heuristics are admissible.”
- Which of the following statements is true? (a) The max of multiple admissible heuristics dominates the min of the same heuristics. (b) The min of multiple admissible heuristics dominates the max of the same heuristics.

Summary: A* A* uses both backward costs and (estimates of) forward costs A* is optimal with admissible / consistent heuristics Heuristic design is key: often use relaxed problems

Optimality of A* Graph Search Sketch: Consider what A* does with a consistent heuristic: Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours) Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally. Result: A* graph search is optimal.

Optimality of A* Graph Search. Proof: the new possible problem is that some node n on the path to G* isn't in the queue when we need it, because some worse node n' for the same state was dequeued and expanded first (disaster!). Take the highest such n in the tree, and let p be the ancestor of n that was on the queue when n' was popped. Then f(p) <= f(n) by consistency (f never decreases along a path), and f(n) < f(n') because n' is suboptimal, so p would have been expanded before n'. Contradiction!

Graph Search Very simple fix: never expand a state twice Can this wreck completeness? Optimality?

Graph Search, Reconsidered
- Idea: never expand a state twice.
- How to implement: tree search + a list of expanded states (the “closed list”). Expand the search tree node by node, but before expanding a node, check that its state is new.
- Python trick: store the closed list as a set, not a list.
- Can graph search wreck completeness? Why/why not? How about optimality?

Optimality of A* Graph Search Consider what A* does: Expands nodes in increasing total f value (f-contours) Proof idea: optimal goals have lower f value, so get expanded first We’re making a stronger assumption than in the last proof… What?

Optimality of A* Graph Search. Consider what A* does: it expands nodes in increasing total f value (f-contours). Reminder: f(n) = g(n) + h(n) = cost to n + heuristic. Proof idea: the optimal goal(s) have the lowest f value, so they must get expanded first. (Figure: f-contours f ≤ 1, f ≤ 2, f ≤ 3.) There's a problem with this argument. What are we assuming is true?

Course Scheduling. From the university's perspective: a set of courses {c1, c2, …, cn}, a set of room/times {r1, r2, …, rn}, and each pairing (ck, rm) has a cost wkm. What's the best assignment of courses to rooms? States: lists of pairings. Actions: add a legal pairing. Costs: cost of the new pairing. Admissible heuristics?

Consistency. Wait, how do we know parents have better f-values than their successors? Couldn't we pop some node n and find its child n' to have a lower f value? Yes — so what can we require to prevent these inversions? Consistency: the real cost of an arc must always exceed the reduction in the heuristic. Like admissibility, but better! (Figure: example with g = 10, an arc of cost 3, and h values 10, 8, 0.)