Iterative Deepening and Branch & Bound

Iterative Deepening and Branch & Bound CPSC 322 – Search 6 Textbook § 3.7.3 and 3.7.4 January 24, 2011

Lecture Overview Recap from last week Iterative Deepening Branch & Bound

Search with Costs Sometimes there are costs associated with arcs. In this setting we often don't just want to find any solution; we usually want to find the solution that minimizes cost. Def.: The cost of a path is the sum of the costs of its arcs. Def.: A search algorithm is optimal if, when it finds a solution, it is the best one: it has the lowest path cost.

Lowest-Cost-First Search (LCFS) Expands the path with the lowest cost on the frontier. The frontier is implemented as a priority queue ordered by path cost. How does this differ from Dijkstra's algorithm? The two algorithms are very similar, but Dijkstra's algorithm works with nodes, not with paths: it stores one bit per node (infeasible for infinite/very large graphs) and checks for cycles.
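Not part of the original slides: a minimal Python sketch of LCFS as described above, using a small made-up graph (node names and arc costs are illustrative only).

```python
import heapq

# A hypothetical weighted graph: node -> list of (neighbor, arc cost).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 3)],
    "G": [],
}

def lowest_cost_first(graph, start, goal):
    """Expand the cheapest path on the frontier (priority queue by path cost)."""
    frontier = [(0, [start])]  # (path cost, path)
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, cost  # with arc costs > 0, the first goal popped is optimal
        for neighbor, arc_cost in graph[node]:
            heapq.heappush(frontier, (cost + arc_cost, path + [neighbor]))
    return None, float("inf")

print(lowest_cost_first(graph, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Note that the frontier stores whole paths, not nodes, matching the slide's contrast with Dijkstra's algorithm.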

Heuristic search Def.: A search heuristic h(n) is an estimate of the cost of the optimal (cheapest) path from node n to a goal node. [Figure: nodes n1, n2, n3, each annotated with its estimate h(n1), h(n2), h(n3).]

Best-First Search (BestFS) Expands the path with the lowest h value on the frontier. The frontier is implemented as a priority queue ordered by h. Greedy: expands the path that appears to lead to the goal quickest. It can get trapped and can yield arbitrarily poor solutions, but with a perfect heuristic it moves straight to the goal.

A* Expands the path with the lowest cost + h value on the frontier. The frontier is implemented as a priority queue ordered by f(p) = cost(p) + h(p).
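Not part of the original slides: a minimal Python sketch of A* as defined above, on a small hypothetical graph with an admissible heuristic (all names, costs, and h values are made up for illustration).

```python
import heapq

# Hypothetical graph: node -> list of (neighbor, arc cost), plus an
# admissible heuristic h (it never overestimates the true cost to G).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 3, "G": 0}

def a_star(graph, h, start, goal):
    """Expand the path p with the lowest f(p) = cost(p) + h(p) on the frontier."""
    frontier = [(h[start], 0, [start])]  # (f value, path cost, path)
    while frontier:
        f, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, cost  # with admissible h, the first goal popped is optimal
        for neighbor, arc_cost in graph[node]:
            g = cost + arc_cost
            heapq.heappush(frontier, (g + h[neighbor], g, path + [neighbor]))
    return None, float("inf")

print(a_star(graph, "S", "G") if False else a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Compared to the LCFS sketch, only the priority changes: path cost alone becomes cost plus heuristic.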

Admissibility of a heuristic Def.: Let c(n) denote the cost of the optimal path from node n to any goal node. A search heuristic h(n) is called admissible if h(n) ≤ c(n) for all nodes n, i.e. if for all nodes it is an underestimate of the cost to any goal. E.g. Euclidean distance in routing networks. General construction of heuristics: relax the problem, i.e. ignore some constraints; this can only make the problem easier. We saw lots of examples on Wednesday: routing network, grid world, 8-puzzle, Infinite Mario.

Admissibility of A* A* is complete (finds a solution, if one exists) and optimal (finds the optimal path to a goal) if: the branching factor is finite, arc costs are > 0, and h is admissible. This property of A* is called admissibility of A*. The fact that h does not overestimate the cost of a path makes A* optimistic, and thus prevents it from discarding paths that could be good but have been wrongly judged by the heuristic.

Why is A* admissible: complete If there is a solution, A* finds it. Let fmin := cost of the optimal solution path s (unknown but finite).
Lemmas for any prefix pr of s (exercise: prove at home):
- pr has cost f(pr) ≤ fmin (due to admissibility)
- there is always one such pr on the frontier (prove by induction)
A* only expands paths with f(p) ≤ fmin: it expands paths p with minimal f(p), and there is always a pr on the frontier with f(pr) ≤ fmin.
A* terminates when expanding s, because the number of paths p with cost f(p) ≤ fmin is finite:
- let cmin > 0 be the minimal cost of any arc, and k := fmin / cmin; all paths with length > k have cost > fmin
- there are only b^k paths of length k, and finite b means finitely many such paths.

Why is A* admissible: optimal Proof by contradiction. Assume (for contradiction) that the first solution s' that A* expands is suboptimal, i.e. cost(s') > fmin. Since s' is a goal, h(s') = 0, and f(s') = cost(s') > fmin. A* selected s', so all other paths p on the frontier had f(p) ≥ f(s') > fmin. But we know that a prefix pr of the optimal solution path s is on the frontier, with f(pr) ≤ fmin. Contradiction! Summary: every prefix of the optimal solution is expanded before a suboptimal solution would be expanded.

Learning Goals for last week
- Select the most appropriate algorithm for a specific problem: Depth-First Search vs. Breadth-First Search vs. Least-Cost-First Search vs. Best-First Search vs. A*
- Define/read/write/trace/debug different search algorithms (with/without cost, informed/uninformed)
- Construct heuristic functions for specific search problems
- Formally prove A* optimality
- Define optimal efficiency

Learning Goals for last week, continued Apply basic properties of search algorithms: completeness, optimality, time and space complexity.

| Algorithm | Complete | Optimal | Time | Space |
| DFS | N (Y if no cycles) | N | O(b^m) | O(mb) |
| BFS | Y | Y | O(b^m) | O(b^m) |
| LCFS (when arc costs available) | Y (costs > 0) | Y (costs ≥ 0) | O(b^m) | O(b^m) |
| Best First (when h available) | N | N | O(b^m) | O(b^m) |
| A* (when arc costs and h available) | Y (costs > 0, h admissible) | Y (h admissible) | O(b^m) | O(b^m) |

Lecture Overview Recap from last week Iterative Deepening Branch & Bound

Iterative Deepening DFS (IDS for short): Motivation Want DFS's low space complexity, O(mb), but also completeness and optimality (see the table of search methods above). Key Idea: re-compute elements of the frontier rather than saving them.

Iterative Deepening DFS (IDS) in a Nutshell Use DFS to look for solutions at depth 1, then 2, then 3, etc. For depth bound D, ignore any paths with length longer than D: depth-bounded depth-first search. [Figure: successive passes with depth = 1, depth = 2, depth = 3, ...]
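Not part of the original slides: a minimal Python sketch of the scheme above, on a small made-up graph (node names are illustrative only).

```python
# A small hypothetical graph: node -> list of neighbors.
graph = {"S": ["A", "B"], "A": ["B", "G"], "B": ["G"], "G": []}

def depth_bounded_dfs(graph, path, goal, bound):
    """DFS that ignores any path longer than `bound` arcs."""
    node = path[-1]
    if node == goal:
        return path
    if bound == 0:
        return None
    for neighbor in graph[node]:
        found = depth_bounded_dfs(graph, path + [neighbor], goal, bound - 1)
        if found:
            return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-bounded DFS with bound 1, 2, 3, ..., re-computing the
    frontier on every pass instead of storing it."""
    for bound in range(max_depth + 1):
        found = depth_bounded_dfs(graph, [start], goal, bound)
        if found:
            return found
    return None

print(iterative_deepening(graph, "S", "G"))  # ['S', 'A', 'G'] (2 arcs, the minimum)
```

Each pass uses only DFS-style memory; the earlier passes are simply thrown away.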

(Time) Complexity of IDS That sounds wasteful! Let's analyze the time complexity. For a solution at depth m with branching factor b:

| Depth | Total # of paths at that level | # times created by BFS (or DFS) | # times created by IDS | Total # paths for IDS |
| 1 | b | 1 | m | mb |
| 2 | b^2 | 1 | m-1 | (m-1)b^2 |
| ... | ... | ... | ... | ... |
| m-1 | b^(m-1) | 1 | 2 | 2b^(m-1) |
| m | b^m | 1 | 1 | b^m |

(Time) Complexity of IDS Solution at depth m, branching factor b. Total # of paths generated:
b^m + 2·b^(m-1) + 3·b^(m-2) + ... + m·b = b^m (1·b^0 + 2·b^(-1) + 3·b^(-2) + ... + m·b^(1-m))
Geometric progression: for |r| < 1, 1 + 2r + 3r^2 + ... = 1/(1-r)^2. With r = 1/b, the sum in parentheses is less than (b/(b-1))^2, a constant, so the total is O(b^m): asymptotically no worse than DFS or BFS.
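The overhead can be sanity-checked numerically. This is not in the original slides; a small Python check with assumed example values b = 10 and m = 5:

```python
# Sanity check of the IDS overhead bound for assumed values b = 10, m = 5.
# IDS generates each depth-i path (m - i + 1) times, so its total is
# sum over i of (m - i + 1) * b^i, which the geometric-series argument
# bounds by b^m * (1 / (1 - 1/b))^2.
b, m = 10, 5
ids_total = sum((m - i + 1) * b**i for i in range(1, m + 1))
bound = b**m * (1 / (1 - 1 / b)) ** 2
print(ids_total, round(bound, 1))  # 123450 123456.8: ~23% overhead over b^m = 100000
```

For a large branching factor, almost all the work is at the deepest level, so re-running the shallow levels costs little.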

Further Analysis of Iterative Deepening DFS (IDS) Space complexity: O(mb), since it is a DFS scheme that only explores one branch at a time. Complete? Yes: there is only a finite # of paths up to depth m, and it doesn't explore longer paths. Optimal? Yes, it finds a shallowest solution first (proof by contradiction).

Search methods so far

| Algorithm | Complete | Optimal | Time | Space |
| DFS | N (Y if no cycles) | N | O(b^m) | O(mb) |
| BFS | Y | Y | O(b^m) | O(b^m) |
| IDS | Y | Y | O(b^m) | O(mb) |
| LCFS (when arc costs available) | Y (costs > 0) | Y (costs ≥ 0) | O(b^m) | O(b^m) |
| Best First (when h available) | N | N | O(b^m) | O(b^m) |
| A* (when arc costs and h available) | Y (costs > 0, h admissible) | Y (h admissible) | O(b^m) | O(b^m) |

(Heuristic) Iterative Deepening: IDA* Like Iterative Deepening DFS, but the depth bound is measured in terms of the f value. If you don't find a solution at a given bound, increase the bound to the minimum of the f values that exceeded the previous bound.
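Not part of the original slides: a minimal Python sketch of IDA* as described above, reusing the same hypothetical graph and admissible heuristic style as the A* slides (all names, costs, and h values are made up).

```python
# Hypothetical graph: node -> list of (neighbor, arc cost), plus an
# admissible heuristic h (it never overestimates the cost to G).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 3, "G": 0}

def ida_star(graph, h, start, goal):
    """Depth-first passes bounded on f = cost + h; after a failed pass,
    raise the bound to the smallest f value that exceeded the old one."""
    bound = h[start]
    while True:
        next_bound = float("inf")

        def dfs(path, cost):
            nonlocal next_bound
            node = path[-1]
            f = cost + h[node]
            if f > bound:
                next_bound = min(next_bound, f)  # candidate for the next bound
                return None
            if node == goal:
                return path, cost
            for neighbor, arc_cost in graph[node]:
                found = dfs(path + [neighbor], cost + arc_cost)
                if found:
                    return found
            return None

        found = dfs([start], 0)
        if found:
            return found
        if next_bound == float("inf"):
            return None  # nothing exceeded the bound: no solution exists
        bound = next_bound

print(ida_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

On this graph the first pass uses bound f = 5, fails, raises the bound to 6, and the second pass finds the optimal path.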

Analysis of Iterative Deepening A* (IDA*) Complete and optimal? Yes, under the same conditions as A*: h is admissible, all arc costs are > 0, and the branching factor is finite. Time complexity: O(b^m). Space complexity: O(mb), by the same argument as for Iterative Deepening DFS.

Lecture Overview Recap from last week Iterative Deepening Branch & Bound

Heuristic DFS Other than IDA*, can we use heuristic information in DFS? When we expand a node, we put all its neighbours on the stack. In which order? We can use heuristic guidance: h or f. A perfect heuristic would solve the problem without any backtracking. Heuristic DFS is very frequently used in practice: often we don't need the optimal solution, just some solution, and there is no requirement for admissibility of the heuristic, as long as we don't end up on infinite paths.

Branch-and-Bound Search Another way to combine DFS with heuristic guidance. It follows exactly the same search path as depth-first search, but to ensure optimality it does not stop at the first solution found: it continues, after recording an upper bound on solution cost. Upper bound: UB = cost of the best solution found so far, initialized to ∞ or any overestimate of solution cost. When a path p is selected for expansion: compute LB(p) = f(p) = cost(p) + h(p). If LB(p) ≥ UB, remove p from the frontier without expanding it (pruning); else expand p, adding all of its neighbours to the frontier. Requires admissible h.
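Not part of the original slides: a minimal Python sketch of branch-and-bound as described above, again on a made-up graph with an admissible heuristic (B&B needs h admissible so that LB(p) is a true lower bound).

```python
# Hypothetical graph: node -> list of (neighbor, arc cost), plus an
# admissible heuristic h (made up for illustration).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 3, "G": 0}

def branch_and_bound(graph, h, start, goal):
    """DFS that keeps searching after the first solution, pruning any path p
    whose lower bound LB(p) = cost(p) + h(p) reaches the upper bound UB."""
    best = (None, float("inf"))  # (best solution so far, upper bound UB)

    def dfs(path, cost):
        nonlocal best
        node = path[-1]
        if cost + h[node] >= best[1]:  # LB(p) >= UB: prune without expanding
            return
        if node == goal:
            best = (path, cost)  # strictly better solution found: lower UB
            return
        for neighbor, arc_cost in graph[node]:
            dfs(path + [neighbor], cost + arc_cost)

    dfs([start], 0)
    return best

print(branch_and_bound(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Unlike plain DFS, the search only terminates once every remaining path has been pruned against the upper bound.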

Example Arc cost = 1, h(n) = 0 for every n. Start with UB = ∞. Solution! UB = 5. Solution! UB = ?

Example Arc cost = 1, h(n) = 0 for every n. UB = 5. Cost = 5: Prune! (Don't expand.)

Example Arc cost = 1, h(n) = 0 for every n. UB = 5. Solution! UB = ? Cost = 5: Prune! Cost = 5: Prune!

Example Arc cost = 1, h(n) = 0 for every n. UB = 3. Cost = 3: Prune! Cost = 3: Prune! Cost = 3: Prune!

Branch-and-Bound Analysis Complete? It depends: it can't handle infinite graphs (but it can handle cycles). Optimal? It depends: if it halts, the goal found is optimal, but it could find a goal and then follow an infinite path and never halt. Time complexity: O(b^m). Space complexity: O(mb), since it's a DFS.

Combining B&B with heuristic guidance We said B&B "follows exactly the same search path as depth-first search". Let's make that heuristic depth-first search: we can freely choose the order in which to put neighbours on the stack, e.g. using a separate heuristic h' that is NOT admissible. To compute LB(p) we still need to compute the f value using an admissible heuristic h. This combination is used a lot in practice. The Sudoku solver in assignment 2 will be along those lines, but it also integrates some logical reasoning at each node.

Search methods so far

| Algorithm | Complete | Optimal | Time | Space |
| DFS | N (Y if no cycles) | N | O(b^m) | O(mb) |
| BFS | Y | Y | O(b^m) | O(b^m) |
| IDS | Y | Y | O(b^m) | O(mb) |
| LCFS (when arc costs available) | Y (costs > 0) | Y (costs ≥ 0) | O(b^m) | O(b^m) |
| Best First (when h available) | N | N | O(b^m) | O(b^m) |
| A* (when arc costs and h available) | Y (costs > 0, h admissible) | Y (h admissible) | O(b^m) | O(b^m) |
| IDA* | Y (same cond. as A*) | Y (same cond. as A*) | O(b^m) | O(mb) |
| Branch & Bound | It depends (N on infinite graphs) | It depends (Y if it halts) | O(b^m) | O(mb) |

Memory-bounded A* Iterative deepening A* and B&B use little memory. What if we've got more memory, but not O(b^m)? Do A* and keep as much of the frontier in memory as possible. When running out of memory, delete the worst path (highest f value) from the frontier and back its f value up to a common ancestor. A subtree gets regenerated only when all other paths have been shown to be worse than the "forgotten" path. Details are beyond the scope of the course, but it is complete and optimal if the solution is at a depth manageable for the available memory.

Learning Goals for today's class Define/read/write/trace/debug different search algorithms. New: Iterative Deepening, Iterative Deepening A*, Branch & Bound. Apply basic properties of search algorithms: completeness, optimality, time and space complexity. Announcements: New practice exercises are out (see WebCT): Heuristic search, Branch & Bound. Please use them! (They only take 5 min. if you understood things…) Assignment 1 is out: see WebCT.