More advanced aspects of search


More advanced aspects of search: Extensions of A*; Concluding comments

Extensions of A*: Iterative-deepening A*; Simplified Memory-bounded A*

Iterative-deepening A*

Memory problems with A* A* is similar to breadth-first search: breadth-first expands by depth layers (d = 1, 2, 3, 4), while A* expands by f-contours (f1, f2, f3, f4). Here: 2 extensions of A* that improve memory usage.

Iterative deepening A* Depth-first in each f-contour. Perform DEPTH-FIRST search LIMITED to some f-bound. If the goal is found: ok. Else: increase the f-bound and restart. How to establish the f-bounds? Initially: f(S). Generate all successors and record the minimal f(succ) > f(S). Continue with this minimal f(succ) instead of f(S).

Example (f-limited search, f-bound = 100): tree with S (f = 100), A (f = 120), B (f = 130), C, D (f = 140), G (f = 125), E, F. Only S falls within the bound; f-new = 120.

Example (f-limited search, f-bound = 120): now A (f = 120) is also expanded; f-new = 125.

Example (f-limited search, f-bound = 125): goal G (f = 125) is reached. SUCCESS.

f-limited search:
1. QUEUE <-- path only containing the root;
   f-bound <-- <some natural number>; f-new <-- ∞
2. WHILE QUEUE is not empty AND goal is not reached DO
   remove the first path from the QUEUE;
   create new paths (to all children);
   reject the new paths with loops;
   add the new paths with f(path) ≤ f-bound to front of QUEUE;
   f-new <-- minimum of current f-new and of the minimum of new f-values which are larger than f-bound
3. IF goal reached THEN success;
   ELSE report f-new

Iterative deepening A*:
1. f-bound <-- f(S)
2. WHILE goal is not reached DO
   perform f-limited search;
   f-bound <-- f-new
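The pseudocode above can be sketched in Python as a recursive variant. This is a minimal illustration, not from the slides; the `h`, `successors`, and `is_goal` interfaces are assumptions:

```python
import math

def ida_star(start, h, successors, is_goal):
    # Iterative-deepening A*: repeated depth-first searches, each
    # limited to the current f-bound; the bound grows to the smallest
    # f-value that exceeded it in the previous pass (f-new).
    def f_limited(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f, None            # candidate for the next f-bound
        if is_goal(node):
            return f, list(path)
        f_new = math.inf
        for child, cost in successors(node):
            if child in path:         # reject paths with loops
                continue
            path.append(child)
            t, found = f_limited(path, g + cost, bound)
            path.pop()
            if found is not None:
                return t, found
            f_new = min(f_new, t)
        return f_new, None

    bound = h(start)
    while True:
        bound, found = f_limited([start], 0, bound)
        if found is not None:
            return found              # optimal path for admissible h
        if bound == math.inf:
            return None               # no path exists
```

For example, on a small weighted graph with an admissible heuristic, `ida_star('S', h.get, lambda n: graph.get(n, []), lambda n: n == 'G')` returns the cheapest path to the goal.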

Properties of IDA* Complete and optimal: under the same conditions as for A*. Memory: let δ be the minimal cost of an arc; then memory = O(b · cost(B)/δ). Speed: depends very strongly on the number of f-contours there are!! In the worst case, f(p) ≠ f(q) for every 2 paths: 1 + 2 + … + N = O(N²).

Why is this optimal, even without monotonicity?? In the absence of monotonicity we can have search spaces where f decreases along a path (figure: nodes S, A, B, C, D, E, F with f-values 100, 120, 150, 90, 60, 140). If f can decrease, how can we be sure that the first goal reached is the optimal one??? HOMEWORK

Properties: practical. If there is only a small number of different contours, IDA* is one of the very best optimal search techniques! Example: the 8-puzzle, but also MANY other practical problems. Else, the gain of the extended f-contour is not sufficient to compensate for recalculating the previous ones. In such cases: increase the f-bound by a fixed number ε at each iteration. Effects: fewer re-computations, BUT optimality is lost: the obtained solution can deviate by up to ε.

Simplified Memory-bounded A*

Simplified Memory-bounded A* Fairly complex algorithm. Optimizes A* to work within reduced memory. Key idea (figure: memory of 3 nodes only, S with f = 13, A with f = 15, C with f = 18): if memory is full and we need to generate an extra node (C), remove the highest f-value leaf from the QUEUE (A), and remember the f-value of the best 'forgotten' child in each parent node (15 recorded in S).

Generate children 1 by 1. When expanding a node (S), add its children to the QUEUE only 1 at a time (here left-to-right: first add A, later B). This avoids memory overflow and allows monitoring whether we need to delete another node.

Too long path: give up. If extending a node would produce a path longer than memory (here 3 nodes) allows, give up on this path (C). Set the f-value of the node (C) to ∞ (to remember that we can't find a path here).

Adjust f-values. If all children M of a node N have been explored and for all M: f(S…M) ≥ f(S…N), then reset f(S…N) = min{ f(S…M) | M child of N }. A path through N needs to go through 1 of its children! Example: S (f = 13) with children A (f = 15) and B (f = 24): f(S) becomes 15, a better estimate for f(S).
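This backup rule can be sketched as follows. A minimal illustration only; the `Node` class is hypothetical, not part of the slides:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    f: float
    children: list = field(default_factory=list)

def backup_f(node):
    # SMA*-style backup: if all children have been explored and each
    # child's f is at least the node's f, raise the node's f to the
    # minimum child f -- any path through the node goes via a child.
    if node.children and all(c.f >= node.f for c in node.children):
        node.f = min(c.f for c in node.children)
```

With the slide's numbers, a node with f = 13 and children with f = 15 and f = 24 gets its f raised to 15.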

[SMA* worked example (figure): a search tree with root S and nodes A–F, G1–G4, with f = g + h values 0+12=12, 8+5=13, 10+5=15, 24+0=24, 16+2=18, 20+0=20, 20+5=25, 30+5=35, 30+0=30, 24+5=29, searched with memory for only 3 nodes. The trace shows SMA* repeatedly expanding the lowest-f leaf, removing the highest-f leaf when memory is full, recording each forgotten child's f-value in its parent (in parentheses, e.g. (15) in S), marking paths that cannot fit in memory with f = ∞, and finally returning goal G1 with f = 20.]

SMA*: properties. Complete: if the available memory allows storing the shortest solution path. Optimal: if the available memory allows storing the best path; otherwise it returns the best path that fits in memory. Memory: uses whatever memory is available. Speed: if there is enough memory to store the entire tree, same as A*.

Concluding comments: More on non-optimal methods; The optimality trade-off

Non-optimal variants. Sometimes 'non-admissible' heuristics are desirable. Example: symmetry in the 8-puzzle (figure: one board configuration judged better than another), a preference that cannot be captured with an underestimating h. Solution: use non-admissible A*.

Non-optimal variants (2). Reduce the weight of the cost in f: f(S…N) = α · cost(S…N) + h(N), with 0 ≤ α ≤ 1. α = 0: pure heuristic best-first (greedy search); α = 1: A*.
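The weighting scheme above can be sketched as a single best-first search parameterized by α. A minimal illustration under assumed interfaces (`h`, `successors`, `is_goal` are not from the slides):

```python
import heapq
import math

def weighted_search(start, h, successors, is_goal, alpha=1.0):
    # Best-first search on f = alpha * cost + h:
    #   alpha = 1 -> A*;  alpha = 0 -> greedy best-first search.
    # `successors(n)` yields (child, step_cost); `h` is the heuristic.
    frontier = [(alpha * 0 + h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                  # already reached more cheaply
        best_g[node] = g
        for child, cost in successors(node):
            g2 = g + cost
            heapq.heappush(frontier,
                           (alpha * g2 + h(child), g2, child, path + [child]))
    return None, math.inf
```

Setting α between 0 and 1 trades optimality for speed: the cost-so-far counts for less, so the search is pulled more strongly toward low-h nodes.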

Approaching the complexity. Optimal path finding is by nature NP-complete! Polynomial parallel algorithms exist, but ALL KNOWN sequential algorithms are exponential. The trade-off: either use algorithms that ALWAYS give the optimal path, that in the worst case (depending on the actual search space!) behave exponentially, and that in the average case are polynomial;

Complexity continued: OR use algorithms that ALWAYS produce solutions in polynomial time, where in the worst case (actual search space) the solution is far from the optimal one, and in the average case it is close to the optimal one. Examples: local search, non-admissible A* with α ≠ 1.

Example: traveling salesman with minimal cost (nearest-neighbor heuristic). Assume there are N cities: from city1, pick the nearest of the remaining N−1 cities, then the nearest of the remaining N−2, and so on. Speed: ~N² comparisons (= 1 + 2 + 3 + … + (N−1)). Worst case: (solution found)/(best solution) ≤ log₂(N+1)/2. Average case: the solution found is ~20% longer than the best.
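The nearest-neighbor procedure described above can be sketched as follows (an illustrative sketch; the `dist` function is an assumed interface):

```python
def nearest_neighbor_tour(cities, dist):
    # Greedy nearest-neighbor tour: start at the first city and always
    # move to the closest unvisited city; ~N^2 distance evaluations.
    # Fast but non-optimal in general, as discussed above.
    unvisited = set(cities[1:])
    tour = [cities[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour
```

For cities on a line with `dist(a, b) = |a - b|`, the tour simply sweeps outward from the start to ever more distant cities.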