1
More advanced aspects of search
Extensions of A*
Concluding comments
2
Extensions of A*: Iterative-deepening A*; Simplified Memory-bounded A*
3
Iterative-deepening A*
4
Memory problems with A*
A* is similar to breadth-first search: breadth-first expands by depth layers (d = 1, 2, 3, 4), while A* expands by f-contours (f1, f2, f3, f4). Here: two extensions of A* that improve memory usage.
5
Iterative deepening A*
Depth-first in each f-contour: perform DEPTH-FIRST search LIMITED to some f-bound. If the goal is found: ok. Else: increase the f-bound and restart.
How to establish the f-bounds?
- initially: f(S)
- generate all successors and record the minimal f(succ) > f(S)
- continue with this minimal f(succ) instead of f(S)
6
Example (figure): nodes S (f=100), A (f=120), B (f=130), C, D (f=140), G (f=125), E, F.
f-limited search with f-bound = 100 gives f-new = 120.
7
Example (same figure):
f-limited search with f-bound = 120 gives f-new = 125.
8
Example (same figure):
f-limited search with f-bound = 125: SUCCESS.
9
f-limited search:
1. QUEUE <-- path only containing the root;
   f-bound <-- <some natural number>;
   f-new <-- ∞
2. WHILE QUEUE is not empty AND goal is not reached DO
      remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths with f(path) ≤ f-bound to the front of QUEUE;
      f-new <-- minimum of the current f-new and of the minimum of
                the new f-values which are larger than f-bound
3. IF goal reached THEN success;
   ELSE report f-new
10
Iterative deepening A*:
1. f-bound <-- f(S)
2. WHILE goal is not reached DO
      perform f-limited search;
      f-bound <-- f-new
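The two pieces of pseudocode above can be sketched together in Python. The toy graph, its heuristic values, and the function names below are illustrative assumptions, not from the slides; the f-limited step is written recursively (depth-first) rather than with an explicit QUEUE, which gives the same expansion order.

```python
import math

GRAPH = {            # node -> list of (child, step cost); made-up example
    'S': [('A', 1), ('B', 2)],
    'A': [('C', 3)],
    'B': [('C', 1)],
    'C': [('G', 2)],
    'G': [],
}
H = {'S': 4, 'A': 4, 'B': 2, 'C': 2, 'G': 0}   # admissible heuristic (assumed)

def f_limited(path, g, f_bound, goal):
    """Depth-first search restricted to f(path) <= f-bound.
    Returns (solution path or None, minimal f-value seen above f-bound)."""
    node = path[-1]
    f = g + H[node]
    if f > f_bound:
        return None, f                 # candidate for f-new
    if node == goal:
        return path, f
    f_new = math.inf
    for child, cost in GRAPH[node]:
        if child in path:              # reject paths with loops
            continue
        found, f_child = f_limited(path + [child], g + cost, f_bound, goal)
        if found:
            return found, f_child
        f_new = min(f_new, f_child)
    return None, f_new

def ida_star(start, goal):
    f_bound = H[start]                 # initially: f(S)
    while True:
        found, f_new = f_limited([start], 0, f_bound, goal)
        if found:
            return found
        if f_new == math.inf:          # no path exists at all
            return None
        f_bound = f_new                # continue with minimal f(succ) > f-bound

print(ida_star('S', 'G'))
```

On this graph the bound grows 4 → 5, and the second f-limited search finds the optimal path S-B-C-G of cost 5.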
11
Properties of IDA*
Complete and optimal: under the same conditions as for A*.
Memory: let δ be the minimal cost of an arc; then memory = O(b · cost(B)/δ), since only one depth-first path (plus siblings) of at most cost(B)/δ steps is stored.
Speed: depends very strongly on how many f-contours there are!! In the worst case, f(p) ≠ f(q) for every 2 paths: 1 + 2 + … + N = O(N²) expansions.
12
Why is this optimal, even without monotonicity ??
In absence of monotonicity we can have search spaces like: [figure: a tree with nodes S, A, B, C, D, E, F carrying f-values 100, 120, 150, 90, 60, 140 along its paths]. If f can decrease, how can we be sure that the first goal reached is the optimal one??? HOMEWORK
13
Properties: practical
If there is only a small number of different contours: IDA* is one of the very best optimal search techniques! Example: the 8-puzzle. But also for MANY other practical problems. Else, the gain of the extended f-contour is not sufficient to compensate for recalculating the previous ones. In such cases: increase the f-bound by a fixed number ε at each iteration. Effects: fewer re-computations, BUT optimality is lost: the obtained solution can deviate up to ε from the optimal cost.
14
Simplified Memory-bounded A*
15
Simplified Memory-bounded A*
Fairly complex algorithm. Optimizes A* to work within reduced memory.
Key idea (memory of 3 nodes only): S with children A (f=15) and B (f=13). If memory is full and we need to generate an extra node (C, f=18):
- Remove the highest-f-value leaf from the QUEUE (A).
- Remember the f-value of the best 'forgotten' child in each parent node ((15) in S).
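The eviction step alone can be sketched as follows. This is not the full SMA* algorithm; the data layout (`queue` as a leaf → (f, parent) mapping, a `forgotten` table) and all names are assumptions made for illustration.

```python
def evict_worst_leaf(queue, forgotten):
    """queue: dict leaf -> (f_value, parent).
    Removes the highest-f leaf and records its f-value in the parent's
    'forgotten' table (keeping the best, i.e. lowest, forgotten f)."""
    worst = max(queue, key=lambda n: queue[n][0])
    f_worst, parent = queue.pop(worst)
    forgotten[parent] = min(forgotten.get(parent, float('inf')), f_worst)
    return worst

# Slide situation: memory of 3 nodes (S plus two leaves); generating C
# would overflow, so the highest-f leaf A is dropped and S remembers (15).
queue = {'A': (15, 'S'), 'B': (13, 'S')}
forgotten = {}
evicted = evict_worst_leaf(queue, forgotten)
print(evicted, forgotten)
```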
16
Generate children 1 by 1: when expanding a node (S), only add its children one at a time to the QUEUE (we use left-to-right order: first add A, later B). This avoids memory overflow and allows monitoring whether we need to delete another node.
17
Too long path: give up. If extending a node would produce a path longer than the memory allows (memory of 3 nodes only): give up on this path (C). Set the f-value of the node (C) to ∞ (to remember that we can't find a path here).
18
Adjust f-values: if all children M of a node N have been explored and for all M: f(S…M) ≥ f(S…N), then reset: f(S…N) = min { f(S…M) | M child of N }. A path through N needs to go through 1 of its children! Example: f(S) = 13 with children A (f=15) and B (f=24) gives the better estimate f(S) = 15.
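The backup rule above amounts to a one-line function. A minimal sketch, with assumed names; the condition mirrors the slide exactly (all children already explored, each child f at least the parent's f):

```python
def backup_f(f_node, f_children):
    """Reset a fully-expanded node's f-value to the minimum of its
    children's f-values, per the SMA* adjustment rule on the slide."""
    if f_children and all(f >= f_node for f in f_children):
        return min(f_children)   # any path through N goes through a child
    return f_node                # rule does not apply; keep the old estimate

# Slide example: f(S)=13, children with f=15 and f=24 -> new f(S)=15
print(backup_f(13, [15, 24]))
```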
19
SMA*: an example
[Figure: search tree with root S, inner nodes A–F and goals G1–G4; node values g+h: 0+12=12, 8+5=13, 10+5=15, 24+0=24, 16+2=18, 20+0=20, 20+5=25, 30+5=35, 30+0=30, 24+5=29. The trace shows the contents of the 3-node memory after each expansion: first S (12); then A (15) and B (13); then A is forgotten, S remembers (15), and D (18) is generated.]
20
Example: continued
[Figure: the remaining steps of the SMA* trace on the same tree; forgotten f-values are kept in parentheses in the parent, e.g. (15) and (24) in S, until a goal is reached.]
21
SMA*: properties
- Complete: if the available memory allows storing the shortest path.
- Optimal: if the available memory allows storing the best path. Otherwise: returns the best path that fits in memory.
- Memory: uses whatever memory is available.
- Speed: if there is enough memory to store the entire tree: same as A*.
22
Concluding comments: More on non-optimal methods; The optimality trade-off
23
Non-optimal variants. Sometimes 'non-admissible' heuristics are desirable. Example (8-puzzle): [figure: one board configuration is better than another because of symmetry], but this cannot be captured with an underestimating h. Use non-admissible A*.
24
Non-optimal variants (2)
Reduce the weight of the cost in f:
f(S…N) = α · cost(S…N) + h(N), with 0 ≤ α ≤ 1
α = 0: pure heuristic best-first (greedy search)
α = 1: A*
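The weighted evaluation function is a one-liner; the example numbers below are assumptions for illustration:

```python
def f(cost_so_far, h, alpha):
    """Weighted evaluation f = alpha * g + h, with 0 <= alpha <= 1."""
    return alpha * cost_so_far + h

# alpha = 0: pure heuristic best-first (greedy) -- the path cost is ignored
print(f(10, 4, 0.0))
# alpha = 1: ordinary A* -- f = g + h
print(f(10, 4, 1.0))
```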
25
Approaching the complexity
Optimal path finding is by nature NP-complete! Polynomial parallel algorithms exist, but ALL KNOWN sequential algorithms are exponential.
The trade-off: either use algorithms that
- ALWAYS give the optimal path,
- in the worst case (depending on the actual search space!) behave exponentially,
- in the average case are polynomial.
26
Complexity continued:
OR, use algorithms that:
- ALWAYS produce solutions in polynomial time,
- in the worst case (actual search space) return a solution far from the optimal one,
- in the average case return a solution close to the optimal one.
Examples: local search, non-admissible A* with α < 1.
27
Example: traveling salesman with minimal cost
Assume there are N cities: city1, city2, …, cityN; at each step, move to the nearest not-yet-visited city.
Speed: ~ N² (= (N−1) + (N−2) + … + 1 comparisons).
Worst case: (solution found) / (best solution) ≤ log2(N+1)/2.
Average case: solution found ~20% longer than the best.
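The greedy tour construction above can be sketched in a few lines. The distance matrix and city count are made-up assumptions; the point is the (N−1) + (N−2) + … + 1 ≈ N² pattern of comparisons:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP tour: repeatedly visit the closest unvisited city.
    dist is a symmetric N x N distance matrix. ~N^2 comparisons total."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:                              # N-1 iterations...
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])  # ...each scanning
        tour.append(nxt)                                   # the remaining cities
        unvisited.remove(nxt)
    return tour

# made-up 4-city instance
D = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
print(nearest_neighbour_tour(D))
```

On this instance the greedy tour happens to be 0 → 1 → 2 → 3; in general it is only close to optimal, per the bounds on the slide.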