CSC344: AI for Games
Lecture 5: Advanced heuristic search
Patrick Olivier (p.l.olivier@ncl.ac.uk)
A* enhancements & local search
- Memory enhancements:
  - IDA*: Iterative-Deepening A*
  - SMA*: Simplified Memory-Bounded A*
- Speed enhancements:
  - dynamic weighting
  - LRTA*: Learning Real-Time A* (variants can be used to chase moving targets)
- Local search:
  - hill climbing & beam search
  - simulated annealing & genetic algorithms
Improving A* performance
- Improving the heuristic function
  - not always easy for path-planning tasks
- Implementation of A*
  - a key aspect for large search spaces
- Relaxing the admissibility condition
  - trading optimality for speed
IDA*: iterative deepening A*
- reduces the memory requirements of A* without sacrificing optimality
- cost-bounded iterative depth-first search with linear memory requirements
- expands all nodes within an f-cost contour
- stores the smallest f-cost that exceeded the current limit as the cost limit for the next iteration
- repeats with that next-highest f-cost
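A minimal sketch of this scheme in Python (the `h`, `successors`, and `is_goal` callables are assumed problem-specific inputs, not anything given on the slides):

```python
from math import inf

def ida_star(start, h, successors, is_goal):
    """IDA*: repeated depth-first searches, each bounded by an f-cost limit."""
    bound = h(start)
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                    # report the f-cost that breached the limit
        if is_goal(node):
            return True
        next_bound = inf
        for child, cost in successors(node):
            if child in path:           # avoid cycles on the current path
                continue
            path.append(child)
            result = dfs(g + cost, bound)
            if result is True:
                return True
            next_bound = min(next_bound, result)
            path.pop()
        return next_bound               # smallest f-cost that exceeded the bound

    while True:
        result = dfs(0, bound)
        if result is True:
            return path                 # optimal, given an admissible h
        if result == inf:
            return None                 # search space exhausted, no solution
        bound = result                  # cost limit for the next iteration
```

Only the current path is stored, hence the linear memory requirement.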
IDA*: exercise

Start state:    Goal state:
  1 2 3           1 2 3
  6 X 4           8 X 4
  8 7 5           7 6 5

Order of expansion: move space up, move space down, move space left, move space right.
Evaluation function: f(n) = g(n) + h(n), where g(n) = number of moves made
and h(n) = Manhattan distance.
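For concreteness, a small sketch of the h(n) used here, assuming a row-major 9-tuple encoding of the board (the encoding is my assumption; the slides only show the grids):

```python
def manhattan(state, goal):
    """Sum of Manhattan distances of each tile from its goal square.
    Boards are 9-tuples in row-major order; 0 marks the blank (X)."""
    total = 0
    for tile in range(1, 9):            # the blank is not counted
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

start = (1, 2, 3, 6, 0, 4, 8, 7, 5)     # the exercise's start state
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)     # the exercise's goal state
h_start = manhattan(start, goal)
```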
IDA*: first iteration, f-limit = 3
[Search-tree diagram: only the start state (f = 0+3 = 3) falls within the contour; its successors all exceed the limit, with f-costs of 1+3 = 4 and 1+4 = 5, so the smallest excess value, 4, becomes the next f-limit.]
IDA*: second iteration, f-limit = 4
[Search-tree diagram: the search now follows the branch 0+3 = 3 → 1+3 = 4 → 2+2 = 4 → 3+1 = 4 inside the contour; the goal is not reached, and the cheapest f-cost to exceed the limit is 5, which becomes the next f-limit.]
Simple memory-bounded A* (SMA*)
- When we run out of memory:
  - drop the costliest (highest f-cost) leaf nodes
  - back their cost up to the parent (we may need them later)
- Properties:
  - utilises whatever memory is available
  - avoids repeated states (as far as memory allows)
  - complete (if there is enough memory to store the shallowest solution path)
  - optimal (or returns the best solution reachable within the memory limit)
  - optimally efficient (with memory caveats)
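A full SMA* is beyond a slide-sized sketch, but the eviction-and-backup step it describes can be illustrated; `frontier` and `forgotten` below are assumed bookkeeping structures, and `node.parent` is an assumed field:

```python
from math import inf

def evict_worst_leaf(frontier, forgotten):
    """When memory is full, drop the highest-f leaf from the frontier and
    back its f-cost up to its parent, so the parent remembers the cheapest
    cost still achievable through the forgotten subtree.
    frontier: list of (f, node) pairs."""
    worst = max(range(len(frontier)), key=lambda i: frontier[i][0])
    f, node = frontier.pop(worst)
    if node.parent is not None:
        # Keep the minimum f among dropped children of this parent:
        # that is the best the forgotten part of the tree could offer.
        forgotten[node.parent] = min(forgotten.get(node.parent, inf), f)
    return node
```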
Simple memory-bounded A*
[Diagram slide: worked SMA* example; the figure is not captured in this transcript.]
Trading optimality for speed
- The admissibility condition guarantees that an optimal path is found
- In path planning, a near-optimal path can be satisfactory
- In which case one would try to minimise search instead of minimising cost
  - i.e. find a near-optimal path, but faster
Weighting
- f(n) = (1 − w)·g(n) + w·h(n)
  - w = 0.0: breadth-first (f = g)
  - w = 0.5: A* (f proportional to g + h)
  - w = 1.0: best-first, with f = h
- trading safety/optimality for speed
- weight towards h when confident in the estimate of h
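As a sketch, the weighted evaluation function can be dropped into any A* implementation; `g` and `h` here are assumed callables supplied by the caller:

```python
def weighted_f(g, h, w=0.5):
    """Dynamic-weighting evaluation function f(n) = (1 - w)*g(n) + w*h(n).
    w = 0.0 behaves breadth-first (f = g), w = 0.5 matches A* (f is a
    scaled g + h), and w = 1.0 is greedy best-first (f = h)."""
    def f(node):
        return (1 - w) * g(node) + w * h(node)
    return f
```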
“Real-time” search concepts
- In A* the whole path is computed off-line, before the agent walks the path
- This solution is only valid for static worlds
- If the world changes in the meantime, the initial path is no longer valid:
  - new obstacles appear
  - the position of the goal changes (e.g. a moving target)
“Real-time” definitions
- Off-line (non real-time): the solution is computed in a given amount of time before being executed
- Real-time: one move is computed at a time, and that move is executed before computing the next
- Anytime: the algorithm constantly improves its solution through time
Learning real-time A* (LRTA*)
[Diagram slide: worked example over states 1–4; the figure is not captured in this transcript.]
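Since the slide itself is a diagram, a minimal sketch of one LRTA* step may help (`h` is a mutable table of learned estimates, and `successors` is an assumed move generator):

```python
def lrta_star_step(state, h, successors):
    """One LRTA* move: look one step ahead, update h(state) from the best
    successor, then move there. h: dict state -> current estimate;
    successors(state) yields (next_state, step_cost) pairs."""
    best_next, best_f = None, float("inf")
    for nxt, cost in successors(state):
        f = cost + h.get(nxt, 0)        # one-step lookahead estimate
        if f < best_f:
            best_next, best_f = nxt, f
    # Learning: raise h(state) to the best lookahead value; repeated visits
    # sharpen the estimates, which lets the agent escape heuristic dips.
    h[state] = max(h.get(state, 0), best_f)
    return best_next
```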
Local search algorithms
- In many optimisation problems, paths are irrelevant; the goal state itself is the solution
- State space = set of "complete" configurations
- Find a configuration satisfying the constraints, e.g. n-queens: n queens on an n × n board with no two queens on the same row, column, or diagonal
- Use local search algorithms, which keep a single "current" state and try to improve it
Hill-climbing search
- "climbing Everest in thick fog with amnesia"
- we can set up an objective function to be "best" when large, and perform hill climbing...
- ...or we can use the previous formulation of the heuristic and minimise the objective function (perform gradient descent)
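A minimal steepest-ascent sketch (`objective` and `neighbours` are assumed problem-specific callables):

```python
def hill_climb(state, objective, neighbours, max_steps=10_000):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour; stop when no neighbour improves the objective, which may
    be a local rather than global maximum."""
    for _ in range(max_steps):
        best = max(neighbours(state), key=objective, default=None)
        if best is None or objective(best) <= objective(state):
            return state                # a peak (possibly only local)
        state = best
    return state
```

To perform gradient descent instead, minimise by negating the objective.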
Local maxima/minima
- Problem: depending on the initial state, the search can get stuck in local maxima/minima
[Figure: two example states with objective values 1/(1+H(n)) = 1/17 and 1/(1+H(n)) = 1/2, illustrating a local minimum.]
Simulated annealing search
- Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency and range
- applications: VLSI layout, scheduling
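A compact sketch of the idea (the geometric cooling schedule and its constants are assumptions, not from the slides):

```python
import math
import random

def simulated_annealing(state, objective, random_neighbour,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Always accept uphill moves; accept downhill ("bad") moves with
    probability exp(delta / T), which shrinks as the temperature cools,
    so bad moves become rarer and smaller in effect over time."""
    t = t0
    while t > t_min:
        nxt = random_neighbour(state)
        delta = objective(nxt) - objective(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                    # geometric cooling schedule
    return state
```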
Local beam search
- Keep track of k states rather than just one
- Start with k randomly generated states
- At each iteration, all the successors of all k states are generated
- If any one is a goal state, stop; else select the k best successors from the complete list and repeat
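A sketch of that loop (`random_state`, `successors`, `objective`, and `is_goal` are assumed problem-specific callables):

```python
def local_beam_search(random_state, successors, objective, is_goal,
                      k=10, max_iters=100):
    """Keep the k best states; each round, pool the successors of all k
    and keep the k best of the pool (unlike k independent hill climbs,
    effort concentrates on the most promising states)."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in beam for s in successors(state)]
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            break
        beam = sorted(pool, key=objective, reverse=True)[:k]
    return max(beam, key=objective)     # best state found if no goal
```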
Genetic algorithm search
- A successor state is generated by combining two parent states
- Start with k randomly generated states (the population)
- A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
- Evaluation function (fitness function): higher values for better states
- Produce the next generation of states by selection, crossover, and mutation
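A minimal bit-string GA sketch tying those pieces together (population size, generation count, and mutation rate are illustrative assumptions; `fitness` must return non-negative values for the weighted selection below):

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=200,
                      mutation_rate=0.01):
    """Selection (fitness-proportional), single-point crossover, and
    per-bit mutation over a population of bit-strings."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(pop_size):
            mum, dad = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, length)           # crossover point
            child = mum[:cut] + dad[cut:]               # combine parents
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                  # mutate
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```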