1 Artificial Intelligence for Games Informed Search (2) Patrick Olivier p.l.olivier@ncl.ac.uk

2 Heuristic functions
Sample heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance
h1(S) = ?  h2(S) = ?

3 Heuristic functions
Sample heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance
h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18
Dominance:
– h2(n) ≥ h1(n) for all n (both admissible)
– h2 is better for search (closer to the perfect heuristic)
– fewer nodes need to be expanded
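The example state S that these values refer to appears only in the slide image, so as a stand-in the sketch below (an assumed illustration, not the lecture's code; the board encoding and function names are mine) evaluates both heuristics on the start and goal configuration used in the IDA* exercise on slide 8.

```python
# Minimal sketch of the two 8-puzzle heuristics (assumed illustration).
# Boards are 9-tuples read row by row, with 0 standing for the blank (X).

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal position."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

# Start and goal states from the IDA* exercise on slide 8.
start = (1, 2, 3, 6, 0, 4, 8, 7, 5)
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
print(h1(start, goal))   # 3 misplaced tiles (6, 8 and 7)
print(h2(start, goal))   # 2 + 1 + 1 = 4
```

For that configuration only tiles 6, 8 and 7 are out of place, so h1 = 3 and h2 = 4, and (as required for dominance) h2 ≥ h1 here too.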

4 Example of dominance
Randomly generate 8-puzzle problems (100 examples for each solution depth) and contrast the behaviour of the heuristics and strategies:

d        2    4     6     8      10     12     14    16    18    20    22     24
IDS      10   112   680   6384   47127  …      …     …     …     …     …      …
A*(h1)   6    13    20    39     93     227    539   1301  3056  7276  18094  39135
A*(h2)   6    12    18    25     39     73     113   211   363   676   1219   1641

5 A* enhancements & local search
Memory enhancements:
– IDA*: Iterative-Deepening A*
– SMA*: Simplified Memory-Bounded A*
Other enhancements (next lecture):
– Dynamic weighting
– LRTA*: Learning Real-time A*
– MTS: Moving target search
Local search (next lecture):
– Hill climbing & beam search
– Simulated annealing & genetic algorithms

6 Improving A* performance
Improving the heuristic function:
– not always easy for path-planning tasks
Implementation of A*:
– a key aspect for large search spaces
Relaxing the admissibility condition:
– trading optimality for speed

7 IDA*: iterative-deepening A*
– reduces the memory constraints of A* without sacrificing optimality
– cost-bounded iterative depth-first search with linear memory requirements
– expands all nodes within a cost contour
– stores the f-cost (cost limit) for the next iteration
– repeats with the next-highest f-cost
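The description above maps quite directly onto code. The following is a minimal sketch of the idea, not the lecture's own implementation: a depth-first search bounded by an f-cost limit, where the smallest f-value that exceeded the limit seeds the next iteration. The function and parameter names (successors, goal_test, h) are assumptions supplied by the caller.

```python
# Minimal IDA* sketch (assumed illustration).
# successors(state) yields (next_state, step_cost); h(state) is an
# admissible estimate of the remaining cost; states must be hashable.

import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first search bounded by an f-cost limit."""
    bound = h(start)                      # first contour: f(start) = 0 + h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:                     # node lies outside the current contour
            return f                      # report f so it can seed the next bound
        if goal_test(node):
            return "FOUND"
        minimum = math.inf                # smallest f-cost seen beyond the bound
        for nxt, cost in successors(node):
            if nxt in path:               # avoid trivial cycles along the path
                continue
            path.append(nxt)
            result = search(g + cost, bound)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result == "FOUND":
            return path                   # only the current path is stored
        if result == math.inf:
            return None                   # no solution
        bound = result                    # repeat with the next-highest f-cost
```

Combined with a misplaced-tiles heuristic and a successor function like the ones sketched around it, the cost bound on the slide-8 exercise puzzle should grow from 3 to 4, with the 4-move solution found inside the second contour, matching the traces on slides 9 and 10.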

8 IDA*: exercise
Order of expansion:
– Move space up
– Move space down
– Move space left
– Move space right
Evaluation function:
– g(n) = number of moves
– h(n) = misplaced tiles
Expand the state space to a depth of 3 and calculate the evaluation function.
Start state:
1 2 3
6 X 4
8 7 5
Goal state:
1 2 3
8 X 4
7 6 5
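As a starting point for the exercise, the sketch below (again an assumed illustration; the flat 9-tuple encoding of the board is mine) generates the successors of the start state in the stated order and prints f(n) = g(n) + h(n) for the first level of the tree, which can be checked against the trace on slide 9.

```python
# Expand the exercise's start state one level, in the stated move order
# (blank up, down, left, right), with h = number of misplaced tiles.

START = (1, 2, 3, 6, 0, 4, 8, 7, 5)   # X (the blank) encoded as 0
GOAL  = (1, 2, 3, 8, 0, 4, 7, 6, 5)

# Moving the blank by -3/+3 shifts it one row up/down on the flat board,
# and by -1/+1 one column left/right.
MOVES = [("up", -3), ("down", +3), ("left", -1), ("right", +1)]

def misplaced(state):
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def successors(state):
    blank = state.index(0)
    for name, delta in MOVES:
        target = blank + delta
        if target < 0 or target > 8:
            continue
        if delta in (-1, +1) and target // 3 != blank // 3:
            continue                      # no wrapping around row edges
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        yield name, tuple(board)

for name, child in successors(START):
    g, h = 1, misplaced(child)
    print(f"{name}: f = {g} + {h} = {g + h}")
```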

9 IDA*: f-cost = 3
(Diagram: the start state 1 2 3 / 6 X 4 / 8 7 5 with f = 0+3 = 3 is expanded; all four successors exceed the limit, and the smallest of their f-values, 4, becomes the next f-cost limit.)

10 IDA*: f-cost = 4
(Diagram: the second iteration, with the f-cost limit raised to 4, expands the nodes with f ≤ 4; the f-values beyond the limit would set the next limit to 5, but the goal state 1 2 3 / 8 X 4 / 7 6 5 is reached inside this contour with f = 4+0 = 4.)

11 Simplified memory-bounded A*
SMA*:
– when we run out of memory, drop costly nodes
– back their cost up to the parent (we may need them later)
Properties:
– utilises whatever memory is available
– avoids repeated states (as memory allows)
– complete (if there is enough memory to store the path)
– optimal (or the best reachable within the memory limit)
– optimally efficient (with memory caveats)
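A heavily simplified sketch of the memory-bounding step described above, under the assumption that each node stores its parent and an f-value; the names are mine, and full SMA* also re-expands forgotten subtrees when everything still in memory looks worse, which is omitted here.

```python
# Simplified illustration of SMA*'s memory-bounding step: when the frontier
# exceeds the memory limit, drop the leaf with the worst f-value and back its
# f up into its parent, so the parent remembers how promising the forgotten
# subtree was and can be reconsidered later.

import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object
    g: float
    f: float
    parent: Optional["Node"] = None
    # best f-value among descendants that have been dropped from memory
    best_forgotten_f: float = math.inf

def drop_worst_leaf(frontier: list) -> Node:
    """Remove the highest-f node from a non-empty frontier, backing its cost up."""
    worst = max(frontier, key=lambda n: n.f)
    frontier.remove(worst)
    if worst.parent is not None:
        # the parent now records the cheapest known route through the
        # subtree it has just forgotten
        worst.parent.best_forgotten_f = min(worst.parent.best_forgotten_f,
                                            worst.f)
    return worst
```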

12 Simple memory-bounded A*

13 Class exercise
Use the state space given in the example.
Execute the SMA* algorithm over this state space.
Be sure that you understand the algorithm!

14 Simple memory-bounded A*

15–21 (diagram-only slides, no transcript text)

22 Trading optimality for speed…
The admissibility condition guarantees that an optimal path is found.
In path planning a near-optimal path can be satisfactory.
Try to minimise search instead of minimising cost:
– i.e. find a near-optimal path (quickly)

23 Weighting…
fw(n) = (1 − w)·g(n) + w·h(n)
– w = 0.0 (breadth-first)
– w = 0.5 (A*)
– w = 1.0 (best-first, with f = h)
Trading optimality for speed: weight towards h when confident in the estimate of h.
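As a small illustration (the function name is mine, not from the lecture), the weighted evaluation function is a one-liner; since (1 − 0.5)·g + 0.5·h = 0.5·(g + h), w = 0.5 orders nodes exactly as standard A* does.

```python
# Sketch of the weighted evaluation function fw(n) = (1 - w) * g(n) + w * h(n).
# g and h are the usual path cost so far and heuristic estimate for node n.

def f_weighted(g: float, h: float, w: float) -> float:
    """Weighted A* evaluation.

    w = 0.0 orders nodes by g alone (the slide's breadth-first case),
    w = 0.5 gives the same ordering as g + h, i.e. standard A*,
    w = 1.0 orders nodes by h alone (greedy best-first search).
    """
    return (1.0 - w) * g + w * h
```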

