Artificial Intelligence for Games: Informed Search (2), Patrick Olivier

Heuristic functions

Sample heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance

h1(S) = ?
h2(S) = ?

Heuristic functions

Sample heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance

h1(S) = 8
h2(S) = 18

Dominance:
– h2(n) ≥ h1(n) for all n (both admissible)
– h2 is better for search (closer to the perfect heuristic)
– fewer nodes need to be expanded
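
A minimal sketch of the two heuristics (an illustration, not code from the lecture), assuming a state is a tuple of nine values read row by row, with 0 standing for the blank and the goal laid out 1 to 8 followed by the blank:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of the Manhattan distances of every tile from its goal square."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total

# Every misplaced tile is at least one move from its goal square,
# so h2(n) >= h1(n) for any state: h2 dominates h1.
state = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # an arbitrary (assumed) scrambled state
print(h1(state), h2(state))           # prints 6 and 14 for this layout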

Example of dominance

– Randomly generate 8-puzzle problems
– 100 examples for each solution depth
– Contrast the behaviour of the heuristics and strategies

(Table of nodes expanded at each solution depth d for IDS, A*(h1) and A*(h2); the figures are not reproduced here.)
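
To make the comparison concrete, here is a small sketch of such an experiment (not the lecture's actual code): it generates solvable 8-puzzle instances by a random walk back from the goal and counts the nodes A* expands under each heuristic. The state representation, goal layout and walk length are assumptions.

import heapq
import random

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def neighbours(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def h1(state):
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    return sum(abs(i // 3 - GOAL.index(t) // 3) + abs(i % 3 - GOAL.index(t) % 3)
               for i, t in enumerate(state) if t != 0)

def astar_expansions(start, h):
    """Run A* with heuristic h and return the number of nodes expanded."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                      # stale queue entry, skip it
        expanded += 1
        if state == GOAL:
            return expanded
        for succ in neighbours(state):
            g2 = g + 1
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ))
    return expanded

def random_instance(moves=20):
    """Random walk backwards from the goal, so the instance is always solvable."""
    state = GOAL
    for _ in range(moves):
        state = random.choice(list(neighbours(state)))
    return state

start = random_instance()
print("A*(h1) expansions:", astar_expansions(start, h1))
print("A*(h2) expansions:", astar_expansions(start, h2))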

A* enhancements & local search Memory enhancements –IDA*: Iterative-Deepening A* –SMA*: Simplified Memory-Bounded A* Other enhancements (next lecture) –Dynamic weighting –LRTA*: Learning Real-time A* –MTS: Moving target search Local search (next lecture) –Hill climbing & beam search –Simulated annealing & genetic algorithms

Improving A* performance

Improving the heuristic function:
– not always easy for path-planning tasks

Implementation of A*:
– a key aspect for large search spaces

Relaxing the admissibility condition:
– trading optimality for speed

IDA*: iterative deepening A*

– Reduces the memory requirements of A* without sacrificing optimality
– Cost-bounded iterative depth-first search with linear memory requirements
– Expands all nodes within an f-cost contour
– Stores the lowest f-cost that exceeded the limit as the cost limit for the next iteration
– Repeats with that next higher f-cost limit
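
A minimal sketch of the idea (an illustration, not the lecture's reference implementation): a recursive depth-first search bounded by an f-cost limit, restarted with the lowest f-cost that exceeded the previous limit. Here successors(state), goal_test(state) and h(state) are assumed to be supplied by the problem, and step costs are taken to be 1.

import math

def ida_star(start, goal_test, successors, h):
    bound = h(start)

    def search(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f                      # smallest f-cost over the bound so far
        if goal_test(state):
            return path                   # solution found: return the path
        minimum = math.inf
        for succ in successors(state):
            if succ in path:              # avoid cycles along the current path
                continue
            result = search(path + [succ], g + 1, bound)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result                 # solution path
        if result == math.inf:
            return None                   # no solution at any bound
        bound = result                    # next contour: lowest f-cost that overflowed

# Example usage (with the 8-puzzle helpers sketched earlier):
#   ida_star(start, lambda s: s == GOAL, neighbours, h2)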

IDA*: exercise

Order of expansion:
– Move space up
– Move space down
– Move space left
– Move space right

Evaluation function:
– g(n) = number of moves
– h(n) = misplaced tiles

Expand the state space to a depth of 3 and calculate the evaluation function.

(The start-state and goal-state grids from the slide are not reproduced here; X marks the blank square.)

IDA*: f-cost limit = 3

(Search-tree diagram showing f = g + h for each node expanded within the f-cost contour of 3; the lowest f-cost that exceeds the limit becomes the new limit: next f-cost = 4.)

IDA*: f-cost limit = 4

(Search-tree diagram showing f = g + h for each node expanded within the f-cost contour of 4, and the next f-cost limit taken from the nodes that exceed it.)

Simplified memory-bounded A* (SMA*)

– When we run out of memory, drop the costliest nodes
– Back their cost up to the parent (we may need them later)

Properties:
– Utilises whatever memory is available
– Avoids repeated states (as far as memory allows)
– Complete (if there is enough memory to store the solution path)
– Optimal (or returns the best solution reachable within the memory limit)
– Optimally efficient (with memory caveats)

Simple memory-bounded A* (worked example: search-tree diagram, not reproduced here)

Class exercise

– Use the state space given in the example
– Execute the SMA* algorithm over this state space
– Be sure that you understand the algorithm!

Simple memory-bounded A* (search-tree diagram for the exercise, not reproduced here)

Trading optimality for speed

– The admissibility condition guarantees that an optimal path is found
– In path planning a near-optimal path can be satisfactory
– Try to minimise search effort instead of minimising path cost:
  i.e. find a near-optimal path (quickly)

Weighting

f_w(n) = (1 - w)·g(n) + w·h(n)
– w = 0.0: f = g (uniform-cost search; breadth-first when step costs are uniform)
– w = 0.5: A*
– w = 1.0: greedy best-first search, with f = h

– Trading optimality for speed
– Weight towards h when confident in the estimate of h
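
A small sketch of the weighted evaluation function (an illustration with assumed values, not code from the slides), showing how the weight w shifts the ordering of the frontier from cost-driven to heuristic-driven:

def weighted_f(g, h, w):
    """f_w(n) = (1 - w) * g(n) + w * h(n), used to order the frontier."""
    return (1.0 - w) * g + w * h

# Example: a node with g(n) = 6 and h(n) = 4 (assumed values).
for w in (0.0, 0.5, 1.0):
    print(w, weighted_f(6, 4, w))   # 6.0, 5.0, 4.0: h dominates as w grows

# With w = 0 the ordering depends only on g (uniform-cost behaviour);
# with w = 0.5 it matches A* up to a constant factor; with w = 1 it is
# greedy best-first, which is faster but no longer guaranteed optimal.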