Informed Search Methods (Chapter 4, Spring 2004)

What we'll learn
- What informed search methods are
- How to use problem-specific knowledge
- How to optimize a solution
- Why informed search algorithms are more efficient in most cases

Best-First Search
- An evaluation function gives a measure of which node to expand next
- Greedy search: minimize the estimated cost to reach a goal by expanding, at each step, the node n with the lowest heuristic value h(n)
- Example heuristic: straight-line distance on the simple Romania map (Fig 4.1)
- Finding the route using greedy search – example (Fig 4.2); a code sketch follows below
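To make the greedy strategy concrete, here is a minimal Python sketch. The problem interface is an assumption for illustration, not the chapter's own code: successors(state) yields (next_state, step_cost) pairs, goal_test(state) checks for a goal, and h(state) is a heuristic such as straight-line distance.

```python
import heapq
from itertools import count

def greedy_best_first_search(start, goal_test, successors, h):
    """Greedy best-first search: always expand the frontier node with the lowest h(n)."""
    tie = count()                                  # tie-breaker so states are never compared
    frontier = [(h(start), next(tie), start, [start])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path                            # list of states from start to goal
        if state in explored:
            continue
        explored.add(state)
        for next_state, _ in successors(state):    # greedy search ignores the step cost
            if next_state not in explored:
                heapq.heappush(frontier, (h(next_state), next(tie),
                                          next_state, path + [next_state]))
    return None                                    # no goal reachable
```

Because nodes are ranked by h alone, the path returned is not guaranteed to be the cheapest one.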

Best-first search (2)
- h(n) is independent of the path cost g(n)
- Minimizing the total path cost: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n
- An admissible heuristic h never overestimates the cost to the goal: it is optimistic

A* search
- How it works (Fig 4.3)
- Characteristics of A*: monotonicity (consistency) – f values are nondecreasing along any path; with tree search, admissibility alone ensures optimality
- Contours (Fig 4.4) – from circles to ovals (ellipses)
- Proof of the optimality of A*
- The completeness of A* (Fig 4.4, contours)
- Complexity of A* (time and space): for most problems, the number of nodes within the goal contour is still exponential in the length of the solution; a sketch of the algorithm follows below
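The same frontier mechanics as the greedy sketch above, but ordered by f(n) = g(n) + h(n); again the successors/goal_test/h interface is an assumption for illustration, not the chapter's own code.

```python
import heapq
from itertools import count

def a_star_search(start, goal_test, successors, h):
    """A* search: expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    tie = count()
    frontier = [(h(start), next(tie), 0, start, [start])]    # (f, tie, g, state, path)
    best_g = {start: 0}                                      # cheapest g found so far per state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g                                   # solution path and its cost
        if g > best_g.get(state, float("inf")):
            continue                                         # stale entry: a cheaper path exists
        for next_state, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(next_state, float("inf")):
                best_g[next_state] = g2
                heapq.heappush(frontier, (g2 + h(next_state), next(tie),
                                          g2, next_state, path + [next_state]))
    return None, float("inf")                                # no goal reachable
```

With an admissible h the first goal popped from the frontier is optimal; with a consistent (monotone) h no state is ever reinserted with a better g after it has been expanded.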

Different Search Strategies
- Uniform-cost search: minimize the path cost so far, g(n)
- Greedy search: minimize the estimated cost to the goal, h(n)
- A*: minimize the total path cost, f(n) = g(n) + h(n)
- Time and space issues of A*
  - Designing good heuristic functions
  - A* usually runs out of space long before it runs out of time

Heuristic Functions
- An example: the 8-puzzle (Fig 4.7)
- How simple can a heuristic be? Each tile's distance to its correct position, summed using the Manhattan distance
- What is a good heuristic? An effective branching factor close to 1 (why?)
- Value of h
  - Not too large: it must remain admissible (why?)
  - Not too small: it becomes ineffective – contours relax from ovals back toward circles, expanding all nodes with f(n) < f*
- Goodness measure: number of nodes expanded (Fig 4.8); example heuristics are sketched below
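Two classic 8-puzzle heuristics, sketched under the assumption that a state is a 9-tuple in row-major order with 0 for the blank (the state encoding is an illustration, not the text's): the number of misplaced tiles (used as a feature on a later slide) and the Manhattan distance.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout, 0 = blank

def misplaced_tiles(state, goal=GOAL):
    """h1: number of tiles (ignoring the blank) not on their goal square."""
    return sum(1 for tile, target in zip(state, goal) if tile != 0 and tile != target)

def manhattan_distance(state, goal=GOAL):
    """h2: sum over tiles of horizontal + vertical distance to the goal square."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = divmod(i, 3)
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total
```

Both are admissible: each move relocates one tile by one square, so neither count can exceed the true number of moves remaining.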

- Domination translates directly into efficiency: a larger h means a smaller effective branching factor
- If h2 >= h1 everywhere, is h2 always at least as good as h1? Yes: A* expands every node with h(n) < C* - g(n), so any node expanded under h2 satisfies h1(n) <= h2(n) < C* - g(n) and is expanded under h1 as well
- Inventing heuristic functions: work on relaxed problems (remove some constraints)

8-puzzle revisited
- Definition: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank
- Relaxations obtained by removing one or both conditions:
  - A tile can move from A to B if A is adjacent to B
  - A tile can move from A to B if B is blank
  - A tile can move from A to B
- Deriving a heuristic from the solution cost of a subproblem (Fig 4.9)

- If we have admissible heuristics h1, …, hm and none dominates the others, we can use, for node n, h(n) = max(h1(n), …, hm(n)); see the sketch below
- Feature selection and combination: use only relevant features, e.g. "number of misplaced tiles" as a feature
- The cost of calculating the heuristic should be no more than the cost of expanding a node; otherwise we need to rethink
- Learning heuristics from experience: each optimal solution to the 8-puzzle provides a learning example
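The max of admissible heuristics is itself admissible and dominates each component, so combining them is cheap; a one-line sketch reusing the hypothetical 8-puzzle heuristics defined above:

```python
def combined_heuristic(state, heuristics=(misplaced_tiles, manhattan_distance)):
    """h(n) = max(h1(n), ..., hm(n)): admissible whenever every hi is admissible."""
    return max(h(state) for h in heuristics)
```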

Improving A*: memory-bounded heuristic search
- Iterative-deepening A* (IDA*), sketched below
  - Uses the f-cost (g + h) rather than the depth as the cutoff
  - The new cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration
  - Space complexity O(bd)
- Recursive best-first search (RBFS)
  - Best-first search using only linear space (Fig 4.5)
  - Replaces the f-value of each node along the path with the best f-value of its children (Fig 4.6)
  - Space complexity O(bd)
- Simplified memory-bounded A* (SMA*)
  - IDA* and RBFS use too little memory, causing excessive node regeneration
  - Expand the best leaf until memory is full, then drop the worst leaf node (highest f-value), backing its value up to its parent
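A minimal IDA* sketch, using the same assumed successors/goal_test/h interface as the earlier A* sketch; each iteration is a depth-first search bounded by an f-cost cutoff.

```python
def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first contours bounded by a growing f-cost cutoff."""
    def bounded_dfs(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return None, f                       # report the f-cost that exceeded the cutoff
        if goal_test(state):
            return path, f
        next_cutoff = float("inf")
        for next_state, step_cost in successors(state):
            if next_state in path:               # avoid cycles along the current path
                continue
            found, value = bounded_dfs(next_state, g + step_cost, cutoff,
                                       path + [next_state])
            if found is not None:
                return found, value
            next_cutoff = min(next_cutoff, value)
        return None, next_cutoff                 # smallest f-cost beyond the cutoff

    cutoff = h(start)
    while cutoff != float("inf"):
        solution, cutoff = bounded_dfs(start, 0, cutoff, [start])
        if solution is not None:
            return solution
    return None                                  # search space exhausted without a goal
```

Only the current path is stored, which is where the O(bd) space bound on the slide comes from.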

Local Search Algorithms and Optimization Problems
- Global and local optima (Fig 4.10): moving from the current state toward the global maximum
- Hill-climbing (maximization); sketched below together with simulated annealing
  - Well-known drawbacks (Fig 4.13): local maxima, plateaus, ridges
  - Random restarts
- Simulated annealing
  - Gradient descent (minimization) that escapes local minima by controlled bouncing
- Local beam search: keep track of k states instead of just one
- Genetic algorithms
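Minimal sketches of both local search methods, under the assumption that neighbors(state) returns a list of neighboring states and value(state) is the objective to maximize; the exponential cooling schedule is one common choice, not the chapter's.

```python
import math
import random

def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: stop at the first local maximum or plateau."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current                        # no uphill neighbor: local maximum
        current = best

def simulated_annealing(start, neighbors, value, t0=1.0, decay=0.995, t_min=1e-3):
    """Accept downhill moves with probability exp(delta / T), escaping local maxima."""
    current, temperature = start, t0
    while temperature > t_min:
        options = neighbors(current)
        if not options:
            return current
        candidate = random.choice(options)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate                   # always take uphill moves, sometimes downhill
        temperature *= decay                      # cooling schedule (assumed exponential decay)
    return current
```

Random-restart hill climbing simply calls hill_climbing from several random starting states and keeps the best result.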

Online Search
- Offline search: compute a complete solution before acting
- Online search: interleave computation and action
- Solves exploration problems where the states and actions are unknown to the agent
- Good for domains where there is a penalty for computing too long, and for stochastic domains

Online search problems
- An agent knows (e.g., Fig 4.18; a sketch of this interface follows below):
  - Actions(s), the actions legal in state s
  - The step-cost function c(s, a, s')
  - Goal-Test(s)
- It may also have a memory of the states visited and an admissible heuristic from the current state to the goal state
- Objective: reach a goal state while minimizing cost
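A small sketch of that interface in Python; the names (OnlineProblem, actions, step_cost, goal_test, h) are hypothetical and only mirror the pieces of knowledge listed above.

```python
from dataclasses import dataclass
from typing import Callable, Hashable, List

State = Hashable                                        # e.g. a cell in the maze of Fig 4.18
Action = str

@dataclass
class OnlineProblem:
    """What an online search agent is assumed to know before acting."""
    actions: Callable[[State], List[Action]]            # Actions(s): legal actions in s
    step_cost: Callable[[State, Action, State], float]  # c(s, a, s')
    goal_test: Callable[[State], bool]                  # Goal-Test(s)
    h: Callable[[State], float]                         # admissible estimate to the goal

# Note: the agent only learns which state an action leads to by executing it.
```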

Measuring its performance
- Competitive ratio: the true path cost divided by the path cost the agent would incur if it knew the search space in advance
- The best achievable competitive ratio can be infinite: if some actions are irreversible, the agent may reach a dead end (Fig 4.19 (a))
- An adversary argument (Fig 4.19 (b)): no bounded competitive ratio can be guaranteed if there are paths of unbounded cost

Online search agents
- An online agent can expand only a node that it physically occupies, so it should expand nodes in a local order
- Online depth-first search (Fig 4.20): backtracking requires that actions are reversible
- Hill-climbing search keeps one current state in memory
  - It can get stuck in a local minimum, and random restart does not work here
  - A random walk instead selects one of the available actions from the current state at random; it can be very slow (Fig 4.21)
  - Augmenting hill climbing with memory rather than randomness is more effective: a cost estimate H(s) is updated as the agent gains experience (Fig 4.22); a sketch follows below
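A hedged sketch of one decision step of such a memory-augmented online agent, in the spirit of LRTA* (Fig 4.22); it reuses the hypothetical OnlineProblem interface above, and the table and variable names (result, H) are illustrative, not the textbook's pseudocode.

```python
def online_agent_step(s_prev, prev_action, s_now, problem, result, H):
    """Choose the next action from the state the agent occupies, updating H(s) with experience.

    result[(s, a)] records the outcomes of actions already executed;
    H[s] is the current cost-to-goal estimate, initialized from the heuristic h(s).
    """
    if problem.goal_test(s_now):
        return None                                       # goal reached: stop

    def estimated_cost(s, a):
        s_next = result.get((s, a))
        if s_next is None:                                # untried action: be optimistic
            return problem.h(s)
        return problem.step_cost(s, a, s_next) + H.setdefault(s_next, problem.h(s_next))

    H.setdefault(s_now, problem.h(s_now))
    if s_prev is not None:
        result[(s_prev, prev_action)] = s_now             # remember what the last action did
        # back up the best estimate achievable from the previous state
        H[s_prev] = min(estimated_cost(s_prev, a) for a in problem.actions(s_prev))

    # greedily pick the apparently cheapest action from the current state
    return min(problem.actions(s_now), key=lambda a: estimated_cost(s_now, a))
```

The updates to H(s) are what let the agent escape the local minima that defeat plain online hill climbing.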

Summary
- Heuristics are the key to reducing search costs: f(n) = g(n) + h(n)
- A* is complete, optimal, and optimally efficient among all optimal search algorithms, but it usually runs out of space long before it runs out of time
- Iterative improvement and local search algorithms are memory efficient, but they can get stuck in local optima
- Online search differs from offline search: it interleaves computation and action