Informed Search Methods Copyright, 1996 © Dale Carnegie & Associates, Inc. Chapter 4 Spring 2005.

CSE 471/598, CBS 598 by H. Liu

What we'll learn
- What informed search methods are
- Why informed search algorithms are more efficient in most cases
- How to use problem-specific knowledge
- How to optimize a solution

Best-First Search
- An evaluation function gives a measure of which node to expand next
- Greedy search: minimize the estimated cost to reach a goal
  - At node n, use a heuristic function h(n); an example is straight-line distance (Fig 4.1)
- The simple Romania map
- Finding the route using greedy search: example (Fig 4.2)
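The greedy route-finding idea above can be sketched as follows. The road graph and straight-line-distance table below are an assumed small fragment of the Romania map, included only to make the example runnable; they are not a reproduction of Fig 4.1.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node
    with the smallest h(n), ignoring the path cost so far."""
    frontier = [(h(start), start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ, _cost in neighbors(node):
            if succ not in explored:
                heapq.heappush(frontier, (h(succ), succ, path + [succ]))
    return None

# Assumed fragment of the Romania map: (neighbor, road distance)
roads = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu', 97), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
    'Bucharest': [],
}
# Straight-line distances to Bucharest, used as h(n)
sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193,
       'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Bucharest': 0}

path = greedy_best_first('Arad', 'Bucharest', lambda n: roads[n], sld.get)
# Greedy follows the smallest h at each step: Arad, Sibiu, Fagaras, Bucharest.
# That route costs 450, not the cheapest 418 via Pitesti.
```

This illustrates the classic failure mode: greedy search reaches a goal quickly but does not minimize total path cost.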

Best-first search (2)
- h(n) is independent of the path cost g(n)
- Minimizing the total path cost: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n
- Admissible heuristic function: h never overestimates the true cost (it is optimistic)

A* search
- How it works (Fig 4.3)
- Characteristics of A*
  - Monotonicity (consistency): h is nondecreasing along any path
    - How to check: the triangle inequality, h(n) <= c(n, n') + h(n')
  - With tree search, admissibility of h suffices for optimality; monotonicity matters for graph search
  - Contours (Fig 4.4): from circle to oval (ellipse)
- Proof of the optimality of A*
- The completeness of A* (Fig 4.4, contours)
- Complexity of A* (time and space)
  - For most problems, the number of nodes within the goal-contour search space is still exponential in the length of the solution
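A minimal A* sketch of the f(n) = g(n) + h(n) scheme above, reusing the same assumed Romania fragment and straight-line-distance values as before (illustrative data, not the full figure):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand the frontier node with the smallest
    f(n) = g(n) + h(n). With an admissible h, the first goal
    popped off the frontier is an optimal solution."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g[node]:
            continue  # stale queue entry; a cheaper path was found later
        for succ, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None

roads = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu', 97), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
    'Bucharest': [],
}
sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193,
       'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Bucharest': 0}

path, cost = astar('Arad', 'Bucharest', lambda n: roads[n], sld.get)
# A* prefers the route through Rimnicu and Pitesti (total cost 418),
# the route greedy search misses.
```

Note how the g(n) term corrects greedy's shortsightedness: the detour through Pitesti wins because its total cost is lower, even though Fagaras looks closer by h alone.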

Different Search Strategies
- Uniform-cost search: minimize the path cost so far, g(n)
- Greedy search: minimize the estimated cost to the goal, h(n)
- A*: minimize the total path cost f(n) = g(n) + h(n)
- Time and space issues of A*
  - Designing good heuristic functions
  - A* usually runs out of space long before it runs out of time

Heuristic Functions
- An example: the 8-puzzle (Fig 4.7)
- How simple can a heuristic be?
  - The distance of each tile to its correct position
  - Using Manhattan distance
- What is a good heuristic?
  - Effective branching factor close to 1 (why?)
  - Value of h
    - Not too large: must be admissible (why?)
    - Not too small: ineffective (oval back to circle; expanding all nodes with f(n) < f*)
  - Goodness measures: number of nodes expanded and effective branching factor (Fig 4.8)
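The two 8-puzzle heuristics mentioned above can be written down directly. The scrambled start state below is an assumed configuration (tiles in row-major order, 0 for the blank), not necessarily the one in Fig 4.7.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (blank excluded) out of their goal positions."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum over tiles of horizontal + vertical distance to goal position."""
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = where[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # an assumed scrambled state

h1 = misplaced_tiles(start, goal)   # 8: every tile is out of place
h2 = manhattan(start, goal)         # 18
```

Both are admissible (every misplaced tile needs at least one move, and at least its Manhattan distance in moves), and h2 dominates h1, so A* with h2 expands no more nodes.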

Domination translates directly into efficiency
- A larger h means a smaller effective branching factor
- If h2 >= h1 (both admissible), is h2 always at least as good as h1?
  - Proof sketch: every node A* expands satisfies h1(n) <= h2(n) <= C* - g(n), so A* with h2 expands a subset of the nodes expanded with h1
- Inventing heuristic functions
  - Working on relaxed problems: remove some constraints

8-puzzle revisited
- Definition: a tile can move from A to B if A is horizontally or vertically adjacent to B and B is blank
- Relaxation by removing one or both conditions:
  - A tile can move from A to B if A is adjacent to B
  - A tile can move from A to B if B is blank
  - A tile can move from A to B
- Deriving a heuristic from the solution cost of a sub-problem (Fig 4.9)

- If we have admissible heuristics h1, …, hm and none dominates, we can use, for node n, h(n) = max(h1(n), …, hm(n))
- Feature selection and combination: use only relevant features
  - e.g., "number of misplaced tiles" as a feature
- The cost of computing the heuristic should not exceed the cost of expanding a node; otherwise, we need to rethink
- Learning heuristics from experience
  - Each optimal solution to the 8-puzzle provides a learning example
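The max-of-heuristics combination above is a one-liner. The two component heuristics below are hypothetical toys, chosen only so that neither dominates the other.

```python
def combine(heuristics):
    """Pointwise max of admissible heuristics: still admissible
    (each hi is a lower bound on the true cost, so their max is too)
    and it dominates every individual hi."""
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical component heuristics, neither dominating the other:
h1 = lambda n: n          # more informed for large n
h2 = lambda n: 10 - n     # more informed for small n

h = combine([h1, h2])
# h(2) == 8 (from h2) and h(9) == 9 (from h1): the combination is
# always at least as informed as each component.
```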

Improving A*: memory-bounded heuristic search
- Iterative-deepening A* (IDA*)
  - Uses the f-cost (g + h) rather than the depth as the cutoff
  - The new cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration; only the current path is kept in memory
  - Space complexity O(bd)
- Recursive best-first search (RBFS)
  - Best-first search using only linear space (Fig 4.5)
  - It replaces the f-value of each node along the path with the best f-value of its children (Fig 4.6)
  - Space complexity O(bd)
- Simplified memory-bounded A* (SMA*)
  - IDA* and RBFS use too little memory, causing excessive node regeneration
  - Expand the best leaf until memory is full
  - Then drop the worst leaf node (highest f-value), backing its value up to its parent
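A minimal sketch of the IDA* cutoff scheme described above, on an assumed toy graph. Only the current path is stored, which is where the O(bd) space bound comes from.

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: depth-first search bounded by an f-cost cutoff; each
    iteration restarts with the smallest f-cost that exceeded the
    previous cutoff."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                  # report how far we overshot
        if node == goal:
            return path
        smallest = float('inf')
        for succ, step in neighbors(node):
            if succ in path:          # no cycles along the current path
                continue
            result = dfs(path + [succ], g + step, bound)
            if isinstance(result, list):
                return result
            smallest = min(smallest, result)
        return smallest

    bound = h(start)
    while True:
        result = dfs([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float('inf'):
            return None               # no path exists
        bound = result                # next cutoff

graph = {'A': [('B', 1), ('C', 3)], 'B': [('C', 1)], 'C': []}
path = ida_star('A', 'C', lambda n: graph[n], lambda n: 0)
# With the trivial h = 0 this degenerates to iterative deepening on
# path cost and still finds the optimal route A, B, C (cost 2).
```

The price of the low memory use is the node regeneration the slide mentions: each iteration re-expands everything inside the previous contour.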

Local Search Algorithms and Optimization Problems
- Global and local optima (Fig 4.10): from the current state to the global maximum
- Hill climbing (maximization)
  - Well-known drawbacks (Fig 4.13): local maxima, plateaus, ridges
  - Random restarts
- Simulated annealing
  - Gradient descent (minimization)
  - Escapes local minima by controlled bouncing
- Local beam search
  - Keeps track of k states instead of just one
- Genetic algorithms
  - Selection, crossover, and mutation
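Steepest-ascent hill climbing is only a few lines. The objective below is a hypothetical one-dimensional function with a single peak, so the local-maximum drawback does not show up in this particular run.

```python
def hill_climb(state, value, successors):
    """Steepest-ascent hill climbing: move to the best neighbor
    until no neighbor improves on the current state, i.e. a
    (possibly local) maximum."""
    while True:
        best = max(successors(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

value = lambda x: -(x - 3) ** 2          # hypothetical objective, peak at x = 3
successors = lambda x: [x - 1, x + 1]    # integer neighborhood

peak = hill_climb(0, value, successors)  # climbs 0 -> 1 -> 2 -> 3 and stops
```

On a multi-peaked objective the same loop halts at whichever local maximum it reaches first; hence the random-restart remedy, and simulated annealing's occasional downhill moves.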

Online Search
- Offline search: computing a complete solution before acting
- Online search: interleaving computation and action
  - Solves exploration problems, where the states and actions are unknown to the agent
  - Good for domains where there is a penalty for computing too long, and for stochastic domains
- An example: a robot placed in a new building must explore it to build a map it can use for getting from A to B

Online search problems
- An agent knows (e.g., Fig 4.18):
  - Actions(s) in state s
  - The step-cost function c(s, a, s')
    - c(s, a, s') cannot be used until the agent knows that s' is the outcome
    - To learn c(s, a, s'), action a must actually be tried
  - Goal-Test(s)
- Others: memory of states visited, and an admissible heuristic from the current state to the goal state
- Objective: reach a goal state while minimizing cost

Measuring its performance
- Competitive ratio: the true path cost over the path cost the agent would incur if it knew the search space in advance
- The best achievable competitive ratio can be 1
- If some actions are irreversible, the agent may reach a dead end (Fig 4.19(a))
- An adversary argument (Fig 4.19(b))
- No bounded competitive ratio can be guaranteed if there are paths of unbounded cost

Online search agents
- The agent can expand only a node that it physically occupies, so it should expand nodes in a local order
- Online depth-first search (Fig 4.20)
  - Backtracking requires that actions are reversible
- Hill-climbing search keeps one current state in memory
  - It can get stuck in a local minimum, and random restart does not work here
  - A random walk instead selects at random one of the available actions from the current state; it can be very slow (Fig 4.21)
  - Augmenting hill climbing with memory rather than randomness is more effective
- Learning real-time agent (Fig 4.22)
  - H(s) is updated as the agent gains experience
  - Encourages the agent to explore new paths
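A minimal sketch of the H(s)-updating idea of a learning real-time (LRTA*-style) agent. For illustration the successor function is assumed to be known up front; a real online agent discovers c(s, a, s') only by acting, as the previous slide noted.

```python
def lrta_step(s, H, neighbors, h):
    """One agent step: estimate each neighbor's cost-to-goal,
    record the best estimate as the new H(s), then move to that
    neighbor. H holds learned estimates; h gives the initial
    (admissible) estimate for states not seen yet."""
    def estimate(succ, step):
        return step + H.get(succ, h(succ))
    succ, step = min(neighbors(s), key=lambda sc: estimate(*sc))
    H[s] = estimate(succ, step)   # learned estimate for the current state
    return succ

# Assumed corridor 0-1-2-3 with goal 3 and unit step costs.
corridor = {0: [(1, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 1)], 3: []}
h = lambda s: 3 - s               # admissible initial estimate

H, s, steps = {}, 0, 0
while s != 3:
    s = lrta_step(s, H, corridor.get, h)
    steps += 1
# The rising H values of visited states steer the agent away from
# already-explored directions, so it walks straight to the goal.
```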

Summary
- Heuristics are the key to reducing search costs
  - f(n) = g(n) + h(n); understand its variants
- A* is complete, optimal, and optimally efficient among all optimal search algorithms, but...
- Iterative improvement algorithms are memory efficient, but...
- Local search: there is a cost associated with it
- Online search is different from offline search
  - Mainly for exploration problems