Informed Search Methods


Informed Search Methods Chapter 4 Fall 2009 Copyright, 1996 © Dale Carnegie & Associates, Inc.

What we'll learn
- Informed search algorithms are more efficient than uninformed search in most cases
- What informed search methods are
- How to use problem-specific knowledge
- How to optimize a solution
CSE 471/598 by H. Liu

Best-First Search
- Evaluation function: gives a measure of which node to expand next
- Minimizing the path cost so far, g(n), a true cost: expands nodes based on the past
- Minimizing the estimated cost to reach a goal: greedy search
  - At node n, use a heuristic function h(n)
  - An example is straight-line distance between cities (Fig 4.1), on the simple Romania map
  - Finding the route using greedy search: example (Fig 4.2)
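The greedy strategy above can be sketched in a few lines. The map below is an abbreviated, illustrative fragment in the spirit of the Romania example (Fig 4.1); treat the distances and h-values as assumptions of this sketch rather than authoritative figures.

```python
import heapq

# Partial, one-directional road map (illustrative; dead-end cities omitted)
graph = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Bucharest': 211},
    'Rimnicu': {'Pitesti': 97},
    'Pitesti': {'Bucharest': 101},
    'Timisoara': {}, 'Zerind': {}, 'Bucharest': {},
}
# Straight-line distances to Bucharest: the heuristic h(n)
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(h[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in graph[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that greedy search follows h alone and returns the Fagaras route, which is not the cheapest; that motivates A* below.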

Best-first search (2)
- h(n) is independent of the path cost g(n)
- Minimizing the total path cost: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n
- Admissible heuristic function: h never overestimates the true cost; such an h is optimistic
- What is the most useless admissible h? (h(n) = 0 never overestimates, but gives no guidance)

A* Search
- How it works (Fig 4.3)
- Characteristics of A*
  - Monotonicity (consistency): h is nondecreasing along any path
    - How to check: the triangle inequality, h(n) <= c(n, a, n') + h(n')
    - Tree search is optimal with an admissible h; graph search also needs consistency
  - Contours (Fig 4.4): from circles to ovals (ellipses)
  - Optimality of A* (proof below)
  - Completeness of A* (Fig 4.4, contours)
  - Complexity of A* (time and space): for most problems, the number of nodes within the goal contour is still exponential in the length of the solution
- Proof sketch of optimality: suppose A* selects a suboptimal goal (cost greater than the optimal cost C*) for expansion; take a frontier node n along the optimal path; since h is admissible, f(n) <= C* < f(suboptimal goal), so A* would have chosen n first, a contradiction.
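A minimal A* sketch on the same illustrative map fragment used above (the numbers are assumptions of the sketch, not authoritative). The only change from greedy search is ordering the frontier by f(n) = g(n) + h(n):

```python
import heapq

# Same illustrative map fragment and straight-line-distance heuristic
graph = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Bucharest': 211},
    'Rimnicu': {'Pitesti': 97},
    'Pitesti': {'Bucharest': 101},
    'Timisoara': {}, 'Zerind': {}, 'Bucharest': {},
}
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

def astar(start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for nbr, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

print(astar('Arad', 'Bucharest'))
# -> (['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'], 418)
```

Unlike greedy search, A* prefers the Rimnicu-Pitesti route (cost 418) over the Fagaras route (cost 450), because f accounts for the path cost already paid.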

Improving A*: memory-bounded heuristic search
- Iterative-deepening A* (IDA*)
  - Uses the f-cost (g + h) rather than the depth as the cutoff
  - The cutoff value is the smallest f-cost of any node that exceeded the cutoff on the previous iteration; only this value needs to be kept between iterations
  - Space complexity O(bd)
- Recursive best-first search (RBFS)
  - Best-first search using only linear space (Fig 4.5)
  - Replaces the f-value of each node along the path with the best f-value of its children (Fig 4.6)
  - Space complexity O(bd), but with excessive node regeneration
- Simplified memory-bounded A* (SMA*)
  - IDA* and RBFS use too little memory, hence the excessive node regeneration
  - Expands the best leaf until memory is full
  - Then drops the worst leaf node (highest f-value), backing its value up to the parent
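The IDA* idea (depth-first search inside successive f-cost contours) can be sketched as follows, reusing the same illustrative map fragment and heuristic as before:

```python
# Same illustrative map fragment and straight-line-distance heuristic
graph = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Bucharest': 211},
    'Rimnicu': {'Pitesti': 97},
    'Pitesti': {'Bucharest': 101},
    'Timisoara': {}, 'Zerind': {}, 'Bucharest': {},
}
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

def ida_star(start, goal):
    """Depth-first search bounded by an f-cost cutoff; the next cutoff is
    the smallest f that exceeded the current one, so memory stays O(bd)."""
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None               # report the f that broke the cutoff
        if node == goal:
            return f, path
        nxt = float('inf')
        for nbr, cost in graph[node].items():
            if nbr not in path:          # avoid cycles along this path
                t, found = dfs(nbr, g + cost, bound, path + [nbr])
                if found:
                    return t, found
                nxt = min(nxt, t)
        return nxt, None

    bound = h[start]
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found:
            return found
        if bound == float('inf'):
            return None                  # no solution

print(ida_star('Arad', 'Bucharest'))
# Finds the same optimal route as A*, using only linear memory
```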

Different Search Strategies
- Uniform-cost search: minimize the path cost so far, g(n)
- Greedy search: minimize the estimated cost to the goal, h(n)
- A*: minimize the total path cost, f(n) = g(n) + h(n)
- Time and space issues of A*: it usually runs out of space long before it runs out of time
- Hence the importance of designing good heuristic functions

Heuristic Functions
- An example: the 8-puzzle (Fig 4.7)
- How simple can a heuristic be? Each tile's distance to its correct position, using Manhattan distance
- What is a good heuristic?
  - Effective branching factor close to 1 (why?)
  - Value of h not too large: must be admissible (why?)
  - Not too small: ineffective, contours stretch from ovals back toward circles (expanding all nodes with f(n) < f*)
- Goodness measures: number of nodes expanded and effective branching factor (Fig 4.8)
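The two classic 8-puzzle heuristics can be written compactly. States are 9-tuples in row-major order with 0 for the blank; the scrambled state below is a standard textbook example, used here for illustration:

```python
# Goal: blank in the top-left corner, tiles 1-8 in order (an assumed convention)
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state):
    """h1: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    """h2: sum over tiles of |row - goal row| + |col - goal col|."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)                       # tile t's goal index
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

state = (7, 2, 4, 5, 0, 6, 8, 3, 1)             # a scrambled example state
print(misplaced_tiles(state), manhattan(state))  # -> 8 18
```

Both are admissible: every move changes one tile's position by one square, so no move can reduce either count by more than 1. Since each misplaced tile contributes at least 1 to the Manhattan sum, h2 >= h1, i.e., h2 dominates h1.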

Domination translates directly into efficiency
- A larger h means a smaller effective branching factor
- If h2 >= h1 (both admissible), is h2 always at least as good as h1? Proof idea: A* expands every node with h(n) < C* - g(n), so h1 <= h2 <= C* - g implies every node expanded under h2 is also expanded under h1 (C* is the optimal cost; being better means expanding fewer nodes)
- Inventing heuristic functions: an important component of applying A*
  - One way is to work on relaxed problems: simplify the problem by removing some constraints
- Discussion: how can we work collaboratively on a project while individual effort is still recognized?

8-puzzle revisited
- Definition: a tile can move from A to B if A is horizontally or vertically adjacent to B and B is blank
- Relaxation by removing one or both conditions:
  - A tile can move from A to B if A is adjacent to B (yields the Manhattan-distance heuristic)
  - A tile can move from A to B if B is blank
  - A tile can move from A to B (yields the misplaced-tiles heuristic)
- Deriving a heuristic from the solution cost of a subproblem (Fig 4.9)

Feature selection and combination
- If we have admissible heuristics h1, ..., hm and none dominates the others, we can use, for each node n, h(n) = max(h1(n), ..., hm(n))
- Feature selection: use only relevant features, e.g., "number of misplaced tiles"
- The cost of computing the heuristic should not exceed the cost of expanding a node; otherwise, we need to rethink the heuristic
- Learning heuristics from experience: each optimal solution to the 8-puzzle provides a learning example
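Combining admissible heuristics by taking their pointwise maximum is a one-liner. The two heuristics below are toy functions on integers, invented purely for illustration:

```python
def max_heuristic(*hs):
    """If every h_i is admissible, h(n) = max_i h_i(n) is admissible too,
    and dominates each individual h_i."""
    def h(n):
        return max(hi(n) for hi in hs)
    return h

# Toy heuristics (illustrative only); neither dominates the other
h1 = lambda n: n // 2
h2 = lambda n: n - 3 if n > 3 else 0

h = max_heuristic(h1, h2)
print(h(10))  # max(5, 7) -> 7
print(h(4))   # max(2, 1) -> 2
```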

Local Search Algorithms and Optimization Problems
- Sometimes the path to the goal constitutes the solution; sometimes the path to the goal is irrelevant (e.g., 8-queens)
- Local search algorithms operate on a single current state and generally move only to neighbors of that state; the paths followed by the search are not retained
- Key advantages: little memory use; can find reasonable solutions in large or infinite state spaces where systematic search is not suitable
- Global and local optima (Fig 4.10): from the current state to the global maximum

Some local-search algorithms
- Hill climbing (maximization) / gradient descent (minimization)
  - Well-known drawbacks (Fig 4.13): local maxima, plateaus, ridges
  - Remedy: random restarts
- Simulated annealing
  - Escapes local minima by controlled bouncing
- Local beam search
  - Keeps track of k states instead of just one
  - Is it the same as k random restarts of hill climbing? No: it keeps the best k states overall at each step, so the k searches share information
- Genetic algorithms
  - Selection, crossover, and mutation
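Hill climbing with random restarts can be sketched on 8-queens, where the path is irrelevant and only the final state matters. This is a minimal steepest-ascent sketch; the state encoding (one queen per column, value = row) is an assumed convention:

```python
import random

N = 8  # board size

def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    return sum(1 for i in range(N) for j in range(i + 1, N)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    """Steepest-ascent: move one queen within its column if that strictly
    reduces conflicts; stop at a local minimum (possibly a solution)."""
    while True:
        best, best_cost = state, conflicts(state)
        for col in range(N):
            for row in range(N):
                if row != state[col]:
                    s = state[:col] + (row,) + state[col + 1:]
                    c = conflicts(s)
                    if c < best_cost:
                        best, best_cost = s, c
        if best == state:
            return state            # stuck: local minimum or solved
        state = best

def random_restart(seed=0):
    """Restart from random states until a conflict-free board is found."""
    rng = random.Random(seed)
    while True:
        state = tuple(rng.randrange(N) for _ in range(N))
        result = hill_climb(state)
        if conflicts(result) == 0:
            return result

print(random_restart(seed=1))  # an 8-tuple with zero attacking pairs
```

Each individual climb often gets stuck (the slide's local maxima and plateaus), but restarts succeed quickly because each climb is cheap.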

Online Search
- Offline search: computing a complete solution before acting
- Online search: interleaving computation and action
- Solves exploration problems where the states and actions are unknown to the agent
- Good for domains where there is a penalty for computing too long, and for stochastic domains
- An example: a robot is placed in a new building and must explore it to build a map it can use to get from A to B
- Any additional examples? Please send me any you find.

Online search problems
- The agent knows (e.g., Fig 4.18):
  - Actions(s): the actions available in state s
  - The step-cost function c(s, a, s'): it cannot be used until the agent knows that s' is the outcome; to learn c, the action a must actually be tried
  - Goal-Test(s)
- Optionally: memory of the states visited, and an admissible heuristic from the current state to the goal state
- Objective: reach a goal state while minimizing cost

Measuring its performance
- Competitive ratio: the cost of the path actually traveled divided by the path cost if the agent knew the search space in advance; in the best case it is 1
- If some actions are irreversible, the agent may reach a dead end (Fig 4.19(a))
- An adversary argument (Fig 4.19(b)) shows no algorithm can avoid dead ends in every state space
- No bounded competitive ratio can be guaranteed if there are paths of unbounded cost

Online search agents
- An agent can expand only the node it physically occupies, so it should expand nodes in a local order
- Online depth-first search (Fig 4.20)
  - Backtracking requires that actions be reversible
- Online hill-climbing search keeps one current state in memory
  - It can get stuck in a local minimum
  - Random restart does not work here: the agent would have to teleport itself to a random start position, which it cannot do
  - A random walk instead selects one of the available actions from the current state at random; it can be very slow (Fig 4.21)
  - Augmenting hill climbing with memory rather than randomness is more effective
- Learning real-time agent (Fig 4.22)
  - H(s) is updated as the agent gains experience
  - This encourages the agent to explore new paths
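The memory-augmented idea in the last bullet can be sketched as a simplified learning real-time agent: keep a table H(s) of cost-to-go estimates, update the current state's entry to the best one-step lookahead value, and move greedily on the learned H. This is an illustrative simplification (unit step costs assumed), not the book's exact Fig 4.22 pseudocode:

```python
def lrta_sketch(neighbors, h0, start, goal, max_steps=100):
    """Simplified learning real-time agent: H[s] starts at h0(s) and is
    raised to min over successors of (step cost + H[s']) as the agent moves."""
    H = {}                                   # learned cost-to-go estimates
    s, path = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return path, H
        def score(t):                        # estimated cost through t
            return 1 + H.get(t, h0(t))       # unit step costs (assumption)
        H[s] = min(score(t) for t in neighbors(s))
        s = min(neighbors(s), key=score)     # move to the best-looking neighbor
        path.append(s)
    return path, H                           # gave up after max_steps

# Toy 1-D corridor 0-1-2-3-4 with the goal at 4; h0 = exact distance to goal
nbrs = lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 4]
h0 = lambda s: 4 - s
path, H = lrta_sketch(nbrs, h0, 0, 4)
print(path)  # with an exact h0 the agent walks straight: [0, 1, 2, 3, 4]
```

With a misleading h0, the H updates would gradually raise the estimates of dead-end states, steering the agent toward unexplored paths, which is exactly the "memory rather than randomness" point above.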

Summary
- Heuristics are the key to reducing search costs: f(n) = g(n) + h(n); understand the variants
- A* is complete, optimal, and optimally efficient among all optimal search algorithms, but it usually runs out of space
- Iterative-improvement (local search) algorithms are memory-efficient, but they can get stuck in local optima; there is a cost associated with escaping them
- Online search is different from offline search; it is mainly for exploration problems
CSE 471/598 by H. Liu