Informed Search and Exploration
Vilalta & Eick
Outline: Search Strategies, Heuristic Functions, Local Search Algorithms


Introduction
Informed search strategies:
- use problem-specific knowledge
- can find solutions more efficiently than search strategies that do not use domain-specific knowledge

Greedy Best-First Search
- Expand the node with the lowest evaluation function value f(n).
- f(n) estimates the distance to the goal.
- Simplest case: f(n) = h(n), the estimated cost of the cheapest path from node n to the goal. h is called the heuristic function.

[Figure 4.2]

Best-First Search
- Resembles depth-first search: follows the most promising path.
- Not optimal.
- Not complete.

Best First Search
Inputs: a state evaluation function f (any state evaluation function), an initial state, and a goal-state test.
Output: fail, or a path between a goal state and the initial state.
Data structures:
- Open: contains states that have not yet been expanded.
- Close: contains already-visited states.
The algorithm repeatedly expands the best state according to f until Open is empty, a goal state has been found, or it runs out of time or storage. Many versions and variations of best-first search exist; A* is the most popular, and greedy best-first search is another specific version.

Best-First-Search Pseudo-Code

Best-First-Search(Start, Goal_test, f)
  insert(Start, Open)
  repeat forever {
    if empty(Open) then return 'fail'
    remove 'best' Node from Open using f
    insert(Node, Close)
    if Goal_test(Node) then return path from Node to initial state
    for each Child of Node do {
      if Child not in Close then {
        Child.previous := Node
        insert(Child, Open)
      }
    }
  }
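As a minimal sketch (not part of the slides; the function names and the heap-based Open list are my choices), the pseudocode above could be rendered in Python as:

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search, following the pseudocode above.
    Returns the path from the start state to a goal, or None ('fail')."""
    open_list = [(f(start), start)]         # Open: states not yet expanded
    previous = {start: None}                # visited states + parent pointers
    while open_list:                        # empty(Open) -> fail
        _, node = heapq.heappop(open_list)  # remove 'best' node using f
        if goal_test(node):                 # rebuild the path back to Start
            path = []
            while node is not None:
                path.append(node)
                node = previous[node]
            return path[::-1]
        for child in successors(node):
            if child not in previous:       # if Child not in Close
                previous[child] = node      # Child.previous := Node
                heapq.heappush(open_list, (f(child), child))
    return None
```

One small design choice: states are marked as visited when they are first generated rather than when expanded, which avoids duplicate entries in the Open list but is otherwise equivalent to the pseudocode for this sketch.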

A* Search
Uses graph search with the evaluation function f(n) = g(n) + h(n), where
- g(n) = path cost from the root to node n
- h(n) = estimated cost of the cheapest path from node n to the goal
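A* is thus best-first search ordered by f(n) = g(n) + h(n). A hedged sketch (the toy graph in the usage note and all names are invented for illustration, not taken from the slides):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search with f(n) = g(n) + h(n).
    neighbors(n) yields (successor, step_cost) pairs; h(n) is an
    admissible estimate of the cheapest cost from n to the goal.
    Returns (cost, path) or None if the goal is unreachable."""
    g = {start: 0}                          # cost of best known path to n
    previous = {start: None}
    open_list = [(h(start), start)]         # ordered by f(n) = g(n) + h(n)
    while open_list:
        _, n = heapq.heappop(open_list)
        if n == goal:                       # rebuild the cheapest path
            path = []
            while n is not None:
                path.append(n)
                n = previous[n]
            return g[goal], path[::-1]
        for m, cost in neighbors(n):
            if g[n] + cost < g.get(m, float('inf')):
                g[m] = g[n] + cost          # found a cheaper path to m
                previous[m] = n
                heapq.heappush(open_list, (g[m] + h(m), m))
    return None
```

For example, on a small four-node graph with an admissible h, the search returns the cheapest path even when a greedy choice would be more expensive.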

[Figure 4.3]

A* Search: Optimality
A* is optimal if the heuristic is admissible.
Admissible: h never overestimates the cost to reach a goal from a given state.

A* Search: Optimality (proof sketch)
Assume the optimal solution cost is C*.
Suppose a suboptimal goal G2 appears on the fringe. Since h(G2) = 0 and g(G2) > C*: f(G2) = g(G2) + h(G2) > C*.
Let n be an unexpanded fringe node on an optimal path. Since h is admissible: f(n) = g(n) + h(n) <= C*.
Therefore f(n) <= C* < f(G2), so A* expands n before G2 and never returns the suboptimal goal first.

A* Search: Completeness
- A* is complete: it expands nodes in "bands" of increasing f(n); within each band f(n) < C*, and eventually the "goal contour" is reached.
- A* is optimally efficient, but in most cases the number of expanded nodes still grows exponentially.

[Figure: contours of equal f-cost]

Data Structures of Expansion Search
- Search graph: as discussed earlier.
- Open list: the set of states to be expanded.
- Close list: the set of states that have already been expanded.
Many implementations do not use a close list (e.g., the version of expansion search in our textbook): this creates potential overhead through looping, but saves a lot of storage.

Problem: Expansion Search Algorithms Frequently Run Out of Space
Possible solutions:
- Restrict the search space, e.g. introduce a depth bound.
- Limit the number of states in the open list:
  - Local beam search: keep a maximal number of elements in the open list and discard the states with the highest f-values.
  - SMA* and MA* combine the previous idea with other ideas.
- RBFS: mimics depth-first search, but backtracks if the current path is not promising and a better path exists. Advantage: limited size of the open list; disadvantage: excessive node regeneration.
- IDA*: iterative deepening in which each iteration's cutoff is the smallest f-cost of any node that exceeded the previous iteration's cutoff.

Local Beam Search
i. Keep track of the k best states.
ii. Generate all successors of these k states.
iii. If a goal is reached, stop.
iv. Otherwise, select the best k successors and repeat.
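The four steps above can be sketched directly; this is an illustration under my own naming (`value` is a state-evaluation function to maximize), not code from the course:

```python
def local_beam_search(initial_states, successors, value, goal_test,
                      max_iters=100):
    """Local beam search over the k = len(initial_states) best states."""
    beam = list(initial_states)                 # i. track the k best states
    k = len(beam)
    for _ in range(max_iters):
        for s in beam:
            if goal_test(s):                    # iii. stop on a goal
                return s
        pool = [c for s in beam for c in successors(s)]   # ii. all successors
        if not pool:
            break
        beam = sorted(pool, key=value, reverse=True)[:k]  # iv. keep best k
    return max(beam, key=value)                 # best state found so far
```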

Informed Search and Exploration
Search Strategies, Heuristic Functions, Local Search Algorithms

8-Puzzle Heuristics
Common candidates:
- F1: the number of misplaced tiles
- F2: the sum of the Manhattan distances from each tile to its goal position
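Both candidates are cheap to compute. A sketch, assuming states are encoded as tuples of nine tile numbers with 0 for the blank (the encoding is my assumption, not the slides'):

```python
def misplaced_tiles(state, goal):
    """F1: count tiles (excluding the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """F2: sum of horizontal + vertical board distances of each tile
    from its goal position (blank excluded)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += (abs(idx // width - goal_idx // width)
                  + abs(idx % width - goal_idx % width))
    return total
```

Both are admissible for the 8-puzzle, and F2 dominates F1 (it is always at least as large), which generally makes it the better A* heuristic.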

[Figure]

How to Obtain Heuristics?
- Ask the domain expert (if there is one).
- Solve example problems and generalize your experience about which operators are helpful in which situations (particularly important for state-space search).
- Try to develop sophisticated evaluation functions that measure the closeness of a state to a goal state (particularly important for state-space search).
- Run your search algorithm with different parameter settings, trying to determine which settings are "good" for solving a particular class of problems.
- Write a program that selects "good" parameter settings based on problem characteristics, relying on machine learning (frequently very difficult).

Informed Search and Exploration
Search Strategies, Heuristic Functions, Local Search Algorithms

Local Search Algorithms
- If the path to the goal does not matter, we can use local search.
- Local search algorithms keep a single current state and move only to neighbors of that state.
Advantages:
- use very little memory
- often find reasonable solutions

Local Search Algorithms
- It helps to view the state space as a landscape in which we look for the global maximum (or minimum).
- Complete search: always finds a goal.
- Optimal search: finds the global maximum or minimum.

[Figure 4.10]

Hill Climbing
- Moves in the direction of increasing value.
- Terminates when it reaches a "peak".
- Does not look beyond immediate neighbors (variant: use a definition of a neighborhood; more later).

Hill Climbing
Can get stuck for several reasons:
a. local maxima
b. ridges
c. plateaus
Variants:
- stochastic hill climbing (more details later): choose randomly among the uphill moves
- first-choice hill climbing: generate successors randomly until one is better than the current state
- random-restart hill climbing: run hill climbing multiple times from different initial states
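Basic hill climbing and the random-restart variant can be sketched as follows (an illustration with my own naming, not code from the course; `successors` is assumed to return at least one neighbor):

```python
def hill_climb(state, successors, value):
    """Move to the best neighbor as long as it strictly improves."""
    while True:
        best = max(successors(state), key=value)
        if value(best) <= value(state):   # a peak (or plateau edge): stop
            return state
        state = best

def random_restart_hill_climb(random_state, successors, value, restarts=10):
    """Random-restart variant: rerun hill climbing from several
    initial states produced by random_state() and keep the best result."""
    results = [hill_climb(random_state(), successors, value)
               for _ in range(restarts)]
    return max(results, key=value)
```

On a landscape with two peaks, a single run started in the wrong basin gets stuck on the local maximum, while restarts from varied initial states recover the global one.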

Simulated Annealing
- Instead of picking the best move, pick a move at random.
- If it improves the situation, take the step.
- Otherwise accept the move with some probability.
- The probability decreases with the "badness" of the step,
- and also decreases as the temperature goes down.
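A minimal sketch of the loop just described, assuming we maximize `value` and accept a worsening move with probability e^(dE/T) (the names and the `schedule` callback are my choices):

```python
import math
import random

def simulated_annealing(state, successors, value, schedule, steps=2000):
    """Pick a random move; always accept improvements, otherwise
    accept with probability exp(dE / T), which shrinks both as the
    step gets worse and as the temperature T goes down."""
    for t in range(steps):
        T = schedule(t)
        if T <= 0:                        # temperature exhausted: stop
            break
        nxt = random.choice(successors(state))
        dE = value(nxt) - value(state)    # > 0 means an improvement
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt
    return state
```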


Example of a Simulated Annealing Schedule
T = f(t) = (2000 - t)/2000; the algorithm runs for 2000 iterations.
Assume dE = -1; then a downward move is accepted with probability e^(dE/T):
- t = 0: probability e^-1
- t = 1000: probability e^-2
- t = 1500: probability e^-4
- t = 1999: probability e^-2000
Remark: if dE = -2, downward moves are less likely to be accepted than when dE = -1 (e.g., for t = 1000 a downward move would be accepted with probability e^-4).
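Assuming the schedule T(t) = (2000 - t)/2000 (the denominator matches all of the probabilities quoted on this slide), the acceptance probabilities can be checked directly:

```python
import math

def accept_prob(dE, t, schedule=lambda t: (2000 - t) / 2000):
    """Probability of accepting a downward move of size dE < 0
    at iteration t, under T = (2000 - t)/2000."""
    return math.exp(dE / schedule(t))
```

For instance, accept_prob(-1, 1000) equals e^-2 (about 0.135), and doubling the badness to dE = -2 at the same iteration drops the probability to e^-4.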
