Informed Search: include problem-specific knowledge to conduct the search efficiently and find the solution.

Informed Search Methods

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence
  Queuing-Fn ← a function that orders nodes by EVAL-FN
  return GENERAL-SEARCH(problem, Queuing-Fn)
end

Strategy: the order of node expansion is a function of EVAL-FN. What evaluation functions can we choose? This is not truly "best-first"; it can only come close with a proper EVAL-FN, since EVAL-FN can be incorrect and send the search off in the wrong direction.

Informed Search Methods

Different evaluation functions lead to different best-first search algorithms:
Uniform Cost Search: EVAL-FN = g(n), the cost from the start node to the current node.
Greedy Search: EVAL-FN = h(n), the estimated cost from the current node to the goal node.
A* Search: EVAL-FN = g(n) + h(n).
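
To make this concrete, here is a minimal Python sketch of the scheme: one best-first routine parameterized by the evaluation function, from which all three algorithms fall out. The graph interface and names are illustrative assumptions, not from the slides.

import heapq

def best_first_search(start, goal, neighbors, eval_fn):
    # Generic best-first search: always expand the frontier node with
    # the lowest eval_fn(g, state), where g is the path cost so far.
    frontier = [(eval_fn(0, start), 0, start, [start])]
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in neighbors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (eval_fn(g2, nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# The three algorithms differ only in the evaluation function:
# uniform cost: eval_fn = lambda g, s: g          (EVAL-FN = g(n))
# greedy:       eval_fn = lambda g, s: h(s)       (EVAL-FN = h(n))
# A*:           eval_fn = lambda g, s: g + h(s)   (EVAL-FN = g(n) + h(n))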

Best First Search (figure: an example best-first search)

Informed Search Methods: Greedy Search

Expand the node that appears closest to the goal. This closeness must be estimated; it cannot be determined exactly (if we could compute it exactly, we could find the shortest path to the goal directly).

Heuristic functions estimate the cost to the goal from the current node:
h(n) = estimated cost of the cheapest path from the current state (node n) to the goal node.
Here EVAL-FN = h(n).

Greedy Search Heuristic: Straight-Line Distance (h_SLD)

Greedy Search: apply the straight-line-distance heuristic to go from Oradea to Bucharest.
(Figure: search tree expanding Oradea, Zerind, Sibiu, Fagaras, and Bucharest.)
Greedy search follows Oradea → Sibiu → Fagaras → Bucharest; the actual path cost based upon distance is 151 + 99 + 211 = 461. Is this the shortest distance?

Greedy Search

Greedy Search: is this the shortest path from Oradea to Bucharest?
(Figure: search tree expanding Oradea, Zerind, Sibiu, Fagaras, Rimnicu Vilcea, and Pitesti.)
No: going through Rimnicu Vilcea leads to the shortest distance, Oradea → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest, with cost 151 + 80 + 97 + 101 = 429.
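
A small runnable sketch of this example (the code and function names are my own; road distances and straight-line estimates h_SLD are the standard values from the AIMA Romania map):

import heapq

# Road distances (km) for the relevant part of the AIMA Romania map.
roads = {
    'Oradea':         [('Zerind', 71), ('Sibiu', 151)],
    'Zerind':         [('Oradea', 71), ('Arad', 75)],
    'Arad':           [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
    'Sibiu':          [('Oradea', 151), ('Arad', 140), ('Fagaras', 99),
                       ('Rimnicu Vilcea', 80)],
    'Fagaras':        [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti':        [('Rimnicu Vilcea', 97), ('Bucharest', 101), ('Craiova', 138)],
    'Craiova':        [('Rimnicu Vilcea', 146), ('Pitesti', 138)],
    'Timisoara':      [('Arad', 118)],
    'Bucharest':      [('Fagaras', 211), ('Pitesti', 101)],
}

# Straight-line distance to Bucharest (the h_SLD heuristic).
h_sld = {'Oradea': 380, 'Zerind': 374, 'Arad': 366, 'Sibiu': 253,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100,
         'Craiova': 160, 'Timisoara': 329, 'Bucharest': 0}

def greedy_search(start, goal):
    frontier = [(h_sld[start], start, [start], 0)]   # ordered by h(n) only
    visited = set()
    while frontier:
        _, city, path, cost = heapq.heappop(frontier)
        if city == goal:
            return path, cost
        if city in visited:
            continue
        visited.add(city)
        for nxt, d in roads[city]:
            heapq.heappush(frontier, (h_sld[nxt], nxt, path + [nxt], cost + d))

path, cost = greedy_search('Oradea', 'Bucharest')
print(path, cost)   # ['Oradea', 'Sibiu', 'Fagaras', 'Bucharest'] 461

Greedy search commits to Fagaras because h_SLD(Fagaras) = 176 is less than h_SLD(Rimnicu Vilcea) = 193, even though the Rimnicu Vilcea route is 32 km shorter overall.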

Greedy Search: Similarities to Depth-First Search

Both tend to follow a single path to the goal. If greedy search encounters a dead end, it backs up; is the expansion of dead-end nodes a problem? Both are susceptible to false starts and can get caught in infinite loops. Depending on the heuristic, a large number of extra nodes may be generated.

Properties of Greedy Search: Is greedy search complete? How long does this method take to find a solution? How much memory does this method require? Is this method optimal?

Heuristic Functions

What are heuristic functions? "Heuristic" means to find or to discover. A heuristic estimates the cost to the goal from the current node; it is not an exact method. Heuristics generally improve only average-case performance; they do not improve worst-case performance.

A* Search

An improved best-first search is A* search. It expands nodes that have low cost, where cost combines two terms:
g(n): the cost so far to reach n from the start node.
h(n): the estimated cost to the goal from n (proximity to the goal).
The evaluation function f(n) = g(n) + h(n) is the estimated total cost of the path from start to goal going through node n.

A* Search

function A*-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, g + h)
end
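
A Python sketch of A* itself (again my own illustration; it can be run on the roads and h_sld tables from the greedy sketch above):

import heapq

def a_star(start, goal, neighbors, h):
    # A* search: expand the frontier node with the lowest
    # f(n) = g(n) + h(n); with an admissible h this returns an
    # optimal path if one exists.
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g > best_g.get(state, float('inf')):
            continue                             # stale queue entry
        for nxt, step in neighbors(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Run on the Romania tables from the greedy sketch above:
# path, cost = a_star('Oradea', 'Bucharest',
#                     lambda c: roads[c], lambda c: h_sld[c])
# -> ['Oradea', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 429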

A* Search: Oradea to Bucharest

A* search generates an optimal solution if the heuristic satisfies certain conditions.
(Figure: the A* search tree from Oradea, expanding Zerind, Sibiu, Arad, Fagaras, Rimnicu Vilcea, Craiova, and Pitesti on the way to Bucharest; each node is labelled with its g(n), h(n), and f(n) = g(n) + h(n) values.)

A* Search

A* uses an admissible heuristic: h(n) ≤ h*(n) for all nodes n in the state space, where h*(n) is the true cost of going from n to the goal state. Admissible heuristics never overestimate the distance to the goal.

Examples:
Path finding: straight-line distance, h_SLD.
Eight Puzzle: the number of misplaced tiles. A better heuristic: the sum of the Manhattan distances of each tile from its true location; this is also an underestimate.

One can prove that the A* algorithm is optimal: it always finds an optimal solution, if a solution exists.
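
To make the Eight Puzzle heuristics concrete, here is a small sketch of both; the flat-tuple state encoding, goal convention, and example state are my own illustrative choices:

# State: a 9-tuple read left-to-right, top-to-bottom; 0 is the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h_misplaced(state):
    # h1: number of tiles not in their goal position (blank excluded).
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h_manhattan(state):
    # h2: sum of Manhattan distances of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)                  # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Both are admissible: every misplaced tile needs at least one move (h1),
# and at least its Manhattan distance in moves (h2), so neither overestimates.
state = (7, 2, 4, 5, 0, 6, 8, 3, 1)          # an example scrambled state
print(h_misplaced(state), h_manhattan(state))   # -> 6 14 for this state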

A* Search

Desirable properties of the evaluation function: along any path from the start node (root), the value f(n) does not decrease. If this property holds, the heuristic function satisfies the monotonicity property. Monotonicity is linked to the consistency property.

Consistency: a heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').

If monotonicity is satisfied, there is an easy, intuitive proof of the optimality of A*.

Some conceptual questions

Is uniform cost search a form of A* search?
Given two heuristics h1(n) < h2(n) ≤ h*(n), what can we say about the performance of A1* versus A2*?

A* Search

Theorem: if h is admissible, and there is a path of finite cost from the start node n0 to the goal node, A* is guaranteed to terminate with a minimal-cost path to the goal.

A* Search

Lemma: at every step before A* terminates, there is always a node n*, yet to be expanded, with the following properties:
1) n* lies on an optimal path to the goal;
2) f(n*) ≤ f*(n0), where f*(n0) is the cost of an optimal path from the start node to the goal.
The proof of the lemma is by mathematical induction.

A* Search

Part of the proof: suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on the shortest path to the optimal goal G1. Since h(G2) = 0 and G2 is suboptimal, f(G2) = g(G2) > g(G1), while f(n) ≤ g(G1) by admissibility; hence f(n) < f(G2), and A* expands n before G2.
(Figure: start node, with n on the path to the optimal goal G1 and the suboptimal goal G2 off to the side.)

Optimality of A*

A* expands nodes in order of increasing f value (expanding all nodes with f(n) < C*). This gradually adds f-contours of nodes, much as breadth-first search adds layers: contour i contains all nodes with f = f_i, where f_i < f_(i+1). When a contour reaches the goal, f(n) = C*. A more accurate heuristic produces bands that stretch toward the goal state and are more narrowly focused around the optimal path.

Properties of A* Is A* an optimal search? What is the time complexity? What are the space requirements? Does A* find the best (highest quality) solution when there is more than one solution? What is the “informed” aspect of A* search?

Memory Bounded Search

Iterative deepening A* (IDA*) search is depth-first search with an f-cost limit rather than a depth limit. Each iteration expands all nodes inside the contour of the current f-cost (f = g + h). If a solution is found, stop; otherwise, look over the current contour to determine where the next contour lies.
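
A compact sketch of that loop (my own illustration; goal_test, successors, and h are whatever the problem supplies):

import math

def ida_star(start, goal_test, successors, h):
    # IDA*: repeated depth-first searches, each bounded by an f-cost
    # limit; the limit for the next iteration is the smallest f value
    # that exceeded the current contour.
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # report where the next contour lies
        if goal_test(node):
            return path
        next_bound = math.inf
        for child, step in successors(node):
            if child in path:             # avoid cycles along the current path
                continue
            result = dfs(path + [child], g + step, bound)
            if isinstance(result, list):
                return result             # solution found
            next_bound = min(next_bound, result)
        return next_bound

    bound = h(start)
    while True:
        result = dfs([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None                   # no solution exists
        bound = result                    # widen to the next f-contour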

IDA* vs. A*

IDA* generates roughly the same number of nodes as A* on problems like the 8-puzzle, but not always: on the traveling salesperson problem it can generate many more. Unlike A*, it is a memory-bounded search.

IDA* Performs well when the heuristic has only a few possible values.

IDA* Is IDA* complete? What is the time complexity of IDA*? What is the space complexity of IDA*? Does the method find the best (highest quality) solution when there is more than one solution?

More on Informed Heuristics

Some facts about the 8-puzzle:
Total number of arrangements = 9! = 362,880.
A typical solution takes about 22 steps.
Exhaustive search to depth 22 would look at 3^22 nodes, approximately 3.1 * 10^10 states.
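
These counts can be checked in a couple of lines (an illustrative check, not part of the original slides):

import math

print(math.factorial(9))   # 362880 possible tile arrangements
print(3 ** 22)             # 31381059609, about 3.1e10 nodes at depth 22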

8-Puzzle Heuristics

Two heuristics, for the example state shown in the figure:
h1(n) = number of misplaced tiles = 7
h2(n) = Σ (Manhattan distance of each tile from its true location) = 18

Heuristic Quality

Effective branching factor b*: the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes. For example, an A* search that finds a solution at depth 5 using 52 nodes requires a b* satisfying
53 = 1 + b* + (b*)^2 + (b*)^3 + (b*)^4 + (b*)^5,
which gives b* ≈ 1.92. The effective branching factor is useful for judging the overall usefulness of a heuristic by measuring b* on a small set of problems. Well-designed heuristics have a b* value close to 1.
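
Since the polynomial is monotone in b*, it is easy to solve numerically; a short illustrative sketch:

def effective_branching_factor(n_nodes, depth, tol=1e-6):
    # Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection.
    def tree_size(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_nodes)          # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))   # -> 1.92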

8-Puzzle Heuristics

Returning to the two heuristics:
h1(n) = number of misplaced tiles = 7
h2(n) = Σ (Manhattan distance of each tile from its true location) = 18

Typical search costs:
d = 14: IDS = more than 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Heuristics

Given h1(n) < h2(n) ≤ h*(n) for all n, A* with h2 never expands more nodes than A* with h1; we say h2 dominates h1. Recall that an admissible heuristic never overestimates the cost to the goal. It is therefore always better to use a heuristic h2(n) that dominates another heuristic h1(n), provided that h2(n) does not overestimate and that the computation time for h2(n) is not too long.

Heuristics

Defining a heuristic depends upon the particular problem domain; if the problem is a game, the heuristic is derived from the rules. Multiple heuristics can be combined into one heuristic, as in the sketch below:
h(n) = max{h1(n), ..., hx(n)}
Inductive learning can also be employed to develop a heuristic.
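
Because the maximum of admissible heuristics never overestimates if none of its arguments does, the combined heuristic is itself admissible and dominates each component. A minimal sketch (names are illustrative):

def combine_heuristics(*heuristics):
    # Combine admissible heuristics by taking their pointwise maximum;
    # the result is still admissible and dominates each component.
    return lambda state: max(h(state) for h in heuristics)

# e.g., for the 8-puzzle, using the two heuristics sketched earlier:
# h = combine_heuristics(h_misplaced, h_manhattan)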

Sudoku Heuristics Define at least one heuristic to incorporate into an A* search solution for Sudoku.

Local Search

Local search is applicable to problems in which the final solution is all that matters, rather than the entire path; for example, the eight-queens problem, where we only care about the final configuration, not how it came about. The search looks only at the local state and only moves to neighbors of that state. Paths to the current state are not maintained. Local search is not a systematic search method; the other methods we have discussed are systematic.

Local Search Advantages

Minimal memory usage, typically a constant amount. Local search can often find reasonable solutions for problems with large or continuous state spaces, to which systematic algorithms are not suited. It is very good for optimization problems, where we want to find the best possible state according to an objective function.

State Space Landscape

A state-space landscape has both a location (the current state) and an elevation. If elevation corresponds to heuristic cost, the aim is the global minimum; if elevation corresponds to an objective function, the aim is the global maximum.
(Figure: a one-dimensional landscape with the current state marked.)

Local Search

A complete local search always finds a goal if one exists. An optimal local search always finds a global minimum or maximum.

Hill Climbing Search

Hill climbing repeatedly moves in the direction of increasing value, and terminates when no neighbor has a higher value. Remember, there is no look-ahead: the algorithm knows only its immediate neighbors. If several successors tie for the highest value, the algorithm chooses among them at random. Hill climbing is also called greedy local search.

Hill Climbing Search

Advantage: it can often find a solution rapidly.
Disadvantage: it easily becomes stuck.
Local maxima: a peak that is higher than all of its neighbors but lower than the global maximum.
Ridges: a series of local maxima.
Plateaus: an area where the elevation function is flat. This can be resolved by allowing sideways moves, but the number of consecutive sideways moves is usually limited.
The algorithm's success depends heavily on the shape of the state-space landscape.

Hill Climbing Search

function HILL-CLIMBING(problem) returns a state that is a local maximum
  local variables: current, a node
                   neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
end
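
A minimal Python rendering of this loop, applied to the eight-queens problem mentioned earlier (complete-state formulation with one queen per column; the encoding and names are my own illustrative choices):

import random

def value(state):
    # Negated number of attacking queen pairs (0 means a solution).
    # state[c] is the row of the queen in column c.
    attacks = 0
    n = len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == c2 - c1
            attacks += same_row or same_diag
    return -attacks

def hill_climb(n=8):
    current = tuple(random.randrange(n) for _ in range(n))
    while True:
        # All states reachable by moving one queen within its column.
        neighbors = [current[:c] + (r,) + current[c + 1:]
                     for c in range(n) for r in range(n) if r != current[c]]
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current          # local maximum (possibly not a solution)
        current = best

state = hill_climb()
print(state, value(state))          # value 0 iff a valid 8-queens placement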

Variations Stochastic hill climbing (SHC) Randomly chooses from the uphill moves. Probability of selection can vary with steepness of move. First-Choice hill climbing Like SHC but randomly generates successors until one is generated that is better than the current state.

Variations

The hill climbing algorithms discussed thus far are incomplete. Random-restart hill climbing runs multiple hill climbing searches from randomly generated initial states; since it will eventually generate a goal state as an initial state, it is complete with probability approaching 1. Additional local search algorithms exist, such as simulated annealing and local beam search. A sketch of the restart wrapper follows.
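
A minimal sketch of random-restart hill climbing, assuming the hill_climb and value functions from the previous example (illustrative, not from the slides):

def random_restart(max_restarts=100):
    # Repeat hill climbing from fresh random states until a true
    # solution (value 0) is found or the restart budget is exhausted.
    best_state, best_value = None, float('-inf')
    for _ in range(max_restarts):
        state = hill_climb()                 # from the previous sketch
        v = value(state)
        if v > best_value:
            best_state, best_value = state, v
        if v == 0:                           # global maximum for 8-queens
            break
    return best_state

print(random_restart())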