Presentation on theme: "Informed Search Include specific knowledge to efficiently conduct the search and find the solution."— Presentation transcript:

1 Informed Search Informed search incorporates problem-specific knowledge to conduct the search efficiently and find the solution.

2 Informed Search Methods

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence
    Queuing-Fn <- a function that orders nodes by EVAL-FN
    return GENERAL-SEARCH(problem, Queuing-Fn)
end

The strategy is a function of the order of node expansion. Which evaluation functions can we choose? The method is not truly "best-first"; it can only come close with a well-chosen EVAL-FN. If EVAL-FN is inaccurate, the search can head off in the wrong direction.

3 Informed Search Methods Different evaluation functions lead to different best-first search algorithms:

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence
    Queuing-Fn <- a function that orders nodes by EVAL-FN
    return GENERAL-SEARCH(problem, Queuing-Fn)
end

Uniform Cost Search: EVAL-FN = g(n), the cost from the start node to the current node.
Greedy Search: EVAL-FN = h(n), the estimated cost from the current node to the goal node.
A* Search: EVAL-FN = g(n) + h(n).
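The three algorithms differ only in EVAL-FN, which can be made literal in code. The sketch below is illustrative (the graph, heuristic values, and all names are mine, not from the slides): one generic best-first search with a pluggable evaluation function.

```python
import heapq

def best_first_search(start, goal, neighbors, eval_fn):
    """Generic best-first search: eval_fn plays the role of EVAL-FN.

    neighbors(n) returns (successor, step_cost) pairs; the frontier is a
    priority queue ordered by eval_fn(node, g).
    """
    tie = 0  # tie-breaker so the heap never has to compare payloads
    frontier = [(eval_fn(start, 0), tie, start, 0, [start])]
    best_g = {}
    while frontier:
        _, _, node, g, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached at least as cheaply
        best_g[node] = g
        for succ, cost in neighbors(node):
            tie += 1
            g2 = g + cost
            heapq.heappush(frontier, (eval_fn(succ, g2), tie, succ, g2, path + [succ]))
    return None, float("inf")

# Hypothetical toy graph for demonstration only:
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
h = {"A": 2, "B": 1, "C": 0}

uniform_cost = lambda n, g: g         # EVAL-FN = g(n)
greedy       = lambda n, g: h[n]      # EVAL-FN = h(n)
a_star       = lambda n, g: g + h[n]  # EVAL-FN = g(n) + h(n)
```

Note how greedy search takes the direct edge A-C (it looks cheapest by h), while uniform-cost and A* find the cheaper detour through B.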

4 Best First Search (figure)

5 Informed Search Methods Greedy Search expands the node that appears closest to the goal. That distance must be estimated; it cannot be determined exactly. If we could calculate it exactly, we could find the shortest path to the goal directly. Heuristic functions provide the estimate: h(n) = estimated cost of the cheapest path from the current state (node n) to the goal node. For Greedy Search, EVAL-FN = h(n).

6 Greedy Search Heuristic: straight-line distance (h_SLD).

7 Greedy Search Apply the straight-line-distance heuristic to go from Oradea to Bucharest. From Oradea (h = 380), the neighbor with the smallest h is Sibiu (h = 253, versus Zerind at 374); from Sibiu, the smallest is Fagaras (h = 176); from Fagaras, Bucharest (h = 0). The actual path cost based upon road distance is 151 + 99 + 211 = 461. Is this the shortest distance?
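The traversal above can be reproduced with a short sketch (the road distances and h_SLD values are taken from the slides; the function and variable names are mine). Greedy search simply orders the frontier by h alone, ignoring path cost so far.

```python
import heapq

# Straight-line distance to Bucharest and road distances, as on the slides
# (only the subgraph shown on this slide).
H = {"Oradea": 380, "Zerind": 374, "Sibiu": 253, "Fagaras": 176, "Bucharest": 0}
ROADS = {
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Zerind": [("Oradea", 71)],
    "Sibiu": [("Oradea", 151), ("Fagaras", 99)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Bucharest": [("Fagaras", 211)],
}

def greedy_search(start, goal):
    """Greedy best-first: frontier ordered by h(n) alone."""
    frontier = [(H[start], start, 0, [start])]
    visited = set()
    while frontier:
        _, node, g, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for succ, dist in ROADS[node]:
            if succ not in visited:
                heapq.heappush(frontier, (H[succ], succ, g + dist, path + [succ]))
    return None, float("inf")
```

Running it from Oradea yields the slide's route through Sibiu and Fagaras with total cost 461.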

8 Greedy Search

9 Greedy Search Is this the shortest path from Oradea to Bucharest? No; going through Rimnicu Vilcea (h = 193) and Pitesti (h = 100) gives the shortest distance: 151 + 80 + 97 + 101 = 429.

10 Greedy Search Similarities to depth-first search: both tend to follow a single path to the goal, and if Greedy Search encounters a dead end, it backs up. Is the expansion of dead-end nodes a problem? Both are susceptible to false starts and can get caught in infinite loops. Depending on the heuristic, a large number of extra nodes may be generated.

11 Properties of Greedy search Is Greedy search Complete? How long does this method take to find a solution? How much memory does this method require? Is this method optimal?

12 Heuristic Functions What are heuristic functions? Heuristic: to find or to discover. A heuristic estimates the cost to the goal from the current node; it is not an exact method. Heuristics generally improve only average-case performance; they do not improve worst-case performance.

13 A* Search An improved best-first search is A* Search. Expand nodes that are cheap to reach: g(n) = cost so far to reach n from the start node. Choose nodes for expansion based on proximity to the goal: h(n) = estimated cost to the goal from n. The evaluation function f(n) = g(n) + h(n) is the estimated total cost of the path from start to goal going through node n.

14 A* Search

function A*-SEARCH(problem) returns a solution or failure
    return BEST-FIRST-SEARCH(problem, g + h)
end

15 A* Search - Oradea to Bucharest A* search generates an optimal solution if the heuristic satisfies certain conditions. Each frontier node carries g(n), h(n), and f(n) = g(n) + h(n):

Expand Oradea (f = 0 + 380 = 380): Zerind f = 71 + 374 = 445; Sibiu f = 151 + 253 = 404.
Expand Sibiu (404): Arad f = (151 + 140) + 366 = 657; Oradea f = (151 + 151) + 380 = 682; Fagaras f = (151 + 99) + 176 = 426; Rimnicu Vilcea f = (151 + 80) + 193 = 424.
Expand Rimnicu Vilcea (424): Craiova f = (231 + 146) + 160 = 537; Pitesti f = (231 + 97) + 100 = 428; Sibiu f = (231 + 80) + 253 = 564.
Expand Fagaras (426): Sibiu f = (250 + 99) + 253 = 602; Bucharest f = (250 + 211) + 0 = 461.
Expand Pitesti (428): Bucharest f = (328 + 101) + 0 = 429; Craiova f = (328 + 138) + 160 = 626.
Expand Bucharest (429): goal reached with the optimal cost 429.
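The trace above can be checked mechanically with a small A* sketch (road distances and h_SLD values are from the slides; the function and variable names are illustrative):

```python
import heapq

H = {"Oradea": 380, "Zerind": 374, "Arad": 366, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
     "Craiova": 160, "Bucharest": 0}
ROADS = {
    ("Oradea", "Zerind"): 71, ("Oradea", "Sibiu"): 151,
    ("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Rimnicu Vilcea", "Craiova"): 146, ("Pitesti", "Craiova"): 138,
    ("Pitesti", "Bucharest"): 101, ("Fagaras", "Bucharest"): 211,
}

def neighbors(n):
    """Roads are two-way: yield every town adjacent to n with its distance."""
    for (a, b), d in ROADS.items():
        if a == n:
            yield b, d
        elif b == n:
            yield a, d

def a_star(start, goal):
    """A*: frontier ordered by f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for succ, dist in neighbors(node):
            g2 = g + dist
            heapq.heappush(frontier, (g2 + H[succ], g2, succ, path + [succ]))
    return None, float("inf")
```

From Oradea this expands nodes in exactly the f-order of the slide (380, 404, 424, 426, 428) and returns the route through Rimnicu Vilcea and Pitesti at cost 429.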

16 A* Search A* uses an admissible heuristic, i.e., h(n) <= h*(n) for all nodes n in the state space, where h*(n) is the true cost of going from n to the goal state. Admissible heuristics never overestimate the distance to the goal. Examples: path finding, straight-line distance h_SLD; Eight Tiles, the number of misplaced tiles; a better heuristic, the sum of the Manhattan distances of each tile from its true location, which is also an underestimate. With an admissible heuristic, the A* algorithm can be proved optimal: it always finds an optimal solution, if a solution exists.

17 A* Search A desirable property of the evaluation function: along any path from the start node (root), the cost f(n) does not decrease. If this property is satisfied, the heuristic function satisfies the monotonicity property. Monotonicity is linked to the consistency property. Consistency: a heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n', i.e., h(n) <= c(n, a, n') + h(n'). If monotonicity is satisfied, there is an easy, intuitive proof of the optimality of A*.
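Consistency is easy to verify mechanically: check h(n) <= c(n, n') + h(n') across every edge, in both directions. A sketch, reusing the Romania data from the slides (the function name and edge encoding are my assumptions):

```python
def is_consistent(h, edges):
    """True iff h(n) <= cost + h(n') for every edge, in both directions.

    `edges` maps (n, n') pairs to step costs; each undirected road
    contributes two inequalities.
    """
    for (a, b), cost in edges.items():
        if h[a] > cost + h[b] or h[b] > cost + h[a]:
            return False
    return True

# Straight-line distances and road lengths as given on the slides.
H = {"Oradea": 380, "Zerind": 374, "Arad": 366, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
     "Craiova": 160, "Bucharest": 0}
ROADS = {("Oradea", "Zerind"): 71, ("Oradea", "Sibiu"): 151,
         ("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
         ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
         ("Rimnicu Vilcea", "Craiova"): 146, ("Pitesti", "Craiova"): 138,
         ("Pitesti", "Bucharest"): 101, ("Fagaras", "Bucharest"): 211}
```

Straight-line distance passes the check, as expected from the triangle inequality; inflating one h value (say, Pitesti to 300) would break it.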

18 Some conceptual questions Is uniform cost search a form of A* search? Given two heuristics with h1(n) < h2(n) <= h*(n), what can we say about the performance of A1* versus A2*?

19 A* Search Theorem: if h is admissible, and there is a path of finite cost from the start node n0 to the goal node, A* is guaranteed to terminate with a minimal-cost path to the goal.

20 A* Search Lemma: at every step before A* terminates, there is always a node n* yet to be expanded with the following properties: (1) n* is on an optimal path to the goal; (2) f(n*) <= f(n0). A proof of the lemma can be developed by mathematical induction.

21 A* Search Part of the proof: suppose some suboptimal goal G2 has been generated and is in the queue, and let n be an unexpanded node on the shortest path to the optimal goal G. Since h is admissible and h(G2) = 0, we have f(n) <= g(G) < g(G2) = f(G2), so A* expands n before it would ever select G2.

22 Optimality of A* A* expands nodes in order of increasing f value (f(n) < C*). This gradually adds f-contours of nodes, the way breadth-first search adds layers. Contour i contains all nodes with f = f_i, where f_i < f_{i+1}. When the contour containing the goal is reached, f(n) = C*. A more accurate heuristic creates bands that stretch toward the goal state and become more narrowly focused around the optimal path.

23 Properties of A* Is A* an optimal search? What is the time complexity? What are the space requirements? Does A* find the best (highest quality) solution when there is more than one solution? What is the “informed” aspect of A* search?

24 Memory Bounded Search Iterative deepening A* (IDA*) search: depth-first search with an f-cost limit rather than a depth limit. Each iteration expands all nodes inside the contour of the current f-cost (f = g + h). If a solution is found, stop; else, look over the current contour to determine where the next contour lies.
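The loop just described, a depth-first search cut off at an f-cost bound, with the bound raised each iteration to the smallest f that exceeded it, can be sketched as follows. The Romania data from the earlier slides is reused for the demonstration; all names are illustrative.

```python
import math

H = {"Oradea": 380, "Zerind": 374, "Arad": 366, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
     "Craiova": 160, "Bucharest": 0}
ROADS = {("Oradea", "Zerind"): 71, ("Oradea", "Sibiu"): 151,
         ("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
         ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
         ("Rimnicu Vilcea", "Craiova"): 146, ("Pitesti", "Craiova"): 138,
         ("Pitesti", "Bucharest"): 101, ("Fagaras", "Bucharest"): 211}

def neighbors(n):
    for (a, b), d in ROADS.items():
        if a == n:
            yield b, d
        elif b == n:
            yield a, d

def ida_star(start, goal):
    """IDA*: iterative deepening on f = g + h instead of depth."""
    def search(node, g, bound, path):
        f = g + H[node]
        if f > bound:
            return f, None          # report where the next contour lies
        if node == goal:
            return g, path
        smallest = math.inf
        for succ, dist in neighbors(node):
            if succ in path:        # avoid cycles on the current path
                continue
            t, found = search(succ, g + dist, bound, path + [succ])
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = H[start]
    while True:
        t, found = search(start, 0, bound, [start])
        if found is not None:
            return found, t
        if t == math.inf:
            return None, math.inf   # no solution within any bound
        bound = t                   # next f-contour
```

Only the current depth-first path is stored, so memory is linear in the path length; on this map the bound rises through 380, 404, 424, 426, 428, 429 before the optimal route is found.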

25 IDA* vs. A* IDA* often generates roughly the same number of nodes as A*, for example on the 8-puzzle problem, but not always: the traveling salesperson problem is a counterexample. Unlike A*, IDA* is a memory-bounded search.

26 (figure)

27 IDA* Performs well when the heuristic has only a few possible values.

28 IDA* Is IDA* complete? What is the time complexity of IDA*? What is the space complexity of IDA*? Does the method find the best (highest quality) solution when there is more than one solution?

29 More on Informed Heuristics Some facts about the 8-puzzle: total number of arrangements = 9! = 362,880. A typical solution may take 22 steps. Exhaustive search to depth 22 would look at 3^22 ~ 3.1 * 10^10 states.

30 8-Puzzle Heuristics Two heuristics: h1(n) = number of misplaced tiles = 7; h2(n) = sum of the Manhattan distances of each tile from its true location = 2+3+3+2+4+2+0+2 = 18.
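The specific board from the slide is not reproduced in the transcript, but both heuristics are easy to state in code. A sketch, assuming a row-major 3x3 tuple encoding with 0 as the blank (that encoding and the smaller demo board below are my choices, not the slide's):

```python
def misplaced_tiles(state, goal):
    """h1: count tiles (excluding the blank, 0) not on their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum of Manhattan distances of each tile from its goal square."""
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = where[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

# Illustrative board: the goal with the blank slid two squares to the left,
# displacing tiles 7 and 8 by one column each.
GOAL  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
STATE = (1, 2, 3, 4, 5, 6, 0, 7, 8)
```

Since a misplaced tile is at Manhattan distance at least 1, h1(n) <= h2(n) always holds, which is why h2 dominates h1.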

31 Heuristic Quality Effective branching factor b*: the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes. An A* search of depth 5 using 52 nodes requires a b* of about 1.92: 52 + 1 = 1 + b* + (b*)^2 + (b*)^3 + (b*)^4 + (b*)^5, i.e., 53 ~ 1 + 1.92 + 3.68 + 7.07 + 13.58 + 26.09. b* is useful for judging the overall usefulness of a heuristic by measuring it on a small set of problems. Well-designed heuristics have a b* value close to 1.
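The equation 1 + b + ... + b^d = N + 1 has no closed-form solution for b, but since the left side grows monotonically in b, bisection recovers b* numerically. A sketch (function and parameter names are mine):

```python
def effective_branching_factor(n_nodes, depth, tol=1e-9):
    """Solve 1 + b + b**2 + ... + b**depth == n_nodes + 1 for b.

    The polynomial is monotonically increasing in b for b > 0, so
    bisecting between b = 1 and b = n_nodes brackets the root.
    """
    target = n_nodes + 1
    lo, hi = 1.0, float(n_nodes)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** i for i in range(depth + 1)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Plugging in the slide's numbers (52 nodes, depth 5) gives b* ~ 1.92; a complete binary tree (6 non-root-excluded nodes at depth 2, i.e. 1 + 2 + 4 = 7 total) gives exactly 2.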

32 8-Puzzle Heuristics Returning to the two heuristics: h1(n) = number of misplaced tiles = 7; h2(n) = sum of Manhattan distances = 2+3+3+2+4+2+0+2 = 18. Typical search costs: d = 14: IDS = more than 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes. d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes.

33 Heuristics Given h1(n) < h2(n) <= h*(n), A* with h2 never expands more nodes than A* with h1, because h2 dominates h1. Recall that an admissible heuristic never overestimates the cost to the goal. Therefore it is always better to use a heuristic h2(n) that dominates another heuristic h1(n), provided that h2(n) does not overestimate and that the computation time for h2(n) is not too long.

34 Heuristics Defining a heuristic depends upon the particular problem domain; if the problem is a game, then on its rules. Multiple heuristics can be combined into one heuristic: h(n) = max{h1(n), ..., hx(n)}. Inductive learning can also be employed to develop a heuristic.
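The max combination works because each component is at most h*(n), so their pointwise maximum is too, yet the maximum dominates every component. As a one-line sketch (the component heuristics below are hypothetical, for demonstration only):

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics: the pointwise max is still admissible
    and dominates each component."""
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical component heuristics over integer states:
h1 = lambda n: n % 5
h2 = lambda n: n % 3
h = max_heuristic(h1, h2)
```

Whichever component is more informed at a given state wins, state by state.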

35 Sudoku Heuristics Define at least one heuristic to incorporate into an A* search solution for Sudoku.

36 Local Search Local search is applicable to problems in which only the final solution matters, not the path to it, for example the eight-queens problem: we only care about the final configuration, not how it came about. The search looks only at the local state and moves only to neighbors of that state; paths to the current state are not maintained. It is not a systematic search method, unlike the others we have discussed.

37 Local Search Advantages Minimal memory usage, typically a constant amount. Local search can typically find reasonable solutions for problems with large or continuous state spaces, to which systematic algorithms are not suited. It is very good for optimization problems, where we want to find the best possible state under the objective function.

38 State Space Landscape A landscape has a state (location) and an elevation. If elevation is a heuristic cost, we seek the global minimum; if elevation is an objective function, we seek the global maximum.

39 Local Search Complete local searches always find a goal if one exists. Optimal algorithms always find a global minimum or maximum.

40 Hill Climbing Search Hill climbing moves in the direction of increasing value and terminates when no neighbor has a higher value. Remember: there is no look-ahead; the algorithm only knows about the immediate neighbors. If there is a set of equally good successors, the algorithm chooses randomly from the set. Hill climbing is also called greedy local search.

41 Hill Climbing Search Advantage: it can often find a solution rapidly. Disadvantages: it easily becomes stuck. Local maxima: a peak that is higher than all its neighbors but lower than the global maximum. Ridges: a series of local maxima. Plateaus: an area where the elevation function is flat; this can be resolved by allowing sideways moves, usually with a limit on the number of consecutive sideways moves. Success is very dependent upon the shape of the state-space landscape.

42 Hill Climbing Search

function HILL-CLIMBING(problem) returns a state that is a local maximum
    local variables: current, a node
                     neighbor, a node
    current <- MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbor <- a highest-valued successor of current
        if VALUE[neighbor] <= VALUE[current] then return STATE[current]
        current <- neighbor
end
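The pseudocode above transcribes directly into Python (a sketch; the caller supplies the neighbor and value functions, and the demo landscape below is hypothetical):

```python
def hill_climbing(start, neighbors, value):
    """Greedy local search: move to the best neighbor until none improves."""
    current = start
    while True:
        succs = neighbors(current)
        if not succs:
            return current
        best = max(succs, key=value)
        if value(best) <= value(current):
            return current      # local maximum reached
        current = best

# Hypothetical demo: climb the integer line toward the peak of -(x - 3)**2.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

On this single-peak landscape every start converges to x = 3; the next slide's variations exist precisely because real landscapes have more than one peak.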

43 Variations Stochastic hill climbing (SHC): chooses randomly among the uphill moves; the probability of selection can vary with the steepness of the move. First-choice hill climbing: like SHC, but randomly generates successors until one is generated that is better than the current state.

44 Variations The hill climbing algorithms discussed thus far are incomplete. Random-restart hill climbing runs multiple hill climbing searches from randomly generated initial states and will eventually randomly generate the goal. Additional local search algorithms exist: simulated annealing, local beam search.
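Random restart is a thin wrapper around the basic climber. The sketch below uses a hypothetical two-peak landscape (all names and values are illustrative) to show that restarts can escape the lower local maximum:

```python
import random

def hill_climbing(start, neighbors, value):
    """Greedy local search, as on the previous slide."""
    current = start
    while True:
        succs = neighbors(current)
        if not succs:
            return current
        best = max(succs, key=value)
        if value(best) <= value(current):
            return current
        current = best

def random_restart(states, neighbors, value, restarts, seed=0):
    """Run hill climbing from several random starts; keep the best result."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        result = hill_climbing(rng.choice(states), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best

# Two-peak landscape over 0..10: local max at x = 2 (value 5),
# global max at x = 8 (value 7).
ELEV = [3, 4, 5, 4, 3, 4, 5, 6, 7, 6, 5]
value = lambda x: ELEV[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
```

Each individual run ends on one of the two peaks; with enough restarts the global peak at x = 8 is found with probability approaching 1.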

