
1 HEURISTIC SEARCH (INFORMED SEARCH)

2 HEURISTICS • From Greek heuriskein, “to find”. • Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem. • Computer science: relating to or using a problem-solving technique in which the most appropriate solution, of several found by alternative methods, is selected at successive stages of a program for use in the next step of the program.

3 HEURISTICS • The study of methods and rules of discovery and invention. • In state space search, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution.

4 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –First: A problem may not have an exact solution, because of inherent ambiguities in the problem statement or available data. Medical diagnosis is an example of this. A given set of symptoms may have several possible causes; doctors use heuristics to choose the most likely diagnosis and formulate a plan of treatment.

5 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –First: A problem may not have an exact solution, because of inherent ambiguities in the problem statement or available data. Vision is another example of an inherently inexact problem. Visual scenes are often ambiguous, allowing multiple interpretations of the connectedness, extent and orientation of objects.

6 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –First: A problem may not have an exact solution, because of inherent ambiguities in the problem statement or available data. Optical illusions exemplify these ambiguities. Vision systems use heuristics to select the most likely of several possible interpretations of a given scene.

7 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –Second: A problem may have an exact solution, but the computational cost of finding it may be prohibitive. In many problems, state space growth is combinatorially explosive, with the number of possible states increasing exponentially or factorially with the depth of the search.

8 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –Second: A problem may have an exact solution, but the computational cost of finding it may be prohibitive. Heuristic search addresses this by guiding the search along the most “promising” path through the space.

9 HEURISTIC SEARCH Introduction: AI problem solvers employ heuristics in two situations: –Second: A problem may have an exact solution, but the computational cost of finding it may be prohibitive. By eliminating unpromising states and their descendants from consideration, a heuristic algorithm can defeat this combinatorial explosion and find an acceptable solution.

10 HEURISTIC SEARCH Introduction: Heuristics and the design of algorithms to implement heuristic search have long been an important part of artificial intelligence research. Game playing and theorem proving require heuristics to reduce the search space and simplify finding a solution.

11 HEURISTIC SEARCH Introduction: It is not feasible to examine every inference that can be made in a space of reasoning, or every possible move in a board game, in order to reach a solution. In such cases, heuristic search provides a practical answer.

12 HEURISTIC SEARCH Introduction: Heuristics are fallible. –A heuristic is only an informed guess of the next step to be taken in solving a problem. It is often based on experience or intuition. –Heuristics use limited information, such as the descriptions of the states currently on the OPEN list.

13 HEURISTIC SEARCH Introduction: Heuristics are fallible. –Heuristics are seldom able to predict the exact behavior of the state space farther along in the search. –A heuristic can lead a search algorithm to a suboptimal solution. –At times a heuristic method may fail to find any solution at all.

14 Tic-Tac-Toe The combinatorics of exhaustive search are high. –Each of the nine first moves has eight possible responses, –which in turn have seven continuing moves, and so on. –A simple analysis puts exhaustive search at 9 × 8 × 7 × … or 9!.

15 Tic-Tac-Toe Symmetry reduction can decrease the search space: there are really only three initial moves. –To a corner. –To the center of a side. –To the center of the grid.

16 Tic-Tac-Toe Symmetry reductions on the second level of states further reduce the number of possible paths through the space. In the following figure the search space is smaller than the original space.

17 Tic-Tac-Toe “Most Wins” Heuristic –The algorithm analyzes the moves to find the one in which X has the most winning lines. –It then selects and moves to the state with the highest heuristic value, i.e., X takes the center of the grid.

18 Tic-Tac-Toe “Most Wins” Heuristic –Other alternatives and their descendants are eliminated. –Approximately two-thirds of the space is pruned away with the first move.

19 Tic-Tac-Toe “Most Wins” Heuristic –After the first move, the opponent can choose either of two alternative moves.

20 Tic-Tac-Toe “Most Wins” Heuristic –After the opponent's move, the heuristic can be applied to the resulting state of the game. –As search continues, each move evaluates the children of a single node; exhaustive search is not required. –The figure shows the reduced search after three steps in the game (states are marked with their heuristic values).
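
To make the idea concrete, here is a minimal Python sketch (an illustration added to these notes, not part of the original slides) of one way to score a board by counting the winning lines still open to X. The board encoding and function names are assumptions.

```python
# Hypothetical sketch of the "most wins" heuristic: count the winning lines
# that pass through X's pieces and are not yet blocked by O.
# A board is a list of 9 cells containing 'X', 'O', or ' ' (empty).
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def most_wins(board, player='X', opponent='O'):
    """Number of winning lines containing at least one of player's pieces
    and none of the opponent's (lines the player could still complete)."""
    return sum(
        1 for line in WIN_LINES
        if any(board[i] == player for i in line)
        and all(board[i] != opponent for i in line)
    )

# Opening moves: the centre lies on 4 winning lines, a corner on 3, a side on 2,
# so the heuristic prefers the centre.
empty = [' '] * 9
centre = empty[:]; centre[4] = 'X'
corner = empty[:]; corner[0] = 'X'
side = empty[:]; side[1] = 'X'
print(most_wins(centre), most_wins(corner), most_wins(side))   # 4 3 2
```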

21 Tic-Tac-Toe COMPLEXITY –It is difficult to compute the exact number of states that must be examined. However, a crude upper bound can be computed by assuming a maximum of nine moves in a game and eight children per move.

22 Tic-Tac-Toe COMPLEXITY –In reality, the number of states will be smaller, as the board fills and reduces our options; in addition, the opponent is responsible for half the moves. Even this crude upper bound of 8 × 9, or 72, states is an improvement of four orders of magnitude over 9!.
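
For reference, a quick check of the figures quoted on these two slides:

```python
import math

# Exhaustive tic-tac-toe search (9!) versus the crude heuristic bound of
# at most 8 children considered for each of 9 moves.
exhaustive = math.factorial(9)   # 362880
crude_bound = 8 * 9              # 72
print(exhaustive, crude_bound, exhaustive // crude_bound)   # 362880 72 5040
```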

23 ALGORITHM FOR HEURISTIC SEARCH Best-First Search Best-first search uses two lists to maintain states: –an OPEN list to keep track of the current fringe of the search, and –a CLOSED list to record states already visited. The algorithm orders the states on OPEN according to some heuristic estimate of their closeness to a goal.

24 ALGORITHM FOR HEURISTIC SEARCH Best-First Search Each iteration of the loop considers the most promising state on the OPEN list. The algorithm sorts and rearranges OPEN so that the state with the lowest heuristic value comes first.

25 BEST-FIRST-SEARCH

26 BEST-FIRST-SEARCH At each iteration, best-first search removes the first element from the OPEN list. If it meets the goal conditions, the algorithm returns the solution path that led to the goal. Each state retains ancestor information, to determine whether it had previously been reached by a shorter path and to allow the algorithm to return the final solution path. If the first element on OPEN is not a goal, the algorithm generates its descendants.

27 BEST-FIRST-SEARCH If a child state is already on OPEN or CLOSED, the algorithm checks to make sure that the state records the shorter of the two partial solution paths. Duplicate states are not retained. By updating the ancestor history of nodes on OPEN and CLOSED when they are rediscovered, the algorithm is more likely to find a shorter path to a goal.

28 BEST-FIRST-SEARCH Best-first search applies a heuristic evaluation to the states on OPEN, and the list is sorted according to the heuristic values of those states. This brings the best states to the front of OPEN. Because these estimates are heuristic in nature, the next state to be examined may be from any level of the state space. When OPEN is maintained as a sorted list, it is referred to as a priority queue.
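
A minimal Python sketch of best-first search as described on these slides, using a priority queue for OPEN and a set for CLOSED. This is a simplified illustration: rediscovered states are simply skipped rather than having their ancestor links updated, and the `is_goal`, `successors`, and `h` callables are assumed to be supplied by the caller.

```python
import heapq
import itertools

def best_first_search(start, is_goal, successors, h):
    """Greedy best-first search: OPEN is a priority queue ordered by the
    heuristic h(state); CLOSED records states already expanded."""
    counter = itertools.count()                 # tie-breaker so states are never compared
    open_list = [(h(start), next(counter), start, [start])]
    closed = set()
    while open_list:
        _, _, state, path = heapq.heappop(open_list)   # most promising state on OPEN
        if is_goal(state):
            return path                                # solution path from start to goal
        if state in closed:
            continue                                   # already expanded via another path
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list,
                               (h(child), next(counter), child, path + [child]))
    return None                                        # OPEN exhausted: failure
```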

29 BEST-FIRST-SEARCH A HYPOTHETICAL SEARCH SPACE

30 BEST-FIRST-SEARCH TRACES OF OPEN AND CLOSED LISTS

31 BEST-FIRST-SEARCH Heuristic search of a hypothetical state space, with OPEN and CLOSED states highlighted.

32 BEST-FIRST-SEARCH The goal of best-first search is to find the goal state by looking at as few states as possible: the more informed the heuristic, the fewer states are processed in finding the goal. The best-first search algorithm always selects the most promising state on OPEN for further expansion. It does not abandon all the other states but maintains them on OPEN.

33 BEST-FIRST-SEARCH In the event that a heuristic leads the search down a path that proves incorrect, the algorithm will eventually retrieve some previously generated, next-best state from OPEN and shift its focus to another part of the space. In the example, after the children of state B were found to have poor heuristic evaluations, the search shifted its focus to state C. In best-first search, the OPEN list allows backtracking from paths that fail to produce a goal.

34 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS The performance of various heuristics is evaluated for solving the 8-puzzle. The figure shows a start and goal state for the 8-puzzle, along with the first three states generated in the search.

35 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Heuristic 1 –The simplest heuristic counts the tiles out of place in each state when it is compared with the goal. –The state that has the fewest tiles out of place is probably closer to the desired goal and would be the best to examine next. –However, this heuristic does not use all of the information available in a board configuration, because it does not take into account the distance the tiles must be moved.

36 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Heuristic 2 –A “better” heuristic would sum all the distances by which the tiles are out of place, one for each square a tile must be moved to reach its position in the goal state.

37 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Heuristic 3 –The above two heuristics fail to take into account the problem of tile reversals. –If two tiles are next to each other and the goal requires them to be in opposite locations, it takes more than two moves to put them in place, as the tiles must “go around” each other. –A heuristic that takes this into account multiplies a small number (2, for example) by each direct tile reversal.
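
The three heuristics can be sketched in Python as below. This is an illustration, not the slides' own code: the goal layout shown is an assumption, and heuristic 3 is read here as a penalty of 2 per direct tile reversal.

```python
# 8-puzzle heuristics sketched for illustration. A state is a tuple of 9
# entries read row by row, with 0 marking the blank square.
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # assumed goal layout for this example

def tiles_out_of_place(state, goal=GOAL):
    """Heuristic 1: count tiles (ignoring the blank) not in their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def sum_of_distances(state, goal=GOAL):
    """Heuristic 2: sum over all tiles of the number of squares (rows plus
    columns) each tile must move to reach its goal position."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

def tile_reversal_penalty(state, goal=GOAL, penalty=2):
    """Heuristic 3 (one reading of the slide): penalty times the number of
    adjacent tile pairs that are directly reversed relative to the goal."""
    reversals = 0
    for i in range(9):
        neighbours = []
        if i % 3 != 2:
            neighbours.append(i + 1)   # square to the right
        if i + 3 < 9:
            neighbours.append(i + 3)   # square below
        for j in neighbours:
            if state[i] != 0 and state[j] != 0 \
               and state[i] == goal[j] and state[j] == goal[i]:
                reversals += 1
    return penalty * reversals
```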

38 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS

39 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS The “sum of distances” heuristic provides a more accurate estimate of the work to be done than the “number of tiles out of place” heuristic. The tile reversal heuristic gives an evaluation of 0, since none of these states have any direct tile reversals.

40 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Good Heuristics –Good heuristics are difficult to devise. Judgment and intuition help, but the final measure of a heuristic must be its actual performance on problem instances. –Each heuristic proposed above ignores some critical information and needs improvement.

41 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Devising a Good Heuristic –The distance from the starting state to its descendants can be measured by maintaining a depth count for each state. This count is 0 for the starting state and is incremented by 1 for each level of the search. –It records the actual number of moves that have been used to go from the starting state to each descendant.

42 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS Devising a Good Heuristic –Let the evaluation function f(n) be the sum of two components: f(n) = g(n) + h(n), where g(n) measures the actual length of the path from the start state to any state n, and h(n) is a heuristic estimate of the distance from state n to a goal.
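
As a sketch of how this combined measure changes the earlier best-first search, the priority queue can be ordered by f(n) = g(n) + h(n), with g tracked as the depth of each state. Again a simplified illustration: rediscovered states are skipped rather than re-opened with shorter paths, and the caller-supplied functions are assumptions.

```python
import heapq
import itertools

def best_first_f_search(start, is_goal, successors, h):
    """Best-first search ordered by f(n) = g(n) + h(n), where g(n) is the
    number of moves from the start state and h(n) is a heuristic estimate."""
    counter = itertools.count()                  # tie-breaker for equal f values
    open_list = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, state, path)
    closed = set()
    while open_list:
        _, _, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                f_child = (g + 1) + h(child)
                heapq.heappush(open_list,
                               (f_child, next(counter), g + 1, child, path + [child]))
    return None

# Hypothetical use with the 8-puzzle heuristics above, assuming some
# puzzle_moves(state) successor function:
# best_first_f_search(start_state, lambda s: s == GOAL, puzzle_moves, sum_of_distances)
```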

43 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS [Figure: the three successor states, with heuristic values h(n) = 5, h(n) = 3, and h(n) = 5.]

44 Each state is labeled with a letter and its heuristic weight, f(n) = g(n) + h(n). The number at the top of each state indicates the order in which it was taken off the OPEN list.

45 Successive stages of the OPEN and CLOSED lists that generate the graph are:

46

47 IMPLEMENTING HEURISTIC EVALUATION FUNCTIONS The g(n) component of the evaluation function gives the search more of a breadth-first flavor. This prevents it from being misled by an erroneous evaluation: if a heuristic continually returns “good” evaluations for states along a path that fails to reach a goal, the g value will grow to dominate h and force the search back to a shorter solution path. This guarantees that the algorithm will not become permanently lost, descending an infinite branch.

48 ADMISSIBILITY MEASURES A search algorithm is admissible if it is guaranteed to find a minimal path to a solution whenever such a path exists. Breadth-first search is an admissible search strategy. In determining the properties of admissible heuristics, we define an evaluation function f*. Let f(n) = g(n) + h(n), where g(n) measures the depth (cost) at which state n has been found in the graph, h(n) is the heuristic estimate of the cost from n to a goal, and f(n) is the estimated total cost of the path from the start state through node n to the goal state. Let f*(n) = g*(n) + h*(n), where g*(n) is the cost of the shortest path from the start node to node n, h*(n) is the actual cost of the shortest path from n to the goal, and f*(n) is the actual cost of the optimal path from the start node through node n to a goal node.

49 ADMISSIBILITY MEASURES In an algorithm A, ideally the function f should be a close estimate of f*. –The cost of the current path to state n, g(n), is a reasonable estimate of g*(n), with g(n) ≥ g*(n); they are equal only if the search has discovered the optimal path to state n. –Similarly, compare h(n) with h*(n): h(n) is bounded from above by h*(n), i.e., h(n) is always less than or equal to the actual cost of a minimal path, h(n) ≤ h*(n). If algorithm A uses an evaluation function f in which h(n) ≤ h*(n), the algorithm is called algorithm A*.

50 The heuristic of counting the number of tiles out of place is certainly less than or equal to the number of moves required to move them to their goal positions, i.e. h(n) ≤ h*(n). Thus this heuristic is admissible and guarantees an optimal solution path.

51 Comparison of heuristic search with breadth-first search. The heuristic used is f(n) = g(n) + h(n), where h(n) is the number of tiles out of place. The portion of the graph searched heuristically is shaded. The optimal solution path is in bold.

52 If breadth-first search is considered an A* algorithm with heuristic h1, then h1 must be less than or equal to h* (as breadth-first search is admissible). The number of tiles out of place with respect to the goal state is another heuristic, h2. As this is also admissible, h2 is less than or equal to h*.

53 In this case h1 ≤ h2 ≤ h*. It follows that the “number of tiles out of place” heuristic is more informed than breadth-first search. Both h1 and h2 find the optimal path, but h2 evaluates many fewer states in the process. Similarly, the heuristic that calculates the sum of the direct distances by which the tiles are out of place is again more informed than the count of the number of tiles out of place with respect to the goal state.

54 If a heuristic h2 is more informed than h1, then the set of states examined by h2 is a subset of those examined by h1. In general, the more informed an A* algorithm is, the less of the space it needs to expand to find the optimal solution.

55 OPTIMIZATION IN SEARCH Optimization Problems: –Trying to find a schedule for flights that minimizes congestion at an airport. –Finding a lowest-cost route in the Traveling Salesperson Problem. Typically, in most optimization problems the branching factor is very large. Best-first search becomes inefficient and slows down because it has to store so many nodes to keep track of which nodes to explore next.

56 OPTIMIZATION IN SEARCH We need a way around this problem: use algorithms that do not store nodes that have earlier been rejected because they lacked heuristic merit.

57 HILL CLIMBING SEARCH A variation of best-first search. Uses a heuristic evaluation function to compare states. Does not store nodes that have earlier been rejected on the basis of heuristic merit. Two types: –Simple hill climbing search. –Steepest-ascent hill climbing search.

58 SIMPLE HILL CLIMBING Uses the heuristic to move to states that are better than the current state. Always move to a better state when possible. The process ends when all operators have been applied and none of the resulting states is better than the current state.
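
A minimal sketch of simple hill climbing under these rules; the function names and the value convention (higher is better) are assumptions for illustration.

```python
def simple_hill_climbing(start, successors, value):
    """Move to the first successor strictly better than the current state;
    stop when no successor improves. The result may be only a local maximum."""
    current = start
    while True:
        improved = False
        for child in successors(current):
            if value(child) > value(current):
                current = child          # accept the first better state found
                improved = True
                break
        if not improved:
            return current               # no operator yields a better state

# Steepest-ascent hill climbing differs only in examining all successors and
# moving to the best one, e.g. best = max(successors(current), key=value).
```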

59 SIMPLE HILL CLIMBING Heuristic Merit: the state with the larger heuristic evaluation is selected. [Figure: a search tree of heuristic values illustrating a climb that reaches a local maximum rather than the goal.]

60 SIMPLE HILL CLIMBING Potential Problems –Search terminates when a local maximum is found. –The order of application of operators can make a difference. –It cannot see past a single move in the state space.

61 HILL CLIMBING TERMINATION Both simple and steepest-ascent hill climbing may fail to find a solution. Either algorithm may terminate not by finding a goal state but by getting to a state from which no better states can be generated. This happens if the program has reached: –A local maximum. –A plateau. –Or a ridge.

62 HILL CLIMBING TERMINATION Local maximum: all neighboring states are worse. Plateau: all neighboring states are the same as the current state. Ridge: a local maximum caused by the inability to apply two operators at once. [Figure: small state-space diagrams with heuristic values illustrating each case.]

63 HILL CLIMBING TERMINATION Techniques for dealing with the problems of local maxima, plateaus and ridges: –Backtrack and try going in some other direction. To implement this strategy, maintain a list of paths. Particularly suitable for local maxima. –Make a big jump in some other direction in order to try to get to a new section of the search space. If the available rules describe single small steps, apply them several times in the same direction. Particularly suitable for dealing with plateaus. –Apply two or more rules before comparing heuristic evaluations. A particularly good strategy for dealing with ridges.

64 HILL CLIMBING Hill climbing is intrinsically a local method, i.e. it decides what to do next by looking only at the immediate consequences of its choice. However, the heuristic used by a hill climbing algorithm does not need to be a static function of a single state: information about the global context may be encoded in the heuristic function, so that the heuristic can look ahead many states.

65 HILL CLIMBING The main advantage of hill climbing search methods is that they are less combinatorially explosive. The main disadvantage is the lack of any guarantee that they will be effective.

66 ADVERSARY SEARCH

67 MINIMAX PROCEDURE

68 HEURISTICS IN GAMES The Minimax Procedure Applicable to two-person games. Such games are complicated to program because of a “hostile and unpredictable” opponent. The problem is characterized as systematically searching the space of: –the player's own possible moves, –and countermoves by the opponent. Because of the enormity of the search space, the game player must use heuristics to guide play along a path to a winning state.

69 MINIMAX PROCEDURE A Game Called NIM The state space can be exhaustively searched. A pile of tokens is placed on the table. At each move the player must divide a pile into two piles of different sizes; the new piles must not hold equal numbers of tokens. The player who can no longer make a legal move loses the game. Figure: the state space for a game with 7 tokens.

70 MINIMAX PROCEDURE A Game Called NIM The main difficulty is accounting for the actions of the opponent. The opponents in a game are referred to as MIN and MAX. MAX: the player trying to win. MIN: the player trying to minimize MAX's score. Each level in the search space is labeled according to whose move it is at that point in the game. Each leaf node is given a value of 1 or 0 (win for MAX or win for MIN).

71 MINIMAX PROCEDURE A Game Called NIM MINIMAX propagates the 1s and 0s up the graph through the parent nodes according to two rules: –If the parent is a MAX state, give it the maximum value of its children. –If the parent is a MIN state, give it the minimum value of its children. These derived values are then used to choose among possible moves.
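
A sketch of exhaustive minimax for a small game like NIM, where leaves can be scored 1 (win for MAX) or 0 (win for MIN). The state representation and the `successors` and `leaf_value` callables are assumptions.

```python
def minimax(state, is_max, successors, leaf_value):
    """Back up 1/0 values through the game graph: MAX parents take the maximum
    of their children's values, MIN parents take the minimum."""
    children = list(successors(state))
    if not children:                       # no legal move: a leaf of the game
        return leaf_value(state, is_max)   # 1 if this is a win for MAX, else 0
    values = [minimax(child, not is_max, successors, leaf_value)
              for child in children]
    return max(values) if is_max else min(values)
```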

72 MINIMAX PROCEDURE Minimaxing to a Fixed Ply Depth N-Ply Look-Ahead: For games with large state spaces, exhaustive search is not possible. In such games the state space is searched by levels, or plies. Game-playing programs typically look ahead a fixed ply depth. The states on that ply are measured heuristically and the values are propagated back up the graph using MINIMAX. The search algorithm then uses these derived values to select among possible moves.

73 MINIMAX PROCEDURE Fixed-Depth MINIMAX Evaluations are assigned to each state on the cutoff ply. These values are then propagated up to each parent node. If the parent is on a MIN level, then the minimum value of the children is backed up. MINIMAX with a 4-ply look-ahead.

74 MINIMAX PROCEDURE Fixed-Depth MINIMAX If the parent is on a MAX level, then the maximum value of the children is backed up. In this way values are backed up the graph to the children of the current state; these values are then used by the current state to select among its children. MINIMAX with a 4-ply look-ahead.
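
Sketched in the same style, minimax to a fixed ply depth replaces true win/loss values with a heuristic evaluation at the cutoff. The `evaluate` function, scored from MAX's point of view, is an assumption supplied by the caller.

```python
def minimax_fixed_depth(state, depth, is_max, successors, evaluate):
    """Search `depth` plies below `state`; evaluate(state) gives a heuristic
    score from MAX's point of view at the cutoff or at terminal states."""
    children = list(successors(state))
    if depth == 0 or not children:
        return evaluate(state)
    values = [minimax_fixed_depth(c, depth - 1, not is_max, successors, evaluate)
              for c in children]
    return max(values) if is_max else min(values)

# MAX choosing a move with a 4-ply look-ahead (one ply is consumed by the move itself):
# best = max(successors(current),
#            key=lambda s: minimax_fixed_depth(s, 3, False, successors, evaluate))
```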

75 MINIMAX PROCEDURE Nature of Heuristics in Game Playing: Each player attempts to overcome the other, so heuristics can measure the advantage of one player over another. In chess: –A simple heuristic: piece advantage is important, so a simple heuristic takes the difference in the number of pieces belonging to MAX and MIN and tries to maximize the difference between these piece measures.

76 MINIMAX PROCEDURE Nature of Heuristics in Game Playing: Each player attempts to overcome the other, so heuristics can measure the advantage of one player over another. In chess: –A more sophisticated heuristic assigns different values to the pieces, depending on their type (queen, pawn or king) and on their locations on the board.

77 HEURISTIC FOR TIC-TAC-TOE Two-Ply MINIMAX The heuristic attempts to measure the conflict in the game. It counts all winning lines open to MAX and subtracts the winning lines available to MIN. Search attempts to maximize this difference.
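
The conflict measure described here can be sketched as follows (repeating the winning-line table from the earlier tic-tac-toe sketch so the snippet stands alone; the board encoding is again an assumption):

```python
# e(n) = (winning lines still open to MAX) - (winning lines still open to MIN),
# where a line is "open" to a player if it contains none of the opponent's pieces.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def open_lines(board, opponent):
    """Count winning lines that contain no piece belonging to `opponent`."""
    return sum(1 for line in WIN_LINES
               if all(board[i] != opponent for i in line))

def evaluate(board, max_piece='X', min_piece='O'):
    """Heuristic from MAX's point of view, assuming MAX plays X."""
    return open_lines(board, min_piece) - open_lines(board, max_piece)
```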

78 HEURISTIC FOR TIC-TAC-TOE TWO-PLY MINIMAX: APPLIED TO THE OPENING MOVE (levels: MAX, MIN, MAX)

79 HEURISTIC FOR TIC-TAC-TOE TWO-PLY MINIMAX: ONE OF TWO POSSIBLE MAX SECOND MOVES (levels: MAX, MIN, MAX)

80 HEURISTIC FOR TIC-TAC-TOE TWO-PLY MINIMAX applied to a MAX move near the end of the game. +∞: a forced win for MAX. -∞: a forced win for MIN. (Levels: MAX, MIN, MAX.)

