SEARCH TECHNIQUES
SEARCH TECHNIQUES
Search techniques fall into two families:
- Blind search: Depth-First Search (DFS), Breadth-First Search (BFS)
- Heuristic search: Hill-Climbing Search, Best-First Search, Greedy Search, A* Search
BLIND SEARCH ALGORITHM
1. Depth-First Search (DFS) - the algorithm:

open := [start];
closed := [];
while open != [] do
    remove the leftmost state from open, call it X;
    if X is a goal then return success
    else begin
        generate the children of X;
        put X on closed;
        eliminate any children of X already on open or closed;
        put the remaining children, in order, on the LEFT end of open;
    end
end while
return failure;
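A minimal Python sketch of the same open/closed-list procedure. The adjacency list is a hypothetical example (loosely following the tree on the next slide), not something given on the slides:

```python
def depth_first_search(start, goal, graph):
    """Blind DFS using explicit open (stack) and closed lists."""
    open_list = [start]            # open := [start]
    closed = []                    # closed := []
    while open_list:               # while open != []
        x = open_list.pop(0)       # remove the leftmost state from open
        if x == goal:              # if X is a goal then return success
            return True
        closed.append(x)           # put X on closed
        children = [c for c in graph.get(x, [])
                    if c not in closed and c not in open_list]
        open_list = children + open_list   # put children on the LEFT end of open
    return False                   # return failure

# Hypothetical adjacency lists, just to exercise the function
graph = {"A": ["B", "C", "D"], "B": ["E", "F"], "E": ["K", "L"], "K": ["S"]}
print(depth_first_search("A", "S", graph))   # True
```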
EXAMPLE (Depth-First Search - DFS)
Depth-first search examines the nodes in the following order: A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R.
(The slide traces the OPEN and CLOSED lists step by step on a tree with start node A, interior nodes B through T, and goal node U: at each step the leftmost node of OPEN is removed and placed on CLOSED, and its unexplored children are put on the left end of OPEN.)
Depth-First Search (DFS)
go(X,X,[X]).
go(X,Y,[X|T]) :- link(X,Z), go(Z,Y,T).

| ?- go(a,c,X).
X = [a,e,f,c] ? ;
X = [a,b,f,c] ? ;
X = [a,b,c] ? ;
no

(The link/2 facts are not shown on the slide; a database consistent with the answers above would be:
link(a,e). link(a,b). link(e,d). link(e,f). link(b,f). link(b,c). link(f,c).)

This simple search algorithm uses Prolog's unification routine to find the first link from the current node and then follows it. It always follows the left-most branch of the search tree first, following it down until it either finds the goal state or hits a dead end. It will then backtrack to find another branch to follow. This is depth-first search.
BLIND SEARCH ALGORITHM
2. Breadth-First Search (BFS) - the algorithm:

open := [start];
closed := [];
while open != [] do
    remove the leftmost state from open, call it X;
    if X is a goal then return success
    else begin
        generate the children of X;
        put X on closed;
        eliminate any children of X already on open or closed;
        put the remaining children, in order, on the RIGHT end of open;
    end
end while
return failure;
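The corresponding Python sketch for BFS. The only change from the DFS sketch is that children go on the RIGHT end of open, turning it into a FIFO queue; the example graph is again a hypothetical one:

```python
from collections import deque

def breadth_first_search(start, goal, graph):
    """Blind BFS: like the DFS sketch, but children are appended to the RIGHT end of open."""
    open_list = deque([start])     # open := [start]
    closed = []                    # closed := []
    while open_list:               # while open != []
        x = open_list.popleft()    # remove the leftmost state from open
        if x == goal:
            return True
        closed.append(x)           # put X on closed
        for c in graph.get(x, []):
            if c not in closed and c not in open_list:
                open_list.append(c)    # put children on the RIGHT end of open
    return False

graph = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"], "D": ["H"]}
print(breadth_first_search("A", "G", graph))   # True
```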
Example (Breadth-First Search - BFS)
Initial state: A. Goal state: L. The example node set is a tree containing nodes A through Z, laid out level by level. (The next slide steps through a BFS of this tree.)
BREADTH-FIRST SEARCH PATTERN
- We begin with our initial state, the node labeled A, which is placed on the queue.
- Node A is removed from the queue and expanded, revealing further (unexpanded) nodes; each revealed node is added to the END of the queue.
- The search then moves to the first node in the queue, expands it, removes it from the queue, and again adds the revealed nodes to the END of the queue.
- The search backtracks to expand the remaining nodes of the current level (for example node C) before descending, and the process continues level by level.
- When node L is located, the search returns a solution.
(The original animation also displays, at every step, the queue contents, the queue size, the number of nodes expanded, the current action - expanding or backtracking - and the current level.)
Another EXAMPLE (Breadth-First Search - BFS)
Breadth-first search examines the nodes in the following order: A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U.

Trace of the OPEN and CLOSED lists (initial state A, goal state U):
OPEN: A                    CLOSED: (empty)
OPEN: B C D                CLOSED: A
OPEN: C D E F              CLOSED: B A
OPEN: D E F G H            CLOSED: C B A
OPEN: E F G H I J          CLOSED: D C B A
OPEN: F G H I J K L        CLOSED: E D C B A
OPEN: G H I J K L M        CLOSED: F E D C B A
OPEN: H I J K L M N        CLOSED: G F E D C B A
OPEN: I J K L M N O P      CLOSED: H G F E D C B A
OPEN: J K L M N O P Q      CLOSED: I H G F E D C B A
OPEN: K L M N O P Q R      CLOSED: J I H G F E D C B A
OPEN: L M N O P Q R S      CLOSED: K J I H G F E D C B A
OPEN: M N O P Q R S T      CLOSED: L K J I H G F E D C B A
OPEN: N O P Q R S T        CLOSED: M L K J I H G F E D C B A
OPEN: O P Q R S T          CLOSED: N M L K J I H G F E D C B A
OPEN: P Q R S T            CLOSED: O N M L K J I H G F E D C B A
OPEN: Q R S T U            CLOSED: P O N M L K J I H G F E D C B A
OPEN: R S T U              CLOSED: Q P O N M L K J I H G F E D C B A
OPEN: S T U                CLOSED: R Q P O N M L K J I H G F E D C B A
OPEN: T U                  CLOSED: S R Q P O N M L K J I H G F E D C B A
OPEN: U                    CLOSED: T S R Q P O N M L K J I H G F E D C B A

(The tree has initial state A, interior nodes B through T, and goal state U.)
Breadth-First Search (BFS)
| ?- go(a,c,X).
X = [a,b,c] ? ;
X = [a,e,f,c] ? ;
X = [a,b,f,c] ? ;
no

Depth-first visits the nodes as: A; E, D; F, C; B, F, C; C (left-most branch first).
Breadth-first visits them level by level: A; E, B; D, F, F, C; C, C (1st, 2nd and 3rd levels below the root).

A simple, common alternative to depth-first search is breadth-first search. This checks every node at one level of the space before moving on to the next level.
Blind Search Strategies
- Breadth-first search: expand all the nodes of one level first.
- Depth-first search: expand one of the nodes at the deepest level.
What is the Complexity of Breadth-First Search?
Time complexity
Assume (worst case) that there is one goal leaf at the right-hand side of the tree, so BFS will expand all nodes up to depth d:
1 + b + b^2 + ... + b^d = O(b^d)
Space complexity
How many nodes can be in the queue (worst case)? While expanding the last node at depth d-1, the queue holds roughly b^d unexpanded nodes: O(b^d).
(Figure: search trees with levels d = 0, 1, 2 and the goal G at the rightmost leaf.)
Examples of Time and Memory Requirements for Breadth-First Search
Depth of solution   Nodes expanded   Time             Memory
0                   1                1 millisecond    100 bytes
2                   111              0.1 seconds      11 kilobytes
4                   11,111           11 seconds       1 megabyte
8                   ~10^8            31 hours         11 gigabytes
12                  ~10^12           35 years         111 terabytes

Assuming b = 10, 1,000 nodes/second, 100 bytes/node.
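These figures follow from the stated assumptions. A small sketch to reproduce them (the depths listed are the ones still legible on the slide; this is only a check of the arithmetic):

```python
# Reproduce the BFS time/memory table from its assumptions:
# b = 10 (branching factor), 1,000 nodes expanded per second, 100 bytes per node.
b, nodes_per_sec, bytes_per_node = 10, 1_000, 100

for d in (0, 2, 4, 8, 12):                    # depths visible on the slide
    nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
    seconds = nodes / nodes_per_sec
    memory_bytes = nodes * bytes_per_node
    print(f"depth {d:2d}: {nodes:>16,d} nodes, {seconds:>14,.1f} s, {memory_bytes:>18,d} bytes")
```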
What is the Complexity of Depth-First Search?
Time complexity
Assume (worst case) that there is one goal leaf at the right-hand side of the deepest level, so DFS will expand all nodes down to the maximum depth m:
1 + b + b^2 + ... + b^m = O(b^m)
Space complexity
How many nodes can be on the open list (worst case)? At each level above the deepest we keep at most b-1 unexpanded siblings, and at the maximum depth m we keep b nodes, so the total is (m-1)(b-1) + b = O(bm).
(Figure: search trees with levels d = 0 to 4 and the goal G at the deepest level.)
Blind Search Strategies (cont.)
Criterion    Breadth-First   Depth-First
Time         b^d             b^m
Space        b^d             bm
Optimal?     Yes             No
Complete?    Yes             No

b: branching factor, d: solution depth, m: maximum depth
Depth-first vs. Breadth-first
Advantages of depth-first:
- Simple to implement.
- Needs relatively little memory for storing the state space.
Disadvantages of depth-first:
- May fail to find a solution (it can get stuck in an infinitely long branch) - not complete.
- Not guaranteed to find an optimal solution (it may not find the shortest-path solution).
- Can take much longer to find a solution.
Advantages of breadth-first:
- Guaranteed to find a solution if one exists - complete.
- Depending on the problem, can be guaranteed to find an optimal solution.
Disadvantages of breadth-first:
- More complex to implement.
- Needs a lot of memory for storing the state space if the search space has a high branching factor.
HEURISTIC SEARCH ALGORITHM
Heuristic: we use a heuristic function, or knowledge, in order to explore the most promising state first. [Heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable solution.] This may lead to a sub-optimal solution, or fail to find any solution (i.e., heuristics do not guarantee the best solution, or even a solution).
AI problem solvers employ heuristics in two basic situations:
1. The problem may not have an exact solution because of inherent ambiguities in the problem statement or the available data (examples: medical diagnosis, vision).
2. The problem may have an exact solution, but the exact method is too inefficient: it is not feasible to examine every state (e.g., theorem proving and playing chess).
EXAMPLE - heuristic functions for the 8-tile puzzle:
1. The number of tiles out of place. [The state that has the fewest tiles out of place is probably closer to the desired goal and would be best to examine next.]
2. The sum of the distances of each tile from its correct position in the goal state.
EXAMPLE (cont.)
A third heuristic: 2 x the number of direct tile reversals (pairs of adjacent tiles that must be swapped with each other to match the goal).
(Figure: the three heuristics - tiles out of place, sum of distances out of place, and 2 x the number of direct tile reversals - applied to candidate states of the 8-puzzle, compared against the goal configuration.)
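A minimal sketch of the first two heuristics. The board encoding, the goal layout and the example start state are illustrative assumptions (0 denotes the blank, which is not counted):

```python
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)          # assumed goal layout; 0 is the blank

def tiles_out_of_place(state, goal=GOAL):
    """Heuristic 1: number of tiles (excluding the blank) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def sum_of_distances(state, goal=GOAL):
    """Heuristic 2: sum of each tile's (Manhattan) distance from its goal position."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

start = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)         # an arbitrary example state
print(tiles_out_of_place(start), sum_of_distances(start))   # 4 5
```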
HILL-CLIMBING SEARCH
- Expand the current state in the search and evaluate its children.
- The best child is selected for further expansion; neither its siblings nor its parent are retained.
- The search halts when it reaches a state that is better than any of its children (i.e., the process ends when all operators have been applied and none of the resulting states is better than the current state).
Disadvantages (-ve):
1. Local maxima.
2. Flat areas (plateaus).
3. Cannot backtrack (why?).
Advantage (+ve): low memory requirements.
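A minimal sketch of the procedure just described. The neighbour and evaluation functions below are illustrative assumptions (a toy one-dimensional maximization), not part of the original deck:

```python
def hill_climb(start, neighbours, score):
    """Keep only the current state; move to the best child while it improves."""
    current = start
    while True:
        children = neighbours(current)
        best = max(children, key=score)
        if score(best) <= score(current):   # no child is better: halt
            return current
        current = best                      # siblings and parent are discarded

# Toy example: maximize f(x) = -(x - 7)^2 over integer states 0..20
score = lambda x: -(x - 7) ** 2
neighbours = lambda x: [max(x - 1, 0), min(x + 1, 20)]
print(hill_climb(0, neighbours, score))     # climbs to 7
```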
EXAMPLE on HILL-CLIMBING
Hill climbing follows S → B → E → G1.
SEARCH PATH = [S0, B1, E2, G13]; cost = 1 + 2 + 3 = 6.
(Figure: the running example's search graph with start node S, nodes A, B, C, D, E, F, H, I, goal nodes G1 and G2, and visible edge labels 4, 1, 3, 2.)
HILL-CLIMBING SEARCH (cont.)
Hill climbing is an optimization technique which belongs to the family of local search. It is best used in problems with “the property that the state description itself contains all the information needed for a solution” The algorithm is memory efficient since it does not maintain a search tree: It looks only at the current state and immediate future states. Hill climbing attempts to iteratively improve the current state by means of an evaluation function. “Consider all the [possible] states laid out on the surface of a landscape. The height of any point on the landscape corresponds to the evaluation function of the state at that point”.
HILL-CLIMBING SEARCH (cont.)
In contrast with other iterative improvement algorithms, hill climbing always attempts to make changes that improve the current state. In other words, hill climbing can only advance if there is a higher point in the adjacent landscape. For example, hill climbing can be applied to the travelling salesman problem: it is easy to find an initial tour that visits all the cities, but that tour will usually be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.
TSP Example
A, B, C and D are cities; visit all cities using the shortest tour. (The slide shows the six inter-city distances 4, 6, 3, 5, 1 and 2.)
Hill climbing by swapping the order in which two cities are visited:
- Start with tour ABCD, cost 18.
- Neighbour BACD has cost 13 - take it.
- Neighbour ABCD (18) - nope; neighbour BCAD - take it (this is the optimum).
- The remaining neighbours CBAD (18), BACD (13), BCDA (18) and DCAB (13) are no better, so the search stops.
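A sketch of hill climbing on a small symmetric TSP with the 2-city-swap neighbourhood. The distance matrix below is hypothetical (the slide's exact city-to-distance assignment is not recoverable), so the tour costs differ from the 18/13 on the slide:

```python
import itertools

# Hypothetical symmetric distances between the cities A, B, C, D
DIST = {("A", "B"): 4, ("A", "C"): 6, ("A", "D"): 3,
        ("B", "C"): 5, ("B", "D"): 1, ("C", "D"): 2}

def dist(x, y):
    return DIST.get((x, y)) or DIST[(y, x)]

def tour_cost(tour):
    """Length of the closed tour visiting the cities in the given order."""
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def hill_climb_tsp(tour):
    """Repeatedly take an improving 2-city swap until no swap improves the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            neighbour = tour[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            if tour_cost(neighbour) < tour_cost(tour):
                tour, improved = neighbour, True
                break                        # take the improving neighbour, then retry
    return tour, tour_cost(tour)

print(hill_climb_tsp(["A", "B", "C", "D"]))  # (['B', 'A', 'C', 'D'], 13)
```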
HILL-CLIMBING SEARCH (cont.)
Hill climbing can get stuck at local maxima. Consider the following tree: a is the initial state, h and k are final (goal) states, and the numbers near the states are their heuristic values. When hill climbing is run on the tree we get a -> f -> g, and we are stuck at the local maximum g. Hill climbing can't go back and make a new choice (for example j or e) because it keeps no history. So how can we avoid getting stuck, in order to reach a global maximum?
HILL-CLIMBING SEARCH (cont.)
A common way to avoid getting stuck in local maxima with hill climbing is to use random restarts. In our example, if g is a local maximum, the algorithm would stop there and then pick another random node to restart from. So if j or c were picked (or possibly a, b or d), you would find the global maximum at h or k. If you again get stuck at some local maximum, you restart with yet another random node. Generally there is a limit on the number of times you can repeat this process; once the limit is reached, you select the best among all the local maxima reached during the process. Repeat enough times (iteratively) and you will find the global maximum or something close to it. Hill climbing is NOT complete and can NOT guarantee finding the global maximum; the benefit is that it requires only a fraction of the resources, which makes it a very effective optimization method in practice.
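A sketch of the random-restart idea. The two-peaked score function and the neighbourhood below are illustrative assumptions, chosen so that a single climb can get stuck on the lower peak:

```python
import random

def climb(start, neighbours, score):
    """One hill-climbing run: move to the best neighbour while it improves."""
    current = start
    while True:
        best = max(neighbours(current), key=score)
        if score(best) <= score(current):
            return current
        current = best

def random_restart(candidates, neighbours, score, restarts=10, seed=0):
    """Run hill climbing from several random starts; keep the best local maximum found."""
    rng = random.Random(seed)
    results = [climb(rng.choice(candidates), neighbours, score) for _ in range(restarts)]
    return max(results, key=score)

# Two peaks: a local maximum near x = 3 and the global maximum at x = 15
score = lambda x: -(x - 3) ** 2 if x < 10 else 20 - (x - 15) ** 2
neighbours = lambda x: [max(x - 1, 0), min(x + 1, 20)]
print(random_restart(list(range(21)), neighbours, score, restarts=5))
```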
BEST FIRST SEARCH Definition
Best-first search is another, more informed, heuristic algorithm. In its most general form it is a simple heuristic search algorithm. “Heuristic” here refers to a general problem-solving rule, or set of rules, that does not guarantee the best solution (or even any solution) but serves as a useful guide for problem solving. Best-first search is a graph-based search algorithm, meaning that the search space can be represented as a series of nodes connected by paths.
BEST FIRST SEARCH (cont.)
How it works
The name “best-first” refers to the method of exploring the node with the best “score” first. An evaluation function is used to assign a score to each candidate node. The algorithm maintains two lists: one containing the candidates yet to be explored (OPEN), and one containing the nodes already visited (CLOSED). States on OPEN are ordered according to some heuristic estimate of their “closeness” to a goal; this ordered OPEN list is referred to as a priority queue.
Since all unvisited successor nodes of every visited node are included in the OPEN list, the algorithm is not restricted to exploring only the successors of the most recently visited node. In other words, the algorithm always chooses the best of all unvisited nodes that have been generated so far, rather than being restricted to a small subset such as the immediate neighbours. Other search strategies, such as depth-first and breadth-first, do have this restriction. The advantage of this strategy is that if the algorithm reaches a dead-end node, it will continue to try other nodes.
BEST FIRST SEARCH (cont.)
Algorithm
Best-first search in its most basic form consists of the following algorithm:
The 1st step is to define the OPEN list with a single node, the starting node.
The 2nd step is to check whether or not OPEN is empty. If it is empty, the algorithm returns failure and exits.
The 3rd step is to remove the node with the best score, n, from OPEN and place it on CLOSED.
The 4th step “expands” the node n, where expansion is the identification of the successor nodes of n.
The 5th step checks each of the successor nodes to see whether one of them is the goal node. If any successor is the goal node, the algorithm returns success and the solution, which consists of a path traced backwards from the goal to the start node. Otherwise, it proceeds to the 6th step.
In the 6th step, for every successor node, the algorithm applies the evaluation function, f, to it, then checks whether the node is already on either OPEN or CLOSED. If it is on neither list, it gets added to OPEN.
Finally, the 7th step establishes a looping structure by sending the algorithm back to the 2nd step. This loop is only broken if the algorithm returns success in step 5 or failure in step 2.
BEST FIRST SEARCH (cont.)
Algorithm (cont.)
The algorithm is represented here in pseudo-code:
1. Define a list, OPEN, consisting solely of a single node, the start node, s.
2. IF the list is empty, return failure.
3. Remove from the list the node n with the best score (the node where f is minimum), and move it to a list, CLOSED.
4. Expand node n.
5. IF any successor to n is the goal node, return success and the solution (by tracing the path from the goal node to s).
6. FOR each successor node:
   a) apply the evaluation function, f, to the node;
   b) IF the node has not been on either list, add it to OPEN.
7. Loop: go back to the 2nd step.
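A runnable sketch of this pseudo-code, using a heap as the priority queue for OPEN. The heuristic values follow the table given later in the lecture, but the adjacency below is an assumption inferred from the traces, not something stated explicitly on the slides:

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Best-first skeleton: always expand the OPEN node with the lowest score f(n)."""
    open_heap = [(f(start), start)]           # step 1: OPEN = [start]
    parents, closed = {start: None}, set()
    while open_heap:                          # step 2: failure if OPEN is empty
        _, n = heapq.heappop(open_heap)       # step 3: best-scoring node moves to CLOSED
        closed.add(n)
        for succ in successors.get(n, []):    # step 4: expand n
            if succ in closed or succ in parents:
                continue                      # step 6b: already on OPEN or CLOSED
            parents[succ] = n
            if succ == goal:                  # step 5: goal test on the successors
                path = [succ]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return list(reversed(path))
            heapq.heappush(open_heap, (f(succ), succ))   # step 6a: score it, add to OPEN
    return None                               # step 2: failure

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
         "E": ["G1", "H"], "F": ["I", "G2"]}
h = {"A": 11, "B": 5, "C": 9, "D": 8, "E": 4, "F": 2, "H": 7, "I": 3}
print(best_first_search("S", "G2", graph, lambda n: h.get(n, 0)))  # ['S', 'B', 'F', 'G2']
```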
EXAMPLE on Best-First Search
Step-by-step trace:
open = [S0];                    closed = [ ]
open = [B1, A4];                closed = [S0]
open = [E2, F3, A4];            closed = [S0, B1]
open = [F3, G13, A4, H4];       closed = [S0, B1, E2]
open = [G21, I2, G13, A4, H4];  closed = [S0, B1, E2, F3]

SEARCH PATH = [S0, B1, E2, F3, G21]; cost = 1 + 3 + 1 = 5.
(Figure: the running example's search graph with nodes S, A, B, C, D, E, F, H, I, G1, G2 and visible edge labels 4, 1, 3, 2.)
BEST FIRST SEARCH (cont.)
Will the best-first search algorithm always find the shortest path?
Example:
(Figure: a search graph whose nodes 1-4 and goal nodes G carry values such as 50, 100, 40, 60, 20, 50, 90 and 30, illustrating that the path found first need not be the shortest.)
BEST FIRST SEARCH (cont.)
1. It may get stuck in an infinite branch that doesn't contain the goal.
2. It is not guaranteed to find the shortest-path solution.
Memory requirements:
- Best case: like depth-first search.
- Average case: between depth-first and breadth-first.
- Worst case: like breadth-first search.
GREEDY BEST FIRST SEARCH
Greedy best-first search uses only the heuristic estimate h(n).
EXAMPLE: S is the initial state; G1 and G2 are goal states. The table shows the heuristic estimates:

node  h(n)
A     11
B     5
C     9
D     8
E     4
F     2
H     7
I     3

(Figure: the running example's search graph with nodes S, A, B, C, D, E, F, H, I, G1, G2.)
GREEDY BEST FIRST SEARCH (cont.)
Solution: S, B, F, G2; cost = 1 + 3 + 1 = 5.
Search path: from S choose B (h = 5) over A (h = 11); from B choose F (h = 2) over E (h = 4); from F choose G2 (h = 0) over I (h = 3).
Greedy search reaches this solution while expanding fewer nodes than best-first search did, but it is not guaranteed to find the optimum solution.
SOLUTIONS AND A ALGORITHM
Solution for the worst-case memory requirement: use the beam search algorithm, which keeps only the n best states in memory.
Solution for failing to find the goal, or failing to find the shortest path: use an evaluation function that also takes into account how far a state is from the initial state, so that for any state n:
f(n) = g(n) + h(n)      (the A algorithm)
where:
g(n) measures the distance between the initial state and state n [the actual length of the path from the start to n];
h(n) is a heuristic estimate of the distance from state n to a goal.
SOLUTIONS (cont.)
Do we get the shortest path with these solutions?
No, because sometimes the value of h(n) is an overestimate rather than the actual value.
(Figure: a path towards the goal G whose nodes are annotated with values such as g(n) = 1, h(n) = 5, f(n) = 6; g(n) = 1, h(n) = 7, f(n) = 8; g(n) = 2, h(n) = 4, f(n) = 6; g(n) = 2, h(n) = 1, f(n) = 3; g(n) = 3, h(n) = 2, f(n) = 5.)
A AND A* ALGORITHM
f(n) is used to avoid getting stuck in an infinitely long branch. When the best-first search algorithm uses the evaluation f(n) = g(n) + h(n), it is called the A algorithm. But the A algorithm doesn't always give us the shortest path, because h(n) may be an overestimate.
Example: in the last graph, h(n) = 7 means we estimate that 7 moves are needed to reach the goal.
A* Algorithm f(n) = h(n) + g(n)
If we guarantee that h(n) never overestimates, then f(n) = h(n) + g(n) will give us the shortest path; that is, we require h(n) <= h*(n) for all n. If h(n) <= h*(n), then h(n) is called an admissible heuristic, where:
h(n) is the heuristic estimate;
h*(n) is the actual cost from n to the goal.
This algorithm is called the A* algorithm.
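A runnable sketch of A*, ordering OPEN by f(n) = g(n) + h(n), where g(n) is the cumulative path cost from the start. The toy graph, edge costs and heuristic table below are illustrative assumptions (the heuristic is chosen so that it never overestimates):

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A*: always expand the OPEN entry with the lowest f(n) = g(n) + h(n)."""
    open_heap = [(h(start), 0, start, [start])]        # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if goal_test(n):
            return path, g
        for succ, cost in successors.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):    # keep only the cheapest path found
                best_g[succ] = g2
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Toy graph: lists of (neighbour, edge cost), with an assumed admissible heuristic
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h_table = {"S": 4, "A": 4, "B": 1, "G": 0}             # never overestimates the true cost
print(a_star("S", lambda n: n == "G", graph, lambda n: h_table[n]))  # (['S', 'B', 'G'], 5)
```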
EXAMPLE on A* ALGORITHM (cont.)
EXAMPLE: Use the A* search algorithm to find the solution. Initial state: S; goal state: G1 or G2.

Solution:
f(A) = h(A) + g(A) = 11 + 4 = 15
f(B) = h(B) + g(B) = 5 + 1 = 6
f(E) = h(E) + g(E) = 4 + 2 = 6
f(F) = h(F) + g(F) = 2 + 3 = 5
f(I) = h(I) + g(I) = 3 + 2 = 5
f(G2) = h(G2) + g(G2) = 0 + 1 = 1
Solution path: S, B, F, G2.

Heuristic table (as before): A 11, B 5, C 9, D 8, E 4, F 2, H 7, I 3.
Values shown on the graph: g(A) = 4, g(B) = 1, g(C) = 1, g(D) = 2, g(E) = 2, g(F) = 3, g(H) = 4, g(G1) = 3, g(I) = 2, g(G2) = 1.
(Figure: the running example's search graph with nodes S, A, B, C, D, E, F, H, I, G1, G2.)
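For reference, the a_star sketch above can be run on this example. The adjacency and edge costs below are inferred from the traces on these slides (S→A = 4, S→B = 1, A→C = 1, A→D = 2, B→E = 2, B→F = 3, E→G1 = 3, E→H = 4, F→I = 2, F→G2 = 1), and h(S), h(G1) and h(G2) are assumed to be 0. The sketch uses the cumulative path cost for g(n), as defined on the SOLUTIONS slide, so its intermediate f values may differ from those listed above, but the resulting path and cost agree:

```python
graph = {"S": [("A", 4), ("B", 1)], "A": [("C", 1), ("D", 2)],
         "B": [("E", 2), ("F", 3)], "E": [("G1", 3), ("H", 4)],
         "F": [("I", 2), ("G2", 1)]}
h_table = {"S": 0, "A": 11, "B": 5, "C": 9, "D": 8, "E": 4,
           "F": 2, "H": 7, "I": 3, "G1": 0, "G2": 0}
print(a_star("S", lambda n: n in ("G1", "G2"), graph, lambda n: h_table[n]))
# (['S', 'B', 'F', 'G2'], 5)
```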