CHAPTER 4: INFORMED SEARCH & EXPLORATION
Prepared by: Ece UYKUR
Outline
- Evaluation of Search Strategies
- Informed Search Strategies
  - Best-first search
  - Greedy best-first search
  - A* search
- Admissible Heuristics
- Memory-Bounded Search
  - Iterative-Deepening A* Search
  - Recursive Best-First Search
  - Simplified Memory-Bounded A* Search
- Summary
Evaluation of Search Strategies
Search algorithms are commonly evaluated according to the following four criteria:
- Completeness: does it always find a solution if one exists?
- Time complexity: how long does it take, as a function of the number of nodes?
- Space complexity: how much memory does it require?
- Optimality: does it guarantee the least-cost solution?
Time and space complexity are measured in terms of:
- b – maximum branching factor of the search tree
- d – depth of the least-cost solution
- m – maximum depth of the search tree (may be infinite)
Before Starting…
A solution is a sequence of operators that brings you from the current state to the goal state.
The search strategy is determined by the order in which the nodes are expanded.
Uninformed search strategies use only the information available in the problem formulation:
- Breadth-first
- Uniform-cost
- Depth-first
- Depth-limited
- Iterative deepening
Uninformed search strategies look for solutions by systematically generating new states and checking each of them against the goal. This approach is very inefficient in most cases: most successor states are "obviously" bad choices, but such strategies cannot tell, because they have minimal problem-specific knowledge.
Informed Search Strategies
Informed search strategies exploit problem-specific knowledge as much as possible to drive the search. They are almost always more efficient than uninformed searches and often also optimal. Also called heuristic search.
An informed strategy uses knowledge of the problem domain to build an evaluation function f. The search strategy: for every node n in the search space, f(n) quantifies the desirability of expanding n in order to reach the goal.
The desirability values of the nodes in the fringe decide which node to expand next. f is typically an imperfect measure of the goodness of a node, so the choice it suggests is not always the right one. (A perfect evaluation function, if one could be built, would always suggest the right choice.) The general approach is called "best-first search".
Best-First Search
Choose the most desirable (seemingly best) node for expansion, based on an evaluation function (lowest cost / highest probability).
Implementation: the fringe is a priority queue in decreasing order of desirability.
- Evaluation function f(n): used to select a node for expansion (usually the lowest-cost node); measures the desirability of node n.
- Heuristic function h(n): estimates the cheapest path cost from node n to a goal node.
- g(n): the cost from the initial state to the current state n.
Best-first search strategy:
- Pick the "best" element of Q (measured by the heuristic value of its state).
- Add path extensions anywhere in Q (it may be more efficient to keep Q ordered in some way, to make it easier to find the "best" element).
There are many possible approaches to finding the best node in Q:
- Scanning Q to find the lowest value,
- Sorting Q and picking the first element,
- Keeping Q sorted by doing "sorted" insertions,
- Keeping Q as a priority queue.
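The last option, keeping Q as a priority queue, can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the `successors` and `f` callables are hypothetical interfaces chosen for the example, and `heapq` keeps Q ordered so the "best" (lowest-f) element is always at the front.

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search: always expand the element of Q with the
    lowest f-value. Q (the fringe) is kept as a priority queue via heapq.
    `successors(state)` yields (next_state, step_cost) pairs; step costs
    are ignored here because f alone ranks the fringe."""
    frontier = [(f(start), start, [start])]        # (priority, state, path)
    visited = set()                                # repeated-state checking
    while frontier:
        _, state, path = heapq.heappop(frontier)   # pick "best" element of Q
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, _cost in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (f(nxt), nxt, path + [nxt]))
    return None                                    # no solution found
```

Plugging in f(n) = h(n) gives greedy best-first search, and f(n) = g(n) + h(n) gives A*, exactly as the following slides describe.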
Two kinds of best-first search are introduced:
- Greedy best-first search
- A* search
Greedy Best-First Search
The evaluation function is the heuristic function alone. Expand the node that appears to be closest to the goal:
f(n) = h(n) = estimate of the cost from n to the closest goal.
Example: the straight-line-distance heuristic, h_SLD(n) = straight-line distance from n to Bucharest.
Greedy search expands first the node that appears to be closest to the goal, according to h(n). It is "greedy" because at each search step the algorithm tries to get as close to the goal as it can.
Romania with step costs in km
h_SLD(In(Arad)) = 366.
Notice that the values of h_SLD cannot be computed from the problem description itself. It takes some experience to know that h_SLD is correlated with actual road distances, and is therefore a useful heuristic.
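Greedy best-first search on the Romania map can be sketched as below. This is an illustrative implementation, not code from the slides; it uses a fragment of the standard Romania road map and h_SLD table, and a repeated-state check to avoid the Iasi/Neamt-style loops discussed on the next slide.

```python
import heapq

# Fragment of the Romania road map (step costs in km) and the
# straight-line-distance heuristic h_SLD to Bucharest.
ROADS = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80), ('Oradea', 151)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
    'Oradea': [('Sibiu', 151)], 'Bucharest': [],
}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Oradea': 380,
         'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    """Greedy best-first search: f(n) = h(n), ignoring the path cost so far."""
    frontier = [(H_SLD[start], start, [start])]
    visited = set()                        # repeated-state check prevents loops
    while frontier:
        _, city, path = heapq.heappop(frontier)
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt, _km in ROADS[city]:
            if nxt not in visited:
                heapq.heappush(frontier, (H_SLD[nxt], nxt, path + [nxt]))
    return None
```

From Arad, greedy search follows Sibiu (253) and then Fagaras (176) straight to Bucharest, a 450 km route, even though the 418 km route through Rimnicu Vilcea and Pitesti is shorter: good illustration of why greedy search is not optimal.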
Properties of Greedy Search
- Complete? No – it can get stuck in loops by following a single path toward the goal, e.g. Iasi > Neamt > Iasi > Neamt > … It is complete in finite spaces with repeated-state checking.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m) – keeps all nodes in memory.
- Optimal? No.
A* Search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n), with:
- g(n) – cost so far to reach n
- h(n) – estimated cost from n to the goal
- f(n) – estimated total cost of the path through n to the goal
A* search uses an admissible heuristic, that is, h(n) ≤ h*(n), where h*(n) is the true cost from n. Example: h_SLD(n) never overestimates the actual road distance.
Theorem: A* search is optimal.
- When h(n) = actual cost to goal: only nodes on the correct path are expanded; the optimal solution is found.
- When h(n) < actual cost to goal: additional nodes are expanded, but the optimal solution is still found.
- When h(n) > actual cost to goal: the optimal solution can be overlooked.
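A* itself is a small change to the greedy sketch: rank the fringe by g + h instead of h alone. The sketch below is illustrative, self-contained (it repeats the same Romania map fragment shown on the earlier slide), and uses a closed set, which is safe here because h_SLD is consistent.

```python
import heapq

# Same fragment of the Romania map and h_SLD as on the earlier slides.
ROADS = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80), ('Oradea', 151)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
    'Oradea': [('Sibiu', 151)], 'Bucharest': [],
}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Oradea': 380,
         'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    """A* search: f(n) = g(n) + h(n) with the admissible heuristic h_SLD.
    Returns (path, cost) of an optimal solution, or (None, inf)."""
    frontier = [(H_SLD[start], 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while frontier:
        _f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        if city in closed:
            continue
        closed.add(city)
        for nxt, km in ROADS[city]:
            if nxt not in closed:
                heapq.heappush(frontier,
                               (g + km + H_SLD[nxt], g + km, nxt, path + [nxt]))
    return None, float('inf')
```

Unlike greedy search, A* from Arad finds the optimal 418 km route through Rimnicu Vilcea and Pitesti: the Fagaras route is generated first (f = 450) but the Pitesti route (f = 418) is expanded before it.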
Optimality of A* (standard proof)
Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1. Then f(G2) = g(G2) (since h(G2) = 0) and g(G2) > g(G1) (since G2 is suboptimal), while f(n) = g(n) + h(n) ≤ g(G1) (since h is admissible). Hence f(n) < f(G2), so A* will expand n before G2.
Consistent Heuristics
A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n, a, n') + h(n').
That is, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n'.
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n),
i.e. if h(n) is consistent then the values of f(n) along any path are nondecreasing.
Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
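Since the definition quantifies over every edge, consistency is easy to check mechanically on a finite graph. A minimal sketch (illustrative helper, not from the slides):

```python
def is_consistent(h, edges):
    """Check the consistency (monotonicity) condition
    h(n) <= c(n, a, n') + h(n') for every edge.
    `edges` is a list of (n, n_prime, step_cost) triples;
    `h` maps each state to its heuristic estimate."""
    return all(h[n] <= cost + h[n_prime] for n, n_prime, cost in edges)
```

A two-edge chain A -> B -> G with unit costs shows that consistency is strictly stronger than admissibility: h = {A: 2, B: 0, G: 0} is admissible (true costs are 2, 1, 0) yet inconsistent, because h(A) = 2 > 1 + h(B) = 1, so f would decrease from A to B.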
Optimality of A* (more useful proof)
f-contours
How do the contours look when h(n) = 0? With f = g, A* degenerates to uniform-cost search, and the bands circulate around the initial state. With A* search, the bands stretch toward the goal, and they become narrowly focused around the optimal path as more accurate heuristics are used.
Properties of A* Search
- Complete? Yes – unless there are infinitely many nodes with f ≤ f(G).
- Time? Exponential: O(b^d) in the worst case; roughly exponential in (relative error in h) × (length of solution).
- Space? O(b^d) – keeps all nodes in memory.
- Optimal? Yes – A* cannot expand the contour f_{i+1} until the contour f_i is finished.
Admissible Heuristics
Relaxed Problem
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.
Dominance
Definition: if h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
For the 8-puzzle, h2 indeed dominates h1:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance
If h2 dominates h1, then h2 is better for search. Typical search costs for the 8-puzzle (IDS = iterative deepening search):
- d = 14: IDS = 3,473,941 nodes; A*(h1) = 539 nodes; A*(h2) = 113 nodes
- d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
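The two 8-puzzle heuristics can be sketched as below. This is an illustrative implementation with one assumed convention: states are 9-tuples read row by row, 0 is the blank, and the goal places the blank first, so tile t's goal square is row t // 3, column t % 3.

```python
# 8-puzzle states as 9-tuples read row by row; 0 is the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """h1: number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """h2: total Manhattan distance of the tiles from their goal squares.
    With this goal, tile t belongs at index t, i.e. row t // 3, col t % 3."""
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(state) if t != 0)
```

On the scrambled state 7 2 4 / 5 _ 6 / 8 3 1, every tile is misplaced (h1 = 8) while the Manhattan distances sum to 18 (h2 = 18), so h2's estimate is much tighter, which is exactly why it dominates and prunes more of the search.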
Memory-Bounded Search
- Iterative-Deepening A* Search
- Recursive Best-First Search
- Simplified Memory-Bounded A* Search
Iterative-Deepening A* Search (IDA*)
The idea of iterative deepening is adapted to the heuristic search context to reduce memory requirements. At each iteration, DFS is performed using the f-cost (g + h) as the cutoff rather than the depth. The cutoff for the next iteration is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
Properties of IDA*
- IDA* is complete and optimal.
- Space complexity: O(b · f(G)/δ) ≈ O(bd), where δ is the smallest step cost and f(G) is the optimal solution cost.
- Time complexity: O(α · b^d), where α is the number of distinct f-values smaller than the optimal goal's.
- Between iterations, IDA* retains only a single number: the f-cost cutoff.
- IDA* has difficulties in implementation when dealing with real-valued costs.
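The cutoff mechanism above can be sketched as follows. This is a minimal illustration under assumed interfaces (`successors` yields (child, step_cost) pairs, `h` is an admissible heuristic); real implementations would avoid copying the path and would handle repeated states more carefully.

```python
def ida_star(start, goal, successors, h):
    """IDA*: repeated depth-first searches with the f-cost g + h as the
    cutoff. The next cutoff is the smallest f-value that exceeded the
    current one, so only one number is kept between iterations."""
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                    # exceeded the cutoff: report f
        if state == goal:
            return f, path
        next_bound = float('inf')
        for child, cost in successors(state):
            if child in path:                 # avoid cycles on the current path
                continue
            t, found = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)   # track smallest overrunning f
        return next_bound, None

    bound = h(start)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                       # no solution exists
        bound = t                             # smallest f that exceeded old cutoff
```

Note how the real-valued-cost difficulty shows up directly: if every path has a slightly different f-value, each iteration raises the bound by a tiny amount and adds only a few new nodes.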
Recursive Best-First Search (RBFS)
RBFS attempts to mimic best-first search while using only linear space. It can be implemented as a recursive algorithm that keeps track of the f-value of the best alternative path available from any ancestor of the current node. If the current node's f-value exceeds this limit, the recursion unwinds back to the alternative path; as it unwinds, the f-value of each node along the path is replaced with the best f-value of its children.
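A compact sketch of RBFS follows. This is an illustrative rendering of the idea, not code from the slides; it assumes the same `successors`/`h` interfaces as the IDA* sketch and uses the path-max trick (a child inherits its parent's backed-up f-value if that is larger).

```python
def rbfs(start, goal, successors, h):
    """Recursive best-first search in linear space. Each recursive call
    carries an f-limit: the f-value of the best alternative path from an
    ancestor. When the best child's f exceeds the limit, the recursion
    unwinds and the node's stored f is replaced by its best child's f."""
    INF = float('inf')

    def search(state, g, f_state, f_limit, path):
        if state == goal:
            return path, f_state
        children = []
        for child, cost in successors(state):
            if child in path:                       # avoid cycles on this path
                continue
            # path-max: inherit the parent's backed-up value if larger
            children.append([max(g + cost + h(child), f_state), g + cost, child])
        if not children:
            return None, INF
        while True:
            children.sort()                         # best (lowest-f) child first
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]                # unwind to the alternative
            alternative = children[1][0] if len(children) > 1 else INF
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative),
                                     path + [best[2]])
            if result is not None:
                return result, best[0]

    result, _ = search(start, 0, h(start), INF, [start])
    return result
```

The re-sorting after each recursive call is where the "mind changes" mentioned on the next slide happen: if the backed-up f of the current best child rises above the alternative's, RBFS abandons it and re-expands the alternative, possibly regenerating states it has already seen.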
Example: The Route-Finding Problem
Properties of RBFS
- RBFS is complete and optimal.
- Space complexity: O(bd).
- Time complexity: O(b^d) in the worst case; it depends on the heuristic and on the frequency of "mind changes", since the same states may be explored many times.
Simplified Memory-Bounded A* Search (SMA*)
SMA* makes use of all available memory M to carry out A*:
- It expands the best leaf, like A*, until memory is full.
- When memory is full, it drops the worst leaf node (the one with the highest f-value).
- Like RBFS, it backs up the value of the forgotten node to its parent if it is the best in its parent's subtree.
- When all of a node's children have been dropped, the parent is put back on the fringe for further expansion.
Properties of SMA*
- Complete if M ≥ d.
- Optimal if M ≥ d.
- Space complexity: O(M).
- Time complexity: O(b^d) in the worst case.
Summary
This chapter has examined the application of heuristics to reduce search costs. We have looked at a number of algorithms that use heuristics, and found that optimality comes at a stiff price in terms of search cost, even with good heuristics.
- Best-first search is just GENERAL-SEARCH where the minimum-cost nodes (according to some measure) are expanded first.
- If we minimize the estimated cost to reach the goal, h(n), we get greedy search. The search time is usually decreased compared to an uninformed algorithm, but the algorithm is neither optimal nor complete.
- Minimizing f(n) = g(n) + h(n) combines the advantages of uniform-cost search and greedy search. If we handle repeated states and guarantee that h(n) never overestimates, we get A* search.
A* is complete, optimal, and optimally efficient among all optimal search algorithms. Its space complexity is still prohibitive. The time complexity of heuristic algorithms depends on the quality of the heuristic function. Good heuristics can sometimes be constructed by examining the problem definition or by generalizing from experience with the problem class. We can reduce the space requirement of A* with memory-bounded algorithms such as IDA* (iterative deepening A*) and SMA* (simplified memory-bounded A*).