1
ARTIFICIAL INTELLIGENCE Dr. Seemab Latif Lecture No. 4
2
Review of Previous Lecture-1
- Using search to solve problems: the process of looking for a sequence of actions that reaches the goal is called searching.
- Goal formulation and problem formulation for a search algorithm (formulate, search, execute).
- A problem can be defined by five components: initial state, actions, transition model, goal test, path cost.
- The state space of the problem is the set of all states reachable from the initial state by any sequence of actions.
3
Review of Previous Lecture-2
- Tree search and graph search algorithms: a tree search algorithm keeps no record of visited states, while a graph search algorithm records them so that repeated states can be detected.
- All search algorithms share the same basic structure; they vary primarily in how they choose which state to expand next, known as the search strategy.
- Evaluation criteria for a search strategy: completeness, time complexity, space complexity, optimality.
4
Review of Previous Lecture-3
Search strategy classification:
- Uninformed search strategies have no additional information about states beyond that provided in the problem definition:
  - Breadth-first search (open list is a FIFO queue)
  - Uniform-cost search (cheapest node first)
  - Depth-first search (open list is a LIFO queue)
  - Depth-limited search (DFS with a cutoff)
  - Iterative-deepening search (incrementing cutoff)
  - Bi-directional search (forward and backward)
- Informed search strategies
5
Informed/Heuristic Search
Informed search methods use problem-specific knowledge to improve average search performance. While uninformed search methods can in principle find solutions to any state-space problem, they are typically too inefficient to do so in practice.
6
What are Heuristics?
- Heuristic: problem-specific knowledge that reduces expected search effort.
- Informed search uses a heuristic evaluation function that denotes the relative desirability of expanding a node/state; it often includes some estimate of the cost to reach the nearest goal state from the current state.
- In blind (uninformed) search techniques, such knowledge can be encoded only via the state-space and operator representation.
7
Examples of Heuristics
- Travel planning: Euclidean (straight-line) distance
- 8-puzzle: Manhattan distance, number of misplaced tiles
- Travelling salesman problem: minimum spanning tree
Where do heuristics come from?
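A rough sketch of two of these in Python (coordinates purely hypothetical): straight-line (Euclidean) distance for travel planning, and the minimum-spanning-tree weight over the remaining cities, a classic admissible lower bound for the travelling salesman problem.

```python
import math

def euclidean(p, q):
    # Straight-line distance: an admissible estimate for travel planning.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mst_weight(points):
    # Prim's algorithm: total weight of a minimum spanning tree over the
    # given cities; it never overestimates the cost of the cheapest tour.
    if not points:
        return 0.0
    best = {p: euclidean(points[0], p) for p in points[1:]}
    total = 0.0
    while best:
        nxt = min(best, key=best.get)     # cheapest city to connect next
        total += best.pop(nxt)
        for p in best:                    # relax remaining connection costs
            best[p] = min(best[p], euclidean(nxt, p))
    return total

cities = [(0, 0), (3, 4), (6, 0)]         # hypothetical coordinates
print(euclidean(cities[0], cities[1]))    # 5.0
print(mst_weight(cities))                 # 10.0
```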
8
Heuristics from Models
- Heuristics can be generated from simplified (relaxed) models of the problem.
- Simplification can be modelled as deleting constraints on operators/actions.
- Key property: the heuristic can be calculated efficiently.
9
Informed Search Strategies
- Best-first search: greedy best-first search, A*, ordered depth-first search
- Memory-bounded search: iterative-deepening A* (IDA*), simplified memory-bounded A* (SMA*), recursive best-first search (RBFS)
- Time-bounded search: anytime A*, RTA* (interleaves searching and acting)
- Iterative improvement algorithms (generate-and-test approaches): steepest-ascent hill climbing, random-restart hill climbing, simulated annealing
- Multi-level/multi-dimensional search: hierarchical A*, blackboard
10
One Way of Introducing Heuristic Knowledge into Search: the Heuristic Evaluation Function
A heuristic evaluation function h : Ψ → R, where Ψ is the set of all states and R is the set of real numbers, maps each state s in the state space Ψ to a measurement h(s), an estimate of the cost of the cheapest path from s to a goal node.
Example: node A has three children with h(s1) = 0.8, h(s2) = 2.0, h(s3) = 1.6; each value estimates the cost remaining to the goal. Continuing the search at s1, since h(s1) is the smallest, is 'heuristically' the best choice.
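A minimal illustration of this choice, using the h values from the example above:

```python
h = {"s1": 0.8, "s2": 2.0, "s3": 1.6}   # estimates for A's three children

# Continue the search at the child with the smallest estimated remaining cost.
best_child = min(h, key=h.get)
print(best_child)                        # s1
```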
11
Best-First Search
- An instance of the Tree-Search or Graph-Search algorithm.
- A node is selected for expansion based on an evaluation function f(n).
- The evaluation function is taken as a cost estimate, so the node with the lowest evaluation is expanded first.
- The implementation of best-first graph search is identical to that of uniform-cost search, except that f is used instead of g to order the priority queue.
12
Best-First Search
- Idea: use an evaluation function for each node that estimates its "desirability"; expand the most desirable unexpanded node.
- Implementation: the open list is kept sorted by desirability, most desirable node first.
- It combines depth-first search (DFS) and breadth-first search (BFS): go depth-first until the current path is no longer the most promising one (lowest expected cost), then back up and pursue other paths that were previously promising but not pursued (and are now the most promising).
- At each search step it pursues, in a breadth-first manner, the path that has the lowest expected cost.
13
Best-First Search
1. Start with the OPEN list containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN:
   - Pick the best node (based on the heuristic function) on OPEN.
   - If it is a goal node, return the solution; otherwise place the node on the CLOSED list.
   - Generate its successors. For each successor:
     - If it has not been generated before (i.e., it is not on CLOSED), evaluate it, add it to OPEN, and record its parent.
     - If it has been generated before, change the recorded parent if this new path is better than the previous one; in that case, update the cost of getting to this node and to any successors this node may already have.
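A compact Python sketch of this loop, assuming the graph is given as an adjacency dict and the heuristic as a dict of estimates (hypothetical representations, not from the slides). For simplicity it orders OPEN purely by h(n) and omits the parent-update step for nodes that are re-generated via a better path.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Best-first graph search ordered by h(n).
    graph: {state: [(successor, step_cost), ...]}
    h:     {state: heuristic estimate of the remaining cost}"""
    open_list = [(h[start], start)]        # OPEN: priority queue, best h first
    parent = {start: None}                 # records how each node was reached
    closed = set()                         # CLOSED: nodes already expanded
    while open_list:
        _, node = heapq.heappop(open_list) # pick the best node on OPEN
        if node == goal:                   # goal found: rebuild the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        if node in closed:
            continue
        closed.add(node)
        for succ, _cost in graph.get(node, []):
            if succ not in parent:         # not generated before
                parent[succ] = node
                heapq.heappush(open_list, (h[succ], succ))
    return None                            # OPEN exhausted: no solution
```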
14
Greedy Best-First Search
- A simple form of best-first search.
- The heuristic evaluation function h(n) estimates the cost from n to the closest goal; for example, the straight-line distance from city n to the goal city.
- Greedy search expands the node (on the OPEN list) that appears to be closest to the goal.
15
Road Map of Romania (figure): the solution found by greedy best-first search versus the optimal solution.
16
Greedy Search
Path cost of the route found by greedy search: 140 + 99 + 211 = 450.
Minimum path cost: 140 + 80 + 97 + 101 = 418.
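These numbers can be reproduced by reusing the best_first_search sketch shown earlier with just the cities on the two routes above. The step costs come from the slide; the straight-line distances to Bucharest are assumed to be the usual textbook values for the Romania map.

```python
graph = {
    "Arad":           [("Sibiu", 140)],
    "Sibiu":          [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras":        [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti":        [("Bucharest", 101)],
    "Bucharest":      [],
}
# Straight-line distances to Bucharest (assumed standard values).
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

path = best_first_search(graph, h, "Arad", "Bucharest")
cost = sum(c for a, b in zip(path, path[1:]) for s, c in graph[a] if s == b)
print(path, cost)   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] 450, not 418
```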
17
Evaluation of Greedy Search
- Complete? No: it can get stuck in loops if a CLOSED list is not maintained.
- Time? O(b^m), where m is the maximum depth of the search space, but a good heuristic can give dramatic improvement.
- Space? O(b^m): it keeps all nodes in memory.
- Optimal? No (the minimum-cost path in the example is 418, yet greedy returns the 450 path).
18
GBFS is not Complete (figure): example graph with nodes a, b, c, d, g and marked start and goal states; h(n) = straight-line distance.
19
A* (A-Star Search Algorithm)
- A* uses best-first search and finds the least-cost path from a given initial node to one goal node (out of one or more possible goals).
- It is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes.
- It is an extension of Edsger Dijkstra's algorithm.
20
A* Search: Minimizing Total Path Cost
- Greedy search minimizes the estimated cost to the goal, h(n); it is neither optimal nor complete.
- Uniform-cost search minimizes the cost of the path so far, g(n); it is optimal and complete but can be very inefficient.
- A* combines greedy search's h(n) and uniform-cost search's g(n) to give f(n), which estimates the cost of the cheapest solution through n.
- A* is similar to best-first search except that the evaluation is based on the total path (solution) cost: f(n) = g(n) + h(n), where g(n) is the cost of the path from the initial state to n and h(n) is the estimated cost from n to the goal.
21
A* Search: Minimizing Total Path Cost
h(n) must be an admissible heuristic, i.e., it must not overestimate the distance to the goal. For an application like routing, h(n) might be the straight-line distance to the goal, since that is physically the smallest possible distance between the two points.
22
A* Concept
As A* traverses the graph, it follows a path of the lowest known cost, keeping a sorted priority queue of alternative path segments along the way. If, at any point, a segment of the path being traversed has a higher cost than another encountered path segment, it abandons the higher-cost segment and traverses the lower-cost one instead. This process continues until the goal is reached.
23
A* Search Algorithm
Starting with the initial node, A* maintains a priority queue of nodes to be traversed, known as the open set. The lower f(n) is for a node n, the higher its priority. At each step of the algorithm, the node with the lowest f value is removed from the queue, the g and f values of its neighbours are updated accordingly, and these neighbours are added to the queue. The algorithm continues until a goal node has a lower f value than any node in the queue (or until the queue is empty). Goal nodes may be passed over multiple times if other nodes with lower f values remain, as those may still lead to a shorter path to a goal. The f value of the goal is then the length of the shortest path, since h at the goal is zero for an admissible heuristic. If the actual shortest path is desired, the algorithm can also record each node's immediate predecessor on the best path found so far; this information can then be used to reconstruct the path by working backwards from the goal node.
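A sketch of this procedure in Python, assuming the same adjacency-dict graph and heuristic-dict representation as before (a minimal illustration, not an optimised implementation):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* graph search: expand the open node with the lowest f = g + h.
    graph: {state: [(successor, step_cost), ...]}
    h:     {state: admissible estimate of the remaining cost}"""
    g = {start: 0}                           # cheapest known cost from the start
    parent = {start: None}                   # predecessor on that cheapest path
    open_set = [(h[start], start)]           # priority queue ordered by f
    while open_set:
        f, node = heapq.heappop(open_set)
        if f > g[node] + h[node]:            # stale entry (a cheaper path was found)
            continue
        if node == goal:                     # goal has the lowest f in the queue
            path = []
            while node is not None:          # reconstruct path via predecessors
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        for succ, cost in graph.get(node, []):
            tentative = g[node] + cost
            if tentative < g.get(succ, float("inf")):
                g[succ] = tentative          # cheaper path to succ: update it
                parent[succ] = node
                heapq.heappush(open_set, (tentative + h[succ], succ))
    return None, float("inf")                # queue empty: no path exists
```

On the Romania fragment defined earlier, a_star(graph, h, "Arad", "Bucharest") returns the 418-cost route through Rimnicu Vilcea and Pitesti rather than the 450-cost route that greedy search finds.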
24
A* vs Greedy Best-First Search
What sets A* apart from greedy best-first search is that it also takes the distance already travelled into account: the g(n) term in its evaluation function is the cost from the start, not simply the local cost from the previously expanded node.
25
Greedy: Complete, but not Optimal
Manhattan Distance Heuristic: the distance between two points measured along axes at right angles. In a plane with p1 at (x1, y1) and p2 at (x2, y2), it is |x1 - x2| + |y1 - y2|.
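A one-function sketch of this definition (coordinates hypothetical):

```python
def manhattan(p1, p2):
    # Distance measured along axes at right angles: |x1 - x2| + |y1 - y2|.
    (x1, y1), (x2, y2) = p1, p2
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((1, 2), (4, 6)))   # 3 + 4 = 7
```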
26
h Values (figure)
27
g-Values (figure)
28
f = g + h (figure)
31
Admissible Heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: h_SLD(n), the straight-line distance, never overestimates the actual road distance.
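One way to sanity-check admissibility on a small explicit graph (a sketch using the same graph/heuristic dict conventions as before): compute the true remaining cost h*(n) for every node by running Dijkstra's algorithm backwards from the goal, then verify h(n) ≤ h*(n).

```python
import heapq

def true_costs_to_goal(graph, goal):
    # Dijkstra from the goal over reversed edges gives h*(n), the true
    # cheapest cost from n to the goal, for every node that can reach it.
    reverse = {}
    for n, edges in graph.items():
        for m, cost in edges:
            reverse.setdefault(m, []).append((n, cost))
    dist = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        d, n = heapq.heappop(frontier)
        if d > dist.get(n, float("inf")):
            continue                          # stale queue entry
        for m, cost in reverse.get(n, []):
            if d + cost < dist.get(m, float("inf")):
                dist[m] = d + cost
                heapq.heappush(frontier, (d + cost, m))
    return dist

def is_admissible(h, graph, goal):
    # Admissible means h never overestimates: h(n) <= h*(n) for every node.
    h_star = true_costs_to_goal(graph, goal)
    return all(h.get(n, 0.0) <= h_star.get(n, float("inf")) for n in graph)

# Reusing the Romania fragment defined earlier:
# print(is_admissible(h, graph, "Bucharest"))   # True
```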
32
Admissible Heuristics
For the 8-puzzle (average solution cost of a random instance ≈ 22 steps, true solution cost of the example instance = 26, branching factor ≈ 3):
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location, summed over all tiles)
h1(S) = ?
h2(S) = ?
33
Admissible Heuristics
For the 8-puzzle (average solution cost of a random instance ≈ 22 steps, true solution cost of the example instance = 26, branching factor ≈ 3):
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location, summed over all tiles)
h1(S) = 8
h2(S) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
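A sketch computing both heuristics, assuming the start and goal states are the standard textbook example (which matches the numbers above); states are 9-tuples read row by row, with 0 for the blank.

```python
# Assumed example instance: 0 marks the blank square.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state):
    # Total Manhattan distance of every tile from its goal square.
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(h1(start), h2(start))   # 8 18
```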
34
Dominance
- If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
- h2 is better for search: it is guaranteed to expand no more nodes than h1.
- Typical search costs (average number of nodes expanded):
  - d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
  - d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
(IDS = iterative deepening search)
36
Monotonicity (Consistency)
A heuristic is monotone (consistent) if, for every node n and every successor n' reached from n by action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':
h(n) ≤ cost(n, a, n') + h(n')
The deeper you go along a path, the better (or at least as good) the estimate of the distance to the goal state; the f value (which is g + h) never decreases along any path.
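A small sketch that checks this triangle-inequality-style condition over every edge of an explicit graph (same adjacency-dict and heuristic-dict conventions as before):

```python
def is_consistent(h, graph):
    # h(n) <= cost(n, a, n') + h(n') must hold for every edge (n, n').
    return all(h[n] <= cost + h[succ]
               for n, edges in graph.items()
               for succ, cost in edges)

# Reusing the Romania fragment defined earlier:
# print(is_consistent(h, graph))   # True: straight-line distance is consistent
```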
37
Consistent h ⇒ Monotone f (figure)
38
Properties of A*
- Complete: yes, unless there are infinitely many nodes with f(n) < C* (the path cost of the optimal solution). A* is complete when there is a positive lower bound on the cost of operators and the branching factor is finite.
- A* is maximally efficient: for a given heuristic function, no other optimal algorithm is guaranteed to do less work in terms of nodes expanded. Aside from ties in f, A* expands every node necessary to prove that the path found is the shortest, and no other nodes.
- Time: not a drawback.
39
Properties of A*
- Space: as A* keeps all generated nodes in memory, it usually runs out of space long before it runs out of time, so it is not practical for many large-scale problems.
- Optimal? Admissible heuristics are optimistic by nature: they assume the cost of solving the problem is less than it actually is. The tree-search version of A* is optimal if h(n) is admissible; the graph-search version is optimal if h(n) is consistent.
40
IDA* - Iterative Deepening A* (Space/Time Trade-off)
- A* requires an open (and closed) list for remembering nodes, which can lead to very large storage requirements.
- IDA* exploits the use of a monotone f: since f(n) ≤ f(n') for each node n' after n on a path, the search can be carved into incremental subspaces, each searched depth-first with much less storage.
- The key issue is how much extra computation this costs: the worse f underestimates the true cost, the more iterations are needed.
- Worst case: N node expansions for A* versus N^2 for IDA*.
41
IDA* - Iterative Deepening A*
Beginning with an f-cost (g + h) bound equal to the f-cost of the initial state, perform a depth-first search bounded by the f-cost instead of a depth cutoff. Unless the goal is found, increase the bound to the lowest f-cost found in the previous search that exceeds the previous bound, and restart the depth-first search.
42
Iterative-Deepening-A* Algorithm
1) Set THRESHOLD to the heuristic evaluation of the start state (its f-cost, g + h).
2) Conduct a depth-first search, ordered by minimal cost from the current node, pruning any branch when its total cost function (g + h) exceeds THRESHOLD. If a solution path is found during the search, return it as the optimal solution.
3) Otherwise, increment THRESHOLD by the minimum amount it was exceeded during the previous step, and go to Step 2.
The start state is always on the path, so the initial estimate is always an underestimate and never decreases.
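A compact sketch of these steps, using the same graph/heuristic dict conventions as before (plain recursion, so it assumes paths stay reasonably short):

```python
def ida_star(graph, h, start, goal):
    """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h."""
    def dfs(node, g_cost, threshold, path):
        f = g_cost + h[node]
        if f > threshold:               # prune: this branch exceeds the bound
            return None, f
        if node == goal:
            return path, f
        next_threshold = float("inf")   # smallest f seen beyond the bound
        for succ, cost in graph.get(node, []):
            if succ in path:            # avoid cycles along the current path
                continue
            found, t = dfs(succ, g_cost + cost, threshold, path + [succ])
            if found is not None:
                return found, t
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    threshold = h[start]                # step 1: initial bound is f(start)
    while True:
        found, t = dfs(start, 0, threshold, [start])
        if found is not None:           # step 2: a solution within the bound
            return found
        if t == float("inf"):           # nothing left beyond the bound
            return None
        threshold = t                   # step 3: raise the bound minimally

# On the Romania fragment defined earlier, ida_star(graph, h, "Arad", "Bucharest")
# returns the optimal 418-cost route, re-searching the shallow part each iteration.
```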
43
Stages in an IDA* Search for B (figure): nodes are labelled with f = g + h; the h values are the straight-line distances to B.
44
Experimental Results on IDA*
- IDA* takes about the same time as A* but only O(d) space, versus O(b^d) for A*.
- It also avoids the overhead of a sorted queue of nodes.
- IDA* is simpler to implement: no closed list (and only a limited open list).
- In Korf's 15-puzzle experiments, IDA* solved all problems and ran faster than A*, even though it generated more nodes; A* solved no problems due to insufficient space and ran slower than IDA*.