1
STRUCTURES AND STRATEGIES FOR STATE SPACE SEARCH
2
INTRODUCTION We studied predicate calculus as an example of an artificial intelligence representational language. Well-formed predicate calculus expressions provide a means of describing objects and relations in a problem domain, and inference rules such as modus ponens allow us to infer new knowledge from these descriptions. These inferences define a space that is searched to find a solution to the problem. In this session we discuss the theory of state space search, which is our primary tool for the successful design and implementation of search algorithms. By representing a problem as a state space graph, we can use graph theory to analyze the structure and complexity of both the problem and the procedures used to solve it.
3
A graph consists of a set of nodes and a set of arcs or links connecting pairs of nodes. In the state space model of problem solving, the nodes of a graph represent discrete states of a problem-solving process, such as:- –Results of logical inferences. –Configurations of a game board. The arcs of the graph represent transitions between states, or the act of applying a rule, such as:- –Logical inferences. –Legal moves of a game.
4
BRIDGES OF KONIGSBERG Graph theory is our best tool for reasoning about the structure of objects and relations. It was invented by the Swiss mathematician Leonhard Euler in the eighteenth century to solve the Bridges of Königsberg problem. The city of Königsberg occupied both banks and two islands of a river. The islands and the riverbanks were connected by seven bridges. The problem asks whether there is a walk around the city that crosses each bridge exactly once. To solve the problem, Euler created an alternative representation for the map:- –The riverbanks and the islands are described by the nodes of a graph, and the bridges are represented by labeled arcs between nodes: Riverbanks - rb1, rb2; Islands - i1, i2; Bridges - b1, b2, …, b7.
5
Figure 3.1: The city of Königsberg.
6
Figure 3.2: Graph of the Königsberg bridge system.
8
Figure 3.3: A labeled directed graph.
9
STATE SPACE SEARCH Stated generally, search involves trying to find a particular object from among a large number of such objects. A search space defines the set of objects that we are interested in searching among. A few examples: Board games. The system searches for a move that is likely to lead to a winning configuration. Scheduling systems. Systems that schedule factory timings, requirements and timings of material inputs, production schedules, employment of personnel, machine maintenance schedules, etc., against operational constraints such as manpower availability, delays in supplies, machine breakdowns, and production deadlines. Theorem proving. Given the axioms and the rules of inference, the objective is to find a proof whose last line is the formula that one wishes to prove.
11
Goal A goal may be described as a state, such as:- * A winning board in the TIC-TAC-TOE game. * A property of a solution path, e.g. a shortest path or route. Arcs Arcs are steps in a solution process. Paths through the space represent solutions in various stages of completion. Paths are searched, beginning at the start state and continuing through the graph, until either the goal description is satisfied or the search is abandoned. Search Algorithm The task of a search algorithm is to find a solution path through a problem space.
13
TIC-TAC-TOE *There are 3^9 (19,683) ways to arrange (Blank, X, O) in nine spaces. *The state space is a graph. *The graph is a “directed acyclic graph”. Complexity of the Problem (number of possible move paths) *Nine possible first moves. *Eight possible responses to each first move at the 2nd level. *Seven possible responses to each 2nd-level move at the 3rd level. And so on … *So 9 x 8 x 7 …, or 9! = 362,880, possible game paths can be generated. *Chess has about 10^120 possible game paths. *Checkers has about 10^40 possible game paths. Problem spaces with such large numbers of possible paths are difficult or impossible to search exhaustively. Strategies must be defined to reduce the complexity of the problem. These strategies, however, rely on heuristics to reduce the complexity of the search.
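As a quick check of the counts above, a few lines of Python (used here purely as illustration) reproduce both figures:

```python
from math import factorial

# Distinct labelings of nine cells with one of three symbols (Blank, X, O).
print(3 ** 9)          # 19683

# Move sequences: 9 choices for the first move, then 8, then 7, ...
print(factorial(9))    # 362880, i.e. 9 x 8 x 7 x ... x 1
```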
14
Figure 3.6: State space of the 8-puzzle generated by “move blank” operations.
15
Example: The 8-PUZZLE. The blank may be moved up, down, left, or right, so each state has at most 4 possible moves. The state space is a graph. More than one path may exist to a given node (state), i.e. most states may have multiple parents. Therefore cycles are possible. The goal description (GD) is a particular state or board configuration. When this state is found on a path, the search terminates. The path from START to the GOAL is the desired series of moves.
16
The Traveling Salesperson A salesperson needs to visit five cities and then return home. GD: Find the shortest possible path for the salesperson to travel to the cities and then return home. The salesperson lives in city A and will return to city A, so the state space consists of the (N-1) remaining nodes to order. One possible path: A, D, C, B, E, A = 450 miles. The GD requires a complete circuit with minimum distance covered; the GD in this example is therefore a property of the entire path, rather than of a single state. Complexity The complexity of exhaustive search in this problem is (N-1)!, i.e. 5! (120) paths. For a small number of cities exhaustive search is possible, but for problem instances where N = 50, for example, simple exhaustive search cannot be resorted to: the N! search space grows so fast that the combinations very soon become intractable.
17
Figure 3.7: An instance of the traveling salesperson problem.
18
Figure 3.8: Search of the traveling salesperson problem. Each arc is marked with the total weight of all paths from the start node (A) to its endpoint.
19
Techniques to Reduce Complexity Branch and Bound. * Generates paths one at a time. * Keeps track of the best circuit found so far. * This value is used as a bound for future candidates. * As paths are constructed one city at a time, the algorithm examines each partially completed path. * If the algorithm determines that the length of a partially completed path is greater than the bound, work on that path is abandoned. * This reduces the search considerably.
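The following Python sketch illustrates the branch-and-bound idea just described; the function name and the small distance matrix are illustrative assumptions, not the textbook's code or data.

```python
import math

def branch_and_bound_tsp(dist, start=0):
    """Find the cheapest complete circuit over all cities.

    dist is a matrix of positive distances; cities are numbered 0..n-1.
    Partial paths whose cost already exceeds the best complete circuit
    found so far are pruned, as described above.
    """
    n = len(dist)
    best_cost, best_tour = math.inf, None

    def extend(path, cost):
        nonlocal best_cost, best_tour
        if cost >= best_cost:              # bound: prune this partial path
            return
        if len(path) == n:                 # all cities visited: close the circuit
            total = cost + dist[path[-1]][start]
            if total < best_cost:
                best_cost, best_tour = total, path + [start]
            return
        for city in range(n):
            if city not in path:
                extend(path + [city], cost + dist[path[-1]][city])

    extend([start], 0)
    return best_cost, best_tour

# Hypothetical 5-city instance (distances invented for illustration).
D = [[0, 7, 9, 8, 5],
     [7, 0, 10, 4, 11],
     [9, 10, 0, 15, 5],
     [8, 4, 15, 0, 12],
     [5, 11, 5, 12, 0]]
print(branch_and_bound_tsp(D))
```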
20
Branch and Bound: The Traveling Salesperson Problem (N = 6). (Diagram of the partial search tree with arc costs such as 50, 75, 100, 125.) Best circuit found: PATH: A, E, D, B, C, A = 375.
21
Techniques to Reduce Complexity Nearest Neighbor. Travel to the nearest unvisited city first. For example, in the traveling salesperson problem the nearest-neighbor path is A, E, D, B, C, A (path length = 550 miles). A highly efficient method (only one path is tried). Still, in some situations this method may not provide the shortest path, but it offers a possible compromise when the time required makes exhaustive search impractical. In this traveling salesperson instance, the technique does not provide the shortest path.
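A minimal sketch of the nearest-neighbor heuristic; the function name is an assumption, and it can be run on a distance matrix like the hypothetical D from the previous sketch.

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy tour: repeatedly travel to the nearest unvisited city,
    then return to the start city to close the circuit."""
    n = len(dist)
    tour, cost = [start], 0
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[here][city])
        cost += dist[here][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
    cost += dist[tour[-1]][start]      # return home
    return cost, tour + [start]
```

Only one tour is ever constructed, which is why the method is fast but may miss the optimum.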
22
Figure 3.9: An instance of the traveling salesperson problem with the nearest neighbor path in bold. Note that this path (A, E, D, B, C, A), at a cost of 550, is not the shortest path. The comparatively high cost of arc (C, A) defeated the heuristic.
23
STRATEGIES FOR STATE SPACE SEARCH Data-Driven and Goal-Driven Search A state space may be searched in two directions:- –From the given data of a problem instance toward a goal (data → goal). –From a goal back toward the data (goal → data).
24
Data-Driven Search (also called forward chaining) The problem solver begins with the given facts of the problem and a set of legal moves or rules for changing states. Search proceeds as follows:- –Rules are applied to facts to produce new facts. –The new facts are used by rules to produce more new facts. –The process continues until it generates a path that satisfies the goal condition. Data-driven search uses the knowledge and constraints found in the given data of a problem to guide search along lines known to be true.
25
Goal-Driven Search (also called backward chaining) Take the goal that we want to solve. See what rules or legal moves could be used to generate this goal, and determine what conditions must be true to use them. These conditions become the new goals, or subgoals, for the search. Search continues, working backward through successive subgoals, until it works back to the facts of the problem. Goal-driven search thus uses knowledge of the desired goal to guide the search through relevant rules and to eliminate branches of the space.
26
Both data-driven and goal-driven problem solvers search the same state space graph; however, the order and actual number of states searched can differ. The preferred strategy is determined by the properties of the problem itself. These include:- –The complexity of the rules. –The shape of the state space. –The nature and availability of the problem data.
27
Data-driven search is preferred for problems in which:- –All or most of the data are given in the initial problem statement. –There are a large number of potential goals, but only a few ways to use the facts and given information of a particular problem instance. –It is difficult to form a goal or hypothesis. Goal-driven search is preferred for problems in which:- –A goal or hypothesis is given in the initial problem statement or can easily be formulated. –There are a large number of rules that match the facts of the problem and thus produce an increasing number of conclusions or goals; early selection of a goal can eliminate most of these branches, making goal-driven search more effective. –Problem data are not given but must be acquired by the problem solver; in this case goal-driven search can help guide data acquisition.
28
Figure 3.10: State space in which goal-directed search effectively prunes extraneous search paths.
29
IMPLEMENTING GRAPH SEARCH In solving a problem using either goal-driven or data-driven search, a problem solver must find a path from a start state to a goal through the state space graph. The sequence of arcs in this path corresponds to the ordered steps of the solution. A problem solver must consider different paths until it finds a goal.
30
Backtracking Search The algorithm begins at the start state and pursues a path until it reaches either a goal or a dead end. If it finds a goal, it returns the path. If it finds a dead end:- –It backtracks to the most recent node on the path (node S). –It pursues a new path along one of the unexamined children of node S. –If backtracking does not find a goal in this subgraph, the procedure is repeated for all siblings of node S. –If none of the siblings leads to a goal, it backtracks to the parent of node S, and the procedure is applied to all of its siblings, and so on.
31
Fig: 3.12
32
ALGORITHM FOR BACKTRACK SEARCH An algorithm that performs a backtrack search uses three lists to keep track of nodes in the state space:- SL (State List). Lists the states in the current path being tried. If the goal is found, SL contains the ordered list of states on the solution path. NSL (New State List). Lists states awaiting evaluation, i.e. states whose descendants have not yet been generated and searched. DE (Dead Ends). Lists the states whose descendants have failed to contain a goal node. It is used to detect any re-entry of such a state. To avoid re-entering a state that has already occurred (and thus to avoid loops), each newly generated state is tested for membership in the above lists. If a new state belongs to any of these lists, it has already been visited and may be ignored.
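A runnable Python sketch of this backtrack procedure, modeled on the three lists above; the function names and the tiny example graph are assumptions made for illustration.

```python
def backtrack(start, successors, is_goal):
    """Backtrack search using SL, NSL and DE as described above.
    Returns SL (goal first, start last) or None on failure."""
    SL, NSL, DE, CS = [start], [start], [], start
    while NSL:
        if is_goal(CS):
            return SL                        # ordered states of the solution path
        children = [c for c in successors(CS)
                    if c not in DE and c not in SL and c not in NSL]
        if not children:                     # dead end: backtrack
            while SL and CS == SL[0]:
                DE.append(CS)                # record the dead end
                SL.pop(0)
                NSL.pop(0)
                if not NSL:
                    return None              # whole space searched, no goal
                CS = NSL[0]
            SL.insert(0, CS)
        else:
            NSL = children + NSL             # place new children on NSL
            CS = NSL[0]
            SL.insert(0, CS)
    return None

# Hypothetical graph given as adjacency lists:
graph = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G'],
         'D': [], 'E': [], 'F': [], 'G': []}
print(backtrack('A', lambda s: graph[s], lambda s: s == 'G'))  # ['G', 'C', 'A']
```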
33
A trace of breadth_first_search on the graph of Figure 3.13
34
Algorithm Trace of Algorithm
35
Depth-First and Breadth-First Search In addition to specifying a search direction (data-driven or goal-driven), a search algorithm must determine the order in which states are examined in the tree or the graph. We will consider two possibilities for the order in which the nodes of the graph are considered:- –Depth-first search. –Breadth-first search. In depth-first search, when a state is examined, all of its children and their descendants are examined before any of its siblings. Depth-first search goes deeper into the search space whenever this is possible. Only when no further descendants of a state can be found are its siblings considered. Depth-first search examines the states in the graph of Figure 3.13 in the order A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R. This search can be implemented with the backtrack algorithm.
36
Figure 3.13: Graph for breadth- and depth-first search examples.
37
Breadth-first search, in contrast, explores the space in a level-by-level fashion. Only when there are no more states to be explored at a given level does the algorithm move on to the next level. A breadth-first search of the graph of Figure 3.13 considers the states in the order A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U. We implement breadth-first search using two lists, open and closed, to keep track of progress through the state space. Open, like NSL in backtrack, lists states that have been generated but whose children have not been examined. The order in which states are removed from open determines the order of the search. Closed records states that have already been examined.
38
1. Algorithm breadth_first_search. 2. Trace of the algorithm. 3. Figure 3.14: graph highlighted. 4. Figure 3.15: 8-puzzle example.
39
Child states are generated by inference rules, legal moves of a game, or other state transition operators. Each iteration produces all children of the state X and adds them to open. Open is maintained as a queue, or first-in-first-out (FIFO) data structure. States are added to the right of the list and removed from the left. This biases search toward the states that have been on open the longest, causing the search to be breadth-first. Child states that have already been discovered (that already appear on either open or closed) are eliminated. If the algorithm terminates because the condition of the while loop is no longer satisfied (open = [ ]), then it has searched the entire graph without finding the desired goal: the search has failed.
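A minimal Python sketch of the breadth-first procedure just described, with open as a FIFO queue and closed as a set; the helper names successors and is_goal are assumptions, and this is an illustration rather than the textbook's breadth_first_search.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Breadth-first search with open as a FIFO queue and closed as a set."""
    open_q = deque([start])          # generated states whose children are unexamined
    closed = set()                   # states already examined
    while open_q:
        x = open_q.popleft()         # remove from the left
        if is_goal(x):
            return x
        closed.add(x)
        for child in successors(x):
            if child not in closed and child not in open_q:
                open_q.append(child) # add to the right (FIFO)
    return None                      # open is empty: the search has failed
```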
40
* Because breadth-first search considers every node at each level of the graph before going deeper into the space, all states are first reached along the shortest path from the start state. * Breadth-first search is therefore guaranteed to find the shortest path from the start state to the goal. * If the path is required for a solution, it can be returned by the algorithm. * This can be done by storing ancestor information along with each state. * A state may be saved along with a record of its parent state, i.e. as a (state, parent) pair. * If this is done in the search of Figure 3.13, the contents of open and closed at the fourth iteration would be: open = [(D, A), (E, B), (F, B), (G, C), (H, C)]; closed = [(C, A), (B, A), (A, nil)].
41
* When a goal is found, the algorithm may construct the solution path by tracing back along parents from the goal to the start state. * A has a parent of nil, indicating that it is the start state; this stops the reconstruction of the path. * Because breadth-first search finds each state along the shortest path and retains the first version of each state, this path is the shortest path from the start to the goal.
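For example, path reconstruction from recorded (state, parent) pairs might look like the sketch below; the dictionary contents are an invented illustration, not the figure's actual data.

```python
def reconstruct_path(goal, parent):
    """Trace back along (state, parent) records from the goal to the start.
    parent maps each state to its parent; the start state maps to None (nil)."""
    path = [goal]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return list(reversed(path))          # start ... goal

# Hypothetical parent table recorded during a breadth-first search:
parents = {'A': None, 'B': 'A', 'C': 'A', 'F': 'B'}
print(reconstruct_path('F', parents))    # ['A', 'B', 'F']
```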
42
Next we create a depth-first search algorithm. * Now descendant states are both added to and removed from the left end of open. * Open is maintained as a stack, or last-in-first-out (LIFO) structure. * The organization of open as a stack biases search toward the most recently generated states, giving the search a depth-first order.
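Relative to the breadth-first sketch above, only the handling of open changes; a hedged sketch under the same assumed helper signatures:

```python
def depth_first_search(start, successors, is_goal):
    """Sketch of depth-first search: open is treated as a stack (LIFO)."""
    open_s = [start]                 # open list; leftmost element is examined next
    closed = set()
    while open_s:
        x = open_s.pop(0)            # remove from the left end
        if is_goal(x):
            return x
        closed.add(x)
        new = [c for c in successors(x)
               if c not in closed and c not in open_s]
        open_s = new + open_s        # add children to the left end as well
    return None                      # open exhausted: search failed
```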
43
1. Algorithm depth_first_search. 2. Trace of the algorithm. 3. Figure 3.16: graph highlighted. 4. Figure 3.17: 8-puzzle example.
44
A trace of breadth_first_search on the graph of Figure 3.13
45
Function breadth_first search algorithm
46
Advantages and Disadvantages: Breadth-First * Because it always examines all the nodes at level n before proceeding to level n + 1, breadth-first search always finds the shortest path to a goal node. * In a problem where it is known that a simple solution exists, this solution will be found. * If there is a bad branching factor, i.e. states have a high average number of descendants, the combinatorial explosion may prevent the algorithm from finding a solution in the available space. * The space utilization of breadth-first search, measured in terms of the number of states on open, is an exponential function of the length of the path at any time.
47
* If each state has an average of B children, the number of states on a given level is B times the number of states on the previous level. This gives B^n states on level n. * Breadth-first search would place all of these on open when it begins examining level n. This can be prohibitive if solution paths are long.
48
Advantages and Disadvantages: Depth-First * Depth-first search gets quickly into a deep search space. * If it is known that the solution path will be long, depth-first search will not waste time searching a large number of “shallow” states in the graph. * On the other hand, depth-first search can get “lost” deep in a graph, missing shorter paths to a goal or even becoming stuck in an infinitely long path that does not lead to a goal. * Depth-first search is much more efficient for search spaces with many branches because it does not have to keep all the nodes at a given level on the open list. * The space usage of depth-first search is a linear function of the length of the path. * At each level, open retains only the children of a single state. * If a graph has an average of B children per state, this requires a total space usage of B x n states to go n levels deep into the space.
49
Depth-First Search with Iterative Deepening Because depth-first search is likely to get lost deep along a path, it is logical to impose a depth bound on the depth-first search at a certain level. This causes a breadth-like sweep of the search space at that depth. When it is known that a solution lies within a certain depth, or when time constraints limit the number of states that can be considered in a large space (as in chess), a depth-first search with a depth bound may be important. Algorithm - Depth-First Iterative Deepening –It performs a depth-first search of the space with a depth bound of 1. If it fails to find a goal, it performs another depth-first search with a depth bound of 2. –This continues, increasing the depth bound by one at each iteration. –At each iteration the algorithm performs a complete depth-first search to the current depth bound. –Because the algorithm searches the space in a level-by-level fashion, it is guaranteed to find a shortest path to a goal. –Because it does only depth-first search at each iteration, the space usage at any depth n is B x n, where B is the average number of children of a node. (A sketch of the algorithm follows.)
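A compact Python sketch of depth-first iterative deepening, under the same assumed successors/is_goal interface as the earlier sketches; max_depth is an arbitrary illustrative cap.

```python
def depth_limited(state, successors, is_goal, bound, path):
    """Depth-first search from state, going no more than bound levels deeper."""
    if is_goal(state):
        return path + [state]
    if bound == 0:
        return None
    for child in successors(state):
        if child not in path:                      # avoid cycles along the current path
            found = depth_limited(child, successors, is_goal,
                                  bound - 1, path + [state])
            if found:
                return found
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Repeated depth-first searches with depth bounds 1, 2, 3, ..., max_depth."""
    for bound in range(1, max_depth + 1):
        result = depth_limited(start, successors, is_goal, bound, [])
        if result:
            return result                          # path from start to goal
    return None
```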
50
Figure 3.14: Graph of Figure 3.13 at iteration 6 of breadth-first search. States on open and closed are highlighted.
51
Function depth_first_search algorithm
52
A trace of depth_first_search on the graph of Figure 3.13
53
Figure 3.15: Breadth-first search of the 8-puzzle, showing order in which states were removed from open.
54
Figure 3.16: Graph of Figure 3.13 at iteration 6 of depth-first search. States on open and closed are highlighted.
55
Figure 3.17: Depth-first search of the 8-puzzle with a depth bound of 5.
56
Figure 3.18: State space graph of a set of implications in the propositional calculus.
57
Figure 3.19: And/or graph of the expression q ∧ r → p.
59
Figure 3.20: And/or graph of the expression q ∨ r → p.
60
Figure 3.21: And/or graph of a set of propositional calculus expressions.
61
Figure 3.22: And/or graph of part of the state space for integrating a function, from Nilsson (1971).
62
The facts and rules of this example are given as English sentences followed by their predicate calculus equivalents:
63
Figure 3.23: The solution subgraph showing that fred is at the museum.
64
Five rules for a simple subset of English grammar are:
65
Figure 3.24: And/or graph searched by the financial advisor.
66
Figure 3.25: And/or graph for the grammar of Example 3.3.6. Some of the nodes (np, art, etc.) have been written more than once to simplify drawing the graph.
67
Figure 3.26: Parse tree for the sentence “The dog bites the man.” Note that this is a subtree of the graph of Figure 3.25.
68
Figure 3.27: A graph to be searched.
69
Problem Solving with Search In search, an intelligent agent is trying to find a set or sequence of actions that will achieve a goal We are thus looking at a type of goal-based agent
70
Assumptions Environment is static Environment is fully observable Environment is discrete Environment is deterministic
71
Search Big Idea: Search allows exploring alternatives Background Uninformed Vs Informed Any Path Vs Optimal Path Implementation and Performance
72
Trees and Graphs (diagram). Terminology illustrated: node (vertex), link (edge), terminal (leaf), root of a tree. In the example tree, B is the parent of C and C is the child of B; A is an ancestor of C and C is a descendant of A. A directed graph is like a one-way street; an undirected graph is like a two-way street.
73
Examples of Graphs (diagram): airline routes connecting cities (RWB, GRW, ISB, GUJ, LHR), and planning actions, i.e. graphs of possible states of the world (different arrangements of blocks A, B, C).
74
Problem Solving Paradigm 1. What are the states? (All relevant aspects of the problem.) –Arrangement of parts (to plan an assembly). –Positions of trucks (to plan package distribution). –City (to plan a trip). –Set of facts (e.g. to prove a geometry theorem). 2. What are the actions (operators)? (Deterministic and discrete.) –Assemble two parts. –Move a truck to a new position. –Fly to a new city. –Apply a theorem to derive a new fact. 3. What is the goal test? (Conditions for success.) –All parts in place. –All packages delivered. –Destination city reached. –Goal fact derived. (A minimal interface sketch follows.)
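One way to capture these three questions in code is a small problem container; the class name, field names, and the toy road map below are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class SearchProblem:
    """Bundles the three questions above: states, actions, goal test."""
    initial: Any                               # the start state
    actions: Callable[[Any], Iterable[Any]]    # state -> successor states
    goal_test: Callable[[Any], bool]           # conditions for success

# A toy trip-planning instance where states are cities.
roads = {'ISB': ['LHR', 'GUJ'], 'GUJ': ['ISB', 'LHR'], 'LHR': ['ISB']}
trip = SearchProblem(initial='ISB',
                     actions=lambda city: roads.get(city, []),
                     goal_test=lambda city: city == 'LHR')
```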
75
Graph Search as Tree Search Trees are directed graphs without cycles in which each node has at most one parent. We can turn graph search problems (from S to G) into tree search problems by: –Replacing each undirected link by two directed links. –Avoiding loops in a path (or keeping track of visited nodes globally). (Diagram: the example graph over S, A, B, C, D, G unwound into the corresponding search tree.)
76
Classes of Search (Class / Name / Operation): –Any Path, Uninformed / Depth-First: systematic exploration of the whole tree until a goal node is found. –Any Path, Uninformed / Breadth-First: systematic exploration of the whole tree until a goal node is found. –Any Path, Informed / Best-First: uses a heuristic measure of the goodness of a state, e.g. estimated distance to the goal. –Optimal, Uninformed / Uniform-Cost: uses a path “length” measure; finds the “shortest” path. –Optimal, Informed / A*: uses a path “length” measure and a heuristic; finds the “shortest” path.
77
Terminology State –Used to refer to the vertices of the underlying graph that is being searched, that is, states in the problem domain; for example, a city, an arrangement of blocks, or the arrangement of parts in a puzzle. Search Node –Refers to the vertices of the search tree which is being generated by the search algorithm. Each node refers to a state of the world; many nodes may refer to the same state. Importantly, a node implicitly represents a path (from the start state of the search to the state associated with that node). Because search nodes are part of a search tree, each has a unique ancestor (parent) node, except for the root node.
78
Simple Search Algorithm A search node is a path from some state X back to the start state, e.g. (X B A S). The state of a search node is the most recent state of the path, e.g. X. Let Q be a list of search nodes, e.g. ((X B A S) (C B A S) …). Let S be the start state. 1. Initialize Q with the search node (S) as the only entry; set Visited = (S). 2. If Q is empty, fail. Else, pick some search node N from Q. 3. If state(N) is a goal, return N (we’ve reached the goal). 4. (Otherwise) Remove N from Q. 5. Find all the descendants of state(N) not in Visited and create all the one-step extensions of N to each descendant. 6. Add the extended paths to Q; add the children of state(N) to Visited. 7. Go to Step 2. Critical decisions: Step 2: picking N from Q. Step 6: adding extensions of N to Q.
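A hedged Python sketch of this generic algorithm, with the two critical decisions factored out as an add_to_q strategy; all names here are illustrative assumptions.

```python
def simple_search(start, successors, is_goal, add_to_q):
    """Generic any-path search. Nodes are paths stored newest-state-first,
    so state(N) is N[0]. add_to_q(extensions, Q) is the step-6 decision:
    front of Q gives depth-first behaviour, end of Q gives breadth-first."""
    Q = [(start,)]                           # step 1
    visited = {start}
    while Q:
        N = Q[0]                             # step 2: pick a node (here, the first)
        if is_goal(N[0]):                    # step 3
            return N
        Q = Q[1:]                            # step 4: remove N from Q
        extensions = [(child,) + N for child in successors(N[0])
                      if child not in visited]          # step 5
        visited.update(e[0] for e in extensions)        # step 6
        Q = add_to_q(extensions, Q)
    return None                              # Q empty: fail (step 2)

depth_first = lambda ext, Q: ext + Q         # extensions to the front of Q
breadth_first = lambda ext, Q: Q + ext       # extensions to the end of Q
```

Passing depth_first or breadth_first as add_to_q reproduces the two strategies discussed on the following slides.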
79
Implementing the Search Strategies Depth-first: Pick first element of Q Add path extensions to front of Q Breadth-first: Pick first element of Q Add path extensions to end of Q
80
Testing for the Goal This algorithm stops (in step 3) when state(N) = G or, in general, when state(N) satisfies the goal test. We could have performed this test in step 6 as each extended path is added to Q. This would catch termination earlier and be perfectly correct for the searches we have covered so far. However, performing the test in step 6 would be incorrect for the optimal searches. We have chosen to leave the test in step 3 to maintain uniformity with these future searches.
81
Terminology Visited –a state M is first visited when a path to M first gets added to Q. In general, a state is said to have been visited if it has ever shown up in a search node in Q. The intuition is that we have briefly “visited” it in order to place it on Q, but we have not yet generated its descendants. Expanded –a state M is expanded when it is the state of a search node that is pulled off of Q. At that point the descendants of M are visited and the path that led to M is extended to the eligible descendants. In principle, a state may be expanded multiple times. We sometimes refer to the search node that led to M (instead of M itself) as being expanded. However, once a node is expanded we are done with it; we will not need to expand it again, and in fact we discard it from Q. This distinction plays a key role in our discussion of the various search algorithms; study it carefully.
82
Visited States Keeping track of visited states generally improves time efficiency when searching graphs, without affecting correctness. Note, however, that substantial additional space may be required to keep track of visited states. If all we want to do is find a path from the start to the goal, there is no advantage to adding a search node whose state is already the state of another search node: any state reachable from the node the second time would have been reachable from that node the first time. Note that, when using Visited, each state will only ever have at most one path to it (search node) in Q. We will have to revisit this issue when we look at optimal searching.
83
Implementation Issues: The Visited List Although we speak of a Visited list, a list is never the preferred implementation. If the graph states are known ahead of time as an explicit set, then space can be allocated in the state itself to keep a mark, which makes both adding to Visited and checking whether a state is Visited a constant-time operation. Alternatively, as is more common in AI, if the states are generated on the fly, then a hash table may be used for efficient detection of previously visited states. Note that in any case the incremental space cost of a Visited list will be proportional to the number of states, which can be very high in some problems.
84
Implementing the Search Strategies Depth-first: Pick the first element of Q; add path extensions to the front of Q. Breadth-first: Pick the first element of Q; add path extensions to the end of Q. Uniform Cost: Pick the “best” element of Q, measured by total path length (cost); add path extensions anywhere in Q (it may be more efficient to keep Q ordered in some way so as to make it easier to find the “best” element).
85
Cost and Performance Later
86
Depth-First Pick the first element of Q; add path extensions to the front of Q. (Example graph over nodes S, A, B, C, D, G.) Trace, one expansion per step (Q | Visited):
1. (S) | S
2. (A S) (B S) | A, B, S
3. (C A S) (D A S) (B S) | C, D, B, A, S
4. (D A S) (B S) | C, D, B, A, S
5. (G D A S) (B S) | G, C, D, B, A, S
The goal path (G D A S) is then picked off Q.
93
Depth-First: another (easier?) way to see it. (Diagram: the search tree over S, A, B, C, D, G drawn step by step. Numbers indicate the order in which nodes are pulled off of Q (expanded); dark fill = visited and expanded, light fill = visited.)
98
Depth-First (without Visited list) Pick the first element of Q; add path extensions to the front of Q. Trace (Q only):
1. (S)
2. (A S) (B S)
3. (C A S) (D A S) (B S)
4. (D A S) (B S)
5. (G D A S) (B S)
99
Breadth-First Pick the first element of Q; add path extensions to the end of Q. Trace, one expansion per step (Q | Visited):
1. (S) | S
2. (A S) (B S) | A, B, S
3. (B S) (C A S) (D A S) | C, D, B, A, S
4. (C A S) (D A S) (G B S)* | G, C, D, B, A, S
5. (D A S) (G B S) | G, C, D, B, A, S
6. (G B S) | G, C, D, B, A, S
(* marks the step at which the goal path first appears on Q; it is returned when (G B S) is picked off Q.)
108
Breadth-First: another easier way to see it. (Diagram: the search tree over S, A, B, C, D, G with numbers indicating the order in which nodes are pulled off of Q (expanded); dark fill = visited and expanded, light fill = visited. NB: D is not visited again.)
109
Breadth-First (without Visited list) Pick the first element of Q; add path extensions to the end of Q. Trace (Q only):
1. (S)
2. (A S) (B S)
3. (B S) (C A S) (D A S)
4. (C A S) (D A S) (D B S) (G B S)*
5. (D A S) (D B S) (G B S)
6. (D B S) (G B S) (C D A S) (G D A S)
7. (G B S) (C D A S) (G D A S) (C D B S) (G D B S)
110
Simple Search Algorithm A search node is a path from some state X back to the start state, e.g. (X B A S). The state of a search node is the most recent state of the path, e.g. X. Let Q be a list of search nodes, e.g. ((X B A S) (C B A S) …). Let S be the start state. 1. Initialize Q with the search node (S) as the only entry; set Visited = (S). 2. If Q is empty, fail. Else, pick some partial path N from Q. 3. If state(N) is a goal, return N (we’ve reached a goal). 4. (Otherwise) Remove N from Q. 5. Find all the children of state(N) not in Visited and create all the one-step extensions of N to each descendant. 6. Add all the extended paths to Q; add the children of state(N) to Visited. 7. Go to step 2. Critical decisions: Step 2: picking N from Q. Step 6: adding extensions of N to Q.
112
Simple Search Algorithm (without a Visited list) A search node is a path from some state X back to the start state, e.g. (X B A S). The state of a search node is the most recent state of the path, e.g. X. Let Q be a list of search nodes, e.g. ((X B A S) (C B A S) …). Let S be the start state. 1. Initialize Q with the search node (S) as the only entry. 2. If Q is empty, fail. Else, pick some search node N from Q. 3. If state(N) is a goal, return N (we’ve reached a goal). 4. (Otherwise) Remove N from Q. 5. Find all the children of state(N) and create all the one-step extensions of N to each descendant. 6. Add all the extended paths to Q. 7. Go to step 2. Critical decisions: Step 2: picking N from Q. Step 6: adding extensions of N to Q.
113
Why not a Visited List? For the any-path algorithms, the Visited list would not cause us to fail to find a path when one existed, since the particular paths to a state do not matter. However, the Visited list in connection with Uniform Cost can cause us to miss the best path. The shortest path from S to G is (S A D G). But, on extending (S), both A and D would be added to the Visited list, and so (S A) would never be extended to (S A D). (Diagram: a small graph over S, A, D, G with edge costs 2, 1 and 4.)
114
Implementing Optimal Search Strategies Uniform Cost: Pick best (measured by path length) element of Q Add path extensions anywhere in Q.
115
Uniform Cost Like best-first search, except that it uses the total length (cost) of a path instead of a heuristic value for the state. Each link has a “length” or “cost”, which is always greater than 0. We want the “shortest”, or least-cost, path. (Diagram: the example graph over S, A, B, C, D, G with link costs between 1 and 5.) Example total path costs: (S A C) = 4; (S B D G) = 8; (S A D C) = 9.
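A minimal uniform-cost search sketch using a priority queue ordered by total path cost; the function name and the edge-weighted graph below are illustrative assumptions rather than the slide's exact data.

```python
import heapq

def uniform_cost_search(start, neighbors, is_goal):
    """Uniform-cost search: always extend the cheapest path on the queue.

    neighbors(state) yields (next_state, edge_cost) pairs. Returns
    (total_cost, path) for the cheapest path found, or None."""
    frontier = [(0, [start])]                  # priority queue keyed on path cost
    expanded = set()
    while frontier:
        cost, path = heapq.heappop(frontier)   # cheapest path so far
        state = path[-1]
        if is_goal(state):
            return cost, path
        if state in expanded:                  # a cheaper path to state was already expanded
            continue
        expanded.add(state)
        for nxt, step in neighbors(state):
            if nxt not in expanded:
                heapq.heappush(frontier, (cost + step, path + [nxt]))
    return None

# Hypothetical weighted graph (costs invented for illustration).
edges = {'S': [('A', 2), ('B', 5)], 'A': [('C', 2), ('D', 4)],
         'B': [('D', 1), ('G', 5)], 'C': [], 'D': [('C', 3), ('G', 2)], 'G': []}
print(uniform_cost_search('S', lambda s: edges[s], lambda s: s == 'G'))
# prints a least-cost path of total cost 8
```

Note that states are filtered only once they have been expanded, not when they are first visited, which avoids the problem described under "Why not a Visited List?".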
119
Uniform Cost Pick the best (by total path length) element of Q; add path extensions anywhere in Q. Trace, one expansion per step (each path on Q is prefixed by its cost):
1. (0 S)
2. (2 A S) (5 B S)
3. (4 C A S) (6 D A S) (5 B S)
4. (6 D A S) (5 B S)
5. (6 D B S) (10 G B S) (6 D A S)
6. (8 G D B S) (9 C D B S) (10 G B S) (6 D A S)
7. (8 G D A S) (9 C D A S) (8 G D B S) (9 C D B S) (10 G B S)
A goal path of cost 8 is then picked off Q.
127
Uniform Cost: another (easier?) way to see it. (Diagram: the search tree over S, A, B, C, D, G with each node labeled by the total path cost to reach it — 2, 4, 5, 6, 6, 8, 8, 9, 9, 10 — and numbers indicating the order in which paths are pulled off of Q (expanded).) Uniform cost enumerates paths in order of total path cost!
136
With a queuing function The queuing function determines where newly generated children are placed on the open list. BFS uses a FIFO queue (children added at the end); DFS uses a LIFO stack (children added at the front). For DFS the net effect is to follow the leftmost path to the bottom and then incrementally backtrack - the deepest node is always expanded first.
137
Odometer An instrument that indicates distance traveled by a vehicle.