
1 Lecture Notes on AI-NN Chapter 5 Information Processing & Utilization

2 Categories of Information Processing -- Problem-Solving -- Game-Playing -- Theorem-Proving -- Logic-Deduction

3 Section 5.1 Problem-Solving §5-1 Introduction Description of a problem involves three parts: problem definition (the start and goal conditions), rule definition (a set of IF-THEN rules), and strategy finding (controlling the application of the rules).

4 Example: The Water Jug Problem. Initial base: there are two jugs, a 4-kilo jug and a 3-kilo jug; neither has any measurement marks on it. Rule base: (1) there is a pump that can be used to fill a jug with water; (2) you can pour water from a jug onto the ground or into the other jug. Question: how can you get exactly 2 kilos of water into the 4-kilo jug?

5 Representation and Solution: a state is written as (kilos in the 4-kilo jug, kilos in the 3-kilo jug). One solution sequence, obtained by applying the rules, is: (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).

6 It is clear that the Production System is a suitable means of representation for problem-solving. Procedure PRODUCTION: 1. DATA ← initial database. 2. Until DATA satisfies the termination condition, do: i) begin ii) select some rule, R, from the set of rules that can be applied to DATA iii) DATA ← result of applying R to DATA iv) end.
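To make the procedure concrete, here is a minimal Python sketch of PRODUCTION applied to the water-jug problem of the previous slides. The rule names and the breadth-first resolution of the "select some rule" step are illustrative assumptions, not part of the original procedure.

```python
from collections import deque

CAP4, CAP3 = 4, 3  # jug capacities (kilos)

# Each rule maps a state (x, y) = (water in 4-kilo jug, water in 3-kilo jug)
# to a new state; this rule set is an illustrative reading of the slide's rule base.
RULES = {
    "fill 4-kilo jug":  lambda x, y: (CAP4, y),
    "fill 3-kilo jug":  lambda x, y: (x, CAP3),
    "empty 4-kilo jug": lambda x, y: (0, y),
    "empty 3-kilo jug": lambda x, y: (x, 0),
    "pour 4 into 3":    lambda x, y: (x - min(x, CAP3 - y), y + min(x, CAP3 - y)),
    "pour 3 into 4":    lambda x, y: (x + min(y, CAP4 - x), y - min(y, CAP4 - x)),
}

def production(start=(0, 0), goal_test=lambda s: s[0] == 2):
    """Repeatedly apply rules to DATA until the termination condition holds.
    The 'select some rule' step is resolved here by breadth-first enumeration."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, trace = frontier.popleft()
        if goal_test(state):
            return trace + [state]
        for name, rule in RULES.items():
            nxt = rule(*state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [state]))
    return None

# Prints one minimal-length sequence of states ending with 2 kilos in the 4-kilo jug.
print(production())
```

Because rules are tried breadth-first, the sequence found has minimal length (six rule applications, like the solution on slide 5), though it is not necessarily the same sequence.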

7 In most AI applications, the information available to the control strategy is usually not sufficient to permit selection of the most appropriate rule at every stage. The operation of an AI production system can thus be characterized as a SEARCH PROCESS in which rules are tried until some sequence of them is found that produces a database satisfying the termination condition. Further, if the database of the problem to be solved is represented by means of a graph, the search is called GRAPH SEARCH.

8 Procedure GRAPH-SEARCH: 1. Create a search graph, G, consisting solely of the start node, S. Put S on OPEN (OPEN: a list of nodes just generated but not yet examined). 2. Create a list, CLOSED, that is initially empty (CLOSED: a list of nodes already examined). 3. LOOP: if OPEN is empty, exit with failure. 4. Select the first node on OPEN, remove it from OPEN to CLOSED, and call it node n. 5. Examine n: if n is a goal node, exit with success; the solution is obtained by tracing a path along the pointers from n back to S in G.

9 6. Expand n (apply a rule to n), generating the set, M, of its successors that are not ancestors of n. Install these members of M as successors of n. 7. Establish a pointer to n from those members of M that were not already on either OPEN or CLOSED, and add these members of M to OPEN. For each member of M that was already on OPEN or CLOSED, decide whether or not to redirect its pointer to n. For each member of M already on CLOSED, decide for each of its descendants in G whether or not to redirect its pointer. 8. Reorder the list OPEN according to a certain rule. 9. Go to LOOP.
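The Python skeleton below is one hedged reading of GRAPH-SEARCH: the successor function, goal test, and the step-8 ordering rule are left as parameters, and the pointer-redirection decisions of step 7 are simplified to keeping the first parent found (an assumption made to keep the sketch short).

```python
def graph_search(start, successors, is_goal, reorder=lambda open_list: open_list):
    """Skeleton of Procedure GRAPH-SEARCH.
    successors(n) -> iterable of successor nodes
    reorder(open_list) -> OPEN reordered according to the chosen rule (step 8)."""
    open_list = [start]            # OPEN: generated but not yet examined
    closed = set()                 # CLOSED: already examined
    parent = {start: None}         # pointers back toward the start node

    while open_list:                       # step 3: fail if OPEN is empty
        n = open_list.pop(0)               # step 4: first node on OPEN
        closed.add(n)
        if is_goal(n):                     # step 5: trace pointers back to start
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        for m in successors(n):            # step 6: expand n
            if m not in parent:            # step 7 (simplified): keep first pointer found
                parent[m] = n
                open_list.append(m)
        open_list = reorder(open_list)     # step 8: reorder OPEN
    return None                            # exit with failure
```

With the default identity reorder (successors appended to the end of OPEN, first node removed first) this behaves like the breadth-first search of the next slides; a LIFO or f-ordered OPEN gives depth-first or heuristic search.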

10 [Figure: a search graph with start node S and nodes 1 (= n) through 6, some on CLOSED and some on OPEN, illustrating pointers that need to be redirected to n during step 7.]

11 The crucial factor in the search process is the ordering rule that determines which node is selected for expansion next. Search efficiency depends on how well problem information is used in node selection. According to the use of problem information in node selection, search strategies can be divided into: a) Blind Search, and b) Heuristic Search.

12 §5-1-1 Blind Search on a Tree. 1) Breadth-First Search. Node ordering: FIFO. Procedure BFS: 1. Put the start node s on OPEN; set pointer P = 0. 2. If OPEN = NIL, exit with failure. 3. Select the first node on OPEN and put it on CLOSED; call it node n. 4. If n = g, exit with success; the solution is obtained by tracing a path along the pointers from g back to s in G. 5. Expand n. If it has no successors, go to step 2. Otherwise, generate all its successors, add them successively to the end of OPEN, establish pointers from them to n, and go to step 2.

13 Example: BFS on the 8-puzzle (see Nilsson p. 71, Fu p. 37). [Figure: the breadth-first search tree starting from the state 283/164/7·5; the shortest solution path is marked.]

14 Comments on BFS: it is guaranteed to find an optimal solution because of its systematic search. The major weakness of BFS is its inability to use information related to the problem, and thus: a) it requires a large amount of memory to store the great number of nodes; b) it requires a great amount of work to examine the great number of nodes; c) as a result, BFS has low search efficiency.

15 §5-1-2 Depth-First Search. Node ordering: LIFO. Procedure DFS: 1. Put the start node s on OPEN; set d(s) = 0, P = 0. 2. If OPEN = NIL, exit with failure. 3. Select the first node on OPEN and put it on CLOSED; call it node n. 4. If n = g, exit with success. 5. If d(n) equals the depth bound d, go to step 2. 6. If n is not expandable, go to step 2. 7. Expand node n, generating all its successors; establish pointers from them to n; let d(successor) = d(n) + 1; add them to the front of OPEN in any order; then go to step 2.
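A minimal Python sketch of the depth-bounded DFS above; the successor function, goal test, and depth bound are parameters, and already-generated nodes are skipped so that the sketch also terminates on graphs (an assumption beyond the slide's tree setting).

```python
def depth_first_search(start, successors, is_goal, depth_bound):
    """Depth-first search with a depth bound (step 5 of Procedure DFS).
    New successors go to the FRONT of OPEN (LIFO ordering)."""
    open_list = [start]
    depth = {start: 0}
    parent = {start: None}

    while open_list:                      # step 2: fail if OPEN is empty
        n = open_list.pop(0)              # step 3: take the first node on OPEN
        if is_goal(n):                    # step 4: success, trace the path back
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        if depth[n] == depth_bound:       # step 5: do not expand below the bound
            continue
        children = [m for m in successors(n) if m not in depth]
        for m in children:                # step 7: d(successor) = d(n) + 1
            depth[m] = depth[n] + 1
            parent[m] = n
        open_list = children + open_list  # add to the front of OPEN (LIFO)
    return None
```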

16 Example: DFS on the 8-puzzle with depth bound d = 5 (see Nilsson p. 70, Fu p. 42). [Figure: the depth-first search tree starting from the state 283/164/7·5; the solution path is marked.]

17 Compared with BFS, DFS has the following features: 1) if the depth bound d is too small, the goal node may be missed; if it is too large, a greater amount of storage is needed; 2) DFS may find a goal faster than BFS, but the solution path found may not be the shortest one if there is more than one goal node; 3) DFS can often be carried out using a reasonably small amount of storage.

18 §5-1-3 Informed (Heuristic) Search on a Tree. (1) General Remarks. The weakness of blind search: it ignores the information associated with the problem when selecting the node to expand next. Solution: use the heuristic information to order the nodes on OPEN -- Heuristic Search. The heuristic information is used in the form of an evaluation function f(·): f maps each node n to a real number expressing how promising the node is.

19 For any node n on a tree, let g*(n) be the actual cost of a minimal-cost path from s to n, h*(n) be the cost of a minimal-cost path from n to g, and f*(n) = g*(n) + h*(n) be the cost of an optimal path from s to g constrained to pass through n. Let g be an estimate of g*, h an estimate of h*, and f(n) = g(n) + h(n) an estimate of f*(n), which can be used as an evaluation function for ordering nodes on OPEN.

20 In practice, g(n) is the sum of the arc costs encountered while tracing the pointers from n back to s, and h(n) is a function based on heuristic information from the problem, hence called the heuristic function. The practical rule is: if h(n) is very high, node n may be ignored; if h(n) is low, node n may be chosen for expansion next.

21 (2) Algorithm A and Algorithm A* on a Tree. Algorithm A is a special tree search that uses the evaluation function f(n) to order the nodes on OPEN and always selects for expansion the node with the lowest value of f(n). The key to Algorithm A is the setting of h and g: when h = 0 and g = d, it reduces to BFS; when h = 0 and g = 0, it is random search; when h = 1/d and g = 0, it is DFS; when h > h*, the optimal path may be lost; when h ≤ h*, some search may be redundant, but the optimal path can be found.

22 Algorithm A with h(n) ≤ h*(n) for all n is Algorithm A*. Thus BFS is an Algorithm A*, and A* can always find an optimal (minimal-cost) path to a goal. Informedness of Algorithm A*: let A1* use f1(n) = g1(n) + h1(n) and A2* use f2(n) = g2(n) + h2(n), with h1(n) ≤ h*(n) and h2(n) ≤ h*(n). A2* is more informed than A1* iff h*(n) ≥ h2(n) > h1(n) for every non-goal node n.

23 Example of A*: the 8-Puzzle Problem. Let f(n) = g(n) + h(n) = d(n) + w(n), where d(n) is the depth of node n in the search tree and w(n) is the number of misplaced digits at node n. [Figure: the A* search tree from the start state 283/164/7·5 to the goal 123/8·4/765; each node is labelled with d(n) + w(n), e.g. 0+4 = 4 at the start node and 5+0 = 5 at the goal (13 out of 27 nodes).]
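Below is a runnable Python sketch of A* on the 8-puzzle with f(n) = d(n) + w(n). The tuple board encoding (0 for the blank) and the duplicate-state bookkeeping are illustrative assumptions; the start and goal states are the ones used on this slide.

```python
import heapq

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # the goal layout used on the slide (0 = blank)

def w(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal_tile in zip(state, GOAL)
               if tile != 0 and tile != goal_tile)

def successors(state):
    """Slide the blank up, down, left, or right."""
    moves = []
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            moves.append(tuple(s))
    return moves

def a_star(start):
    """A* with f(n) = d(n) + w(n), ordering OPEN by the lowest f value."""
    frontier = [(w(start), 0, start, [start])]   # (f, d, state, path)
    best_d = {start: 0}
    while frontier:
        f, d, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in successors(state):
            if nxt not in best_d or d + 1 < best_d[nxt]:
                best_d[nxt] = d + 1
                heapq.heappush(frontier, (d + 1 + w(nxt), d + 1, nxt, [*path, nxt]))
    return None

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)      # the slide's start state
print(len(a_star(start)) - 1, "moves")   # 5 moves, matching the 5+0 = 5 goal label
```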

24 Algorithm A* with h(n) = w(n) is more informed than BFS, which uses h(n) = 0. Since w(n) is a lower bound on h*(n), the exact number of steps needed to reach the goal, it is an Algorithm A*. However, w(n) does not provide a good enough estimate of h*(n): the information about the ordering (sequence) of the tiles is not used by w(n).

25 A better estimate of h*(n) is h(n) = P(n) + 3S(n), where P(n) is the sum of the distances of each digit from "home", and S(n) is a sequence score obtained by checking each digit in turn: score 2 for each non-central digit that is not followed by its proper successor, 1 if there is a digit in the center, and 0 for the other digits. E.g., for the start state s = 216/4·8/753 and the goal g = 123/8·4/765, we have P(s) = (3x1) + (3x2) + (1x3) + (1x0) = 12 (digits 1, 2, 5 are one step from home; 3, 4, 8 are two steps; 6 is three steps; 7 is home) and S(s) = 8x2 = 16.
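A short Python sketch of P(n) and S(n) for this example; the clockwise ordering of the non-central squares and the successor ring 1, 2, ..., 8, 1 follow the usual (Nilsson-style) reading of the sequence score, so treat them as assumptions.

```python
# Board encoding: a tuple of 9 entries, row by row, 0 = blank.
GOAL  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
START = (2, 1, 6, 4, 0, 8, 7, 5, 3)      # the slide's example state s

# Perimeter squares in clockwise order, and the successor relation of the
# goal ring 1,2,3,4,5,6,7,8,1,... (assumed convention).
RING = (0, 1, 2, 5, 8, 7, 6, 3)
NEXT_TILE = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 1}

def P(state):
    """Sum of the distances of each tile from its home square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

def S(state):
    """Sequence score: 2 per non-central tile not followed (clockwise) by its
    proper successor, 1 for a tile sitting in the center, 0 otherwise."""
    score = 1 if state[4] != 0 else 0
    for k, idx in enumerate(RING):
        tile = state[idx]
        if tile == 0:
            continue
        follower = state[RING[(k + 1) % 8]]
        if follower != NEXT_TILE[tile]:
            score += 2
    return score

def h(state):
    return P(state) + 3 * S(state)

print(P(START), S(START), h(START))   # expected 12, 16, 60 for the slide's example
```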

26 By using this h(n), we have f(n) = g(n) + P(n) + 3S(n) with g(n) = d(n), and for the above problem the same optimal path is found but with fewer nodes expanded. [Figure: the resulting search tree (11 out of 13 nodes).]

27 Since 0 ≤ w(n) ≤ h*(n), whereas P(n) + 3S(n) may exceed h*(n), the solution path found happens to be a minimal path even though we were not guaranteed to find an optimal one. Summary: from the example above, we can see that the key in heuristic search is to determine the form of the evaluation function f(n) by utilizing heuristic information. As can be seen, the crucial difference between blind search and heuristic search is the ordering rule: in heuristic search, the node with the smallest value of the evaluation function is chosen for expansion first.

28 (3) Algorithm A* for Graph Search. 1. Put s on OPEN; set g(s) = 0, f(s) = h(s), P = 0, CLOSED = NIL. 2. If OPEN = NIL, exit with failure. 3. Take the first node on OPEN, call it Best-Node (BN), and put it on CLOSED. 4. If BN is a goal node, exit with success. 5. If BN is not expandable, go to step 2. 6. Expand BN, generating its successors (SUCs), and for each SUC do: (1) set a pointer from SUC back to BN; (2) compute g(SUC) = g(BN) + c(BN, SUC), where c(BN, SUC) is the arc cost; (3) if SUC is an old node (OLD) already on OPEN, add OLD to the list of BN's successors;

29 If g(SUC) < g(OLD), reset OLD's parent link to point to BN and record g(SUC) in place of g(OLD); if g(SUC) ≥ g(OLD), do nothing. (4) If SUC is an old node (OLD) on CLOSED, add OLD to the list of BN's successors and do the same thing as in step 6(3), setting the parent link and the g and f values appropriately; however, if g(SUC) < g(OLD), the improvement must be propagated to OLD's successors. 7. Go to step 2.
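The sketch below renders this graph version of A* in Python. Step 6(4)'s propagation of an improved g value to the descendants of a CLOSED node is replaced by simply re-opening the improved node, a common simplification that still returns an optimal path with an admissible h; the successor function is assumed to yield (node, arc cost) pairs.

```python
import heapq
from itertools import count

def a_star_graph(start, successors, h, is_goal):
    """A* graph search.  successors(n) yields (neighbor, arc_cost) pairs."""
    g = {start: 0}
    parent = {start: None}
    tie = count()                              # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), start)] # OPEN ordered by f = g + h
    closed = set()

    while open_heap:                           # step 2: fail when OPEN is empty
        f, _, bn = heapq.heappop(open_heap)    # step 3: Best-Node
        if bn in closed:                       # stale OPEN entry: already examined via a path at least as good
            continue
        closed.add(bn)
        if is_goal(bn):                        # step 4: success, trace the pointers back
            path = []
            while bn is not None:
                path.append(bn)
                bn = parent[bn]
            return list(reversed(path))
        for suc, cost in successors(bn):       # step 6: expand Best-Node
            g_new = g[bn] + cost               # 6(2): g(SUC) = g(BN) + c(BN, SUC)
            if suc not in g or g_new < g[suc]: # 6(3)/6(4): a better path was found
                g[suc] = g_new                 # reset the parent link and g value
                parent[suc] = bn
                closed.discard(suc)            # re-open instead of propagating to descendants
                heapq.heappush(open_heap, (g_new + h(suc), next(tie), suc))
    return None
```

With an admissible h (h ≤ h*), the first time a goal node is removed from OPEN its traced path is optimal, consistent with slide 22.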

30 (4) Heuristic Power. The total cost of a heuristic search consists of two parts: (a) the path cost = (path length) x (unit length cost), and (b) the search cost spent finding the solution path. [Figure: costs (a) and (b) plotted against informedness.]

31 (5) Measures of Heuristic Performance. (a) Penetrance, P, of a Search. P measures the extent to which the search has focused toward a goal, rather than wandered off: P = L / T, where L is the length of the path found to the goal and T is the total number of nodes generated during the search (including the goal node but not the start node). Hence P can be considered a measure of search efficiency.

32 (b) Effective Branching Factor, B, of a Search. B is defined by the equation B + B^2 + ... + B^L = T (the total number of nodes), hence T = B(B^L - 1)/(B - 1) and P = L/T = L(B - 1) / (B(B^L - 1)). The assumptions made are: (1) the depth of the search tree equals the length of the path, L; (2) T is the number of nodes generated during the search; (3) B is constant for all nodes in the tree.
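As a small illustration (the values of L and T below are made up), the effective branching factor can be recovered numerically from the defining equation by bisection:

```python
def total_nodes(b, length):
    """T = B + B^2 + ... + B^L for branching factor b and path length L."""
    return sum(b ** i for i in range(1, length + 1))

def effective_branching_factor(t, length, tol=1e-6):
    """Solve B + B^2 + ... + B^L = T for B by bisection (assumes B > 1)."""
    lo, hi = 1.0 + 1e-9, float(t)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total_nodes(mid, length) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical search: a 5-step solution path found after generating 50 nodes.
L_path, T_total = 5, 50
B = effective_branching_factor(T_total, L_path)
print(round(B, 3), "penetrance:", L_path / T_total)   # B is roughly 1.9; P = 5/50 = 0.1
```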

33 Home-Works: 1. Propose two h functions for the traveling salesman problem. Is either of these h functions a lower bound on h*? Which of them would result in a more efficient search? Apply Algorithm A with these h functions to the TSP. 2. Use the evaluation function f(n) = d(n) + w(n) with Algorithm A to search backward from the goal to the start node in the 8-puzzle problem. 3. Discuss ways in which an h function might be improved during a search.

34 §5-2 Game-Playing: AND/OR Graph Search. I. Game-Playing and AND/OR Graphs. Two-person, zero-sum, perfect-information games. Example: Grundy's Game. Two players, Max and Min, have a pile of pennies. The first player, Min, divides the original pile into two piles that must be unequal. Each player alternately thereafter does the same to some single pile when it is his turn to play. The game proceeds until every pile has either just one penny or two. The player who first cannot play is the loser.

35 [Figure: the AND/OR game graph for Grundy's game starting from a pile of 7 pennies with Min to move: 7 → (5,2), (6,1), (4,3) → ... → (2,1,1,1,1,1); the winning path for Max is marked.]

36 From Max's point of view, a win must be obtainable from all of Min's successors and from at least one of Max's successors, so the game graph is an AND/OR graph. In AND/OR graphs there are hyper-arcs connecting a parent node with a set of its successors; the hyper-arcs are called connectors. Each k-connector is directed from a parent node to a set of k successor nodes.

37 II. Features of AND/OR Graph Search. The choice of the node to expand next must depend not only on the f value of that node itself, but also on whether that node is part of the current best path from the initial node. [Figure: an AND/OR graph over nodes A, B, C, D and their successors E-J, labelled with h values, illustrating that the node with the smallest h need not lie on the current best path.]

38 Thus, to search an AND/OR graph, three things must be done at each step: 1) traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded; 2) pick one of these nodes and expand it, add its successors to the graph, and compute f for each of them; 3) change the f value of the newly expanded node to reflect the new information provided by its successors, and propagate this change backward through the graph; at each node visited while going up the graph, decide which of its successor arcs is the most promising and mark it as part of the current best path.

39 [Figure: two snapshots of an AND/OR graph over nodes A, B, C, D and successors E-H, showing the revised cost values after an expansion.] This may cause the current best path to change.

40 Thus, an important feature of AND/OR graph search is that one must search a solution graph each time from the start node to the terminal nodes and must frequently check whether the start node is solvable. Definition of a solvable node in an AND/OR graph: 1) a terminal node is solvable; 2) an OR node is solvable iff at least one of its successors is solvable; 3) an AND node is solvable iff all of its successors are solvable.
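This solvability definition translates directly into a recursive check; the Node representation below (a kind tag plus a successor list) is an assumption for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                                   # "terminal", "AND", or "OR"
    successors: List["Node"] = field(default_factory=list)

def solvable(node: Node) -> bool:
    """1) a terminal node is solvable;
       2) an OR node is solvable iff at least one successor is solvable;
       3) an AND node is solvable iff all of its successors are solvable."""
    if node.kind == "terminal":
        return True
    if node.kind == "OR":
        return any(solvable(s) for s in node.successors)
    return all(solvable(s) for s in node.successors)

# Tiny example: the root is an OR node; one branch is a dead end,
# the other is an AND node whose successors are all terminal.
dead_end = Node("OR")                           # OR node with no successors: unsolvable
good = Node("AND", [Node("terminal"), Node("terminal")])
root = Node("OR", [dead_end, good])
print(solvable(root))                           # True, via the AND branch
```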

41 III. Procedure AO*. (1) Create a search graph, G, consisting solely of the start node, s. Compute h(s). If s is a terminal node, set h(s) = 0 and label s SOLVED. (2) Until s is labeled SOLVED, do: (3) begin (4) compute a partial solution graph, G', in G by tracing down the marked connectors in G from s; (5) select any non-terminal tip node, n, of G'; (6) expand n, generating all its successors, and install these in G as successors of n; for each successor, nj, not already occurring in G, compute h(nj); label SOLVED any of these successors that are terminal nodes.

42 (7) Create a singleton set of nodes, S, consisting just of node n. (8) Until S is empty, do: (9) begin (10) remove from S a node m such that m has no descendants in G occurring in S; (11) revise the cost h(m) for m as follows: for the i-th connector directed from m to a set of nodes {n1, ..., nk}, compute hi(m) = ci + h(n1) + ... + h(nk); set h(m) to the minimum over all outgoing connectors of hi(m) and mark the connector through which this minimum is achieved, erasing the previous marking if it is different.

43 If all of the successors through this connector are labeled SOLVED, then label node m SOLVED. (12) If m has been labeled SOLVED, or if the revised cost of m is different from its previous cost, then add to S all those parents of m such that m is one of their successors through a marked connector. (13) end (14) end

44 IV. Searching the Game Tree: the Minimax Procedure. 1. Localized Solution. It is usually impossible to decide on a best move by searching the entire tree, because of combinatorial explosion. Instead, we must merely try to find a good first move based on a local search that is cut off by artificial termination conditions. After the search is artificially terminated, the estimate of the best first move can be made by applying a static evaluation function to the tips of the search tree.

45 2. Some Conventions. Two-person, zero-sum, complete-information games: (a) the two players are Max and Min; (b) we try to find a winning strategy for Max; (c) Max moves first, and the players alternate thereafter; (d) the top node of a game tree is at depth 0; (e) nodes at even-numbered depths are called Max nodes: it is Max's move next; (f) the artificial termination condition is a depth of search given in advance; (g) game positions favorable to Max cause the evaluation function to have positive values, while positions favorable to Min cause it to have negative values.

46 Rules: (a) if Max were to choose among tip nodes, he would choose the node having the largest evaluation; thus a Max node is assigned a backed-up value equal to the maximum of the evaluations of its tip-node successors. (b) If Min were to choose among tip nodes, he would choose the node having the smallest evaluation; thus a Min node is assigned a backed-up value equal to the minimum of the evaluations of its tip-node successors. (c) After the parents of all tip nodes have been assigned backed-up values, we back up values another level. (d) Continue to back up values, level by level, until all successors of the start node are assigned backed-up values.
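A compact recursive Python sketch of these back-up rules; the game interface (successor generator, static evaluation, depth bound) is parameterised, and the level-by-level backing up described above is expressed recursively, which yields the same backed-up values.

```python
def minimax(position, depth, successors, evaluate, max_to_move=True):
    """Back up static values from the tips of a depth-bounded game tree.
    Max nodes take the maximum of their successors' values, Min nodes the minimum."""
    children = successors(position)
    if depth == 0 or not children:          # artificial termination condition
        return evaluate(position), None
    best_value, best_move = None, None
    for move in children:
        value, _ = minimax(move, depth - 1, successors, evaluate, not max_to_move)
        better = (best_value is None or
                  (max_to_move and value > best_value) or
                  (not max_to_move and value < best_value))
        if better:
            best_value, best_move = value, move
    return best_value, best_move
```

Calling it at the start node with the chosen depth bound returns the backed-up value together with the estimated best first move.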

47 Example: the Tic-Tac-Toe Game. The player who first places his pieces in a straight line in the 3x3 matrix is the winner. Suppose that Max plays X and Min plays O, and that Max plays first. A BFS is conducted with some revisions: the artificial termination condition is a depth bound of 2, and a static evaluation function for position p is defined as e(p) = N(X) - N(O) if p is not a winning position, +∞ if p is a winning position for Max, and -∞ if p is a winning position for Min, where N(X) is the number of complete lines still open for Max and N(O) the number still open for Min.
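A Python sketch of this static evaluation for tic-tac-toe; the board encoding ('X' for Max, 'O' for Min, None for an empty square) and the use of +/- infinity for won positions are assumptions consistent with the definition above.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

WIN_FOR_MAX, WIN_FOR_MIN = float("inf"), float("-inf")

def evaluate(board):
    """e(p): +/- infinity for won positions, otherwise open lines for Max
    minus open lines for Min.  board is a 9-element list of 'X', 'O', or None."""
    for line in LINES:
        cells = [board[i] for i in line]
        if cells == ["X"] * 3:
            return WIN_FOR_MAX
        if cells == ["O"] * 3:
            return WIN_FOR_MIN
    open_for_max = sum(1 for line in LINES
                       if all(board[i] != "O" for i in line))
    open_for_min = sum(1 for line in LINES
                       if all(board[i] != "X" for i in line))
    return open_for_max - open_for_min

# Max ('X') in the centre, Min ('O') in a corner.
board = [None, None, "O",
         None, "X",  None,
         None, None, None]
print(evaluate(board))   # 5 lines open for Max, 4 open for Min -> prints 1
```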

48 The process is assumed to proceed as shown below. [Figure: the first stage of the search, with Max and Min nodes marked; tip positions are evaluated with e(p), e.g. 6-5 = 1, 5-5 = 0, 4-6 = -2, the values are backed up, and the best move is indicated.]

49 [Figure: the second stage of the search; tip evaluations such as 4-3, 3-3, 5-2, and 4-2 are backed up, giving values 1, 0, and 1 at the level above; the best move for Max and another equally good move are indicated.]

50 [Figure: the final stage of the search: Min's choices A, B, C, D, where node A is a winning position for Min (evaluation -∞) and the others have values such as 2-1 and 3-2; the best move is indicated.]

51 4. The α-β Procedure. The minimax procedure needs to be improved: it completely separates the tree-generation process from position evaluation, and only after tree generation is completed does position evaluation begin. This may result in a grossly inefficient strategy. See the last figure: if a tip node were evaluated as soon as it was generated, then after node A is generated and evaluated there would be no need to generate and evaluate nodes B, C, and D, since Min will choose A immediately. We can assign A's parent the backed-up value -∞ and proceed with the search without generating B, C, D and their successors.

52 Another possible saving. Consider the first stage again, and suppose that: -- DFS is employed, with the artificial stopping condition d = 2; -- whenever a tip node is generated, its static evaluation is computed; -- whenever a position can be given a backed-up value, this value is computed. [Figure: the start node (a Max node) with successor nodes A and B; lower and upper bounds on the backed-up values appear as tips with evaluations such as 6-5, 5-5, and 4-6 = -2 are generated.]

53 Consider the situation after node A and all its successors have been generated, but before node B is generated. The backed-up value of the start node is bounded from below by -1: this lower bound is an α value for the start node. [Figure: the start node with its α bound, node A fully evaluated, node B about to be generated.] Next, B and its first successor are generated. The backed-up value of node B is then bounded from above by -2: this upper bound is a β value. Because β < α, we can discontinue the search below B.

54 It is obvious that: (a) the α values of Max nodes can never decrease, and (b) the β values of Min nodes can never increase. Thus we have the rules: (1) search can be discontinued below any Min node having a β value less than or equal to the α value of any of its Max node ancestors; the final backed-up value of this Min node can be set to its β value. (2) Search can be discontinued below any Max node having an α value greater than or equal to the β value of any of its Min node ancestors; the final backed-up value of this Max node can be set to its α value.

55 The α and β values are computed during the search: (1) the α value of a Max node is set equal to the current largest final backed-up value of its successors; (2) the β value of a Min node is set equal to the current smallest final backed-up value of its successors. When search is discontinued under rule (1), we call it an α-cutoff; under rule (2), a β-cutoff. The procedure terminates when all successors of the start node have been given final backed-up values, and the best first move is then the one creating the successor with the highest backed-up value.
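A recursive Python sketch of the α-β procedure as described: α is the running lower bound from Max ancestors, β the running upper bound from Min ancestors, and search below a node is discontinued as soon as α ≥ β. The game interface is parameterised as in the minimax sketch; this is a sketch, not the slide's exact bookkeeping.

```python
import math

def alpha_beta(position, depth, successors, evaluate,
               alpha=-math.inf, beta=math.inf, max_to_move=True):
    """Minimax with alpha-beta cutoffs.  Returns the backed-up value."""
    children = successors(position)
    if depth == 0 or not children:           # tip node: use the static evaluation
        return evaluate(position)
    if max_to_move:
        value = -math.inf
        for move in children:
            value = max(value, alpha_beta(move, depth - 1, successors, evaluate,
                                          alpha, beta, False))
            alpha = max(alpha, value)        # alpha never decreases at a Max node
            if alpha >= beta:                # beta-cutoff below this Max node (rule 2)
                break
        return value
    else:
        value = math.inf
        for move in children:
            value = min(value, alpha_beta(move, depth - 1, successors, evaluate,
                                          alpha, beta, True))
            beta = min(beta, value)          # beta never increases at a Min node
            if alpha >= beta:                # alpha-cutoff below this Min node (rule 1)
                break
        return value
```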

56 Employing the α-β procedure always results in finding a move that is exactly as good as the move that would have been found by the simple minimax method searching to the same depth. The only difference is that the α-β procedure usually finds a best first move with much less search. Efficiency of the α-β Procedure: in order to perform the α-β procedure, at least some part of the search tree must be generated to maximum depth, because α and β values must be based on the static values of tip nodes. Hence DFS is usually used.

57 The final backed-up value of the start node is identical to the static value of one of the tip nodes. If this tip node could be reached first in a DFS, the number of cutoffs would be maximal. Suppose that a tree has depth D and every node (except the tip nodes) has exactly B successors; such a tree has precisely B^D tip nodes. Suppose further that the α-β procedure ideally generates successors in the order of their true backed-up values: the lowest-valued successor first for a Min node and the largest-valued successor first for a Max node. Then the number of tip nodes N needed under these ideal cutoffs is given by

58 N = 2B^(D/2) - 1 for even D, and N = B^((D+1)/2) + B^((D-1)/2) - 1 for odd D, which is about the same as the number of tip nodes that would have been generated at depth D/2 without cutoffs.
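A tiny numeric illustration (the branching factor and depth below are made-up values) comparing the tip-node counts with and without ideal α-β ordering:

```python
def tips_without_cutoff(b, d):
    """All B^D tip nodes of a uniform tree of depth D."""
    return b ** d

def tips_with_ideal_cutoff(b, d):
    """Minimum number of tip nodes examined under perfect move ordering."""
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

# Hypothetical tree: branching factor 5, depth 4.
print(tips_without_cutoff(5, 4), tips_with_ideal_cutoff(5, 4))   # 625 vs 49
```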

59 Exercises. 1. The game of Nim is played as follows: two players alternate in removing one, two, or three pennies from a stack initially containing five pennies. The player who picks up the last penny loses. Show, by drawing the game graph, that the player who has the second move can always win. Can you think of a simple characterization of the winning strategy?

