Lecture Notes on AI-NN Chapter 5 Information Processing & Utilization.

Categories of Information Processing:
-- Problem-Solving
-- Game-Playing
-- Theorem-Proving
-- Logic-Deduction

Section 5.1 Problem-Solving

§5-1 Introduction

Description of a problem:
-- Problem defining: the start and goal conditions
-- Rule defining: a set of IF-THEN rules
-- Strategy finding: controlling the application of the rules

Example: The Water Jug Problem
Initial Base: There are 2 jugs, a 4-kilo jug and a 3-kilo jug. Neither has any measurement marks on it.
Rule Base: (1) There is a pump that can be used to fill either jug with water, or (2) you can pour water from a jug onto the ground or into the other jug.
Question: How can you get exactly 2 kilos of water into the 4-kilo jug?

Representation and Solution: A state is a pair (x, y), where x is the kilos of water in the 4-kilo jug and y the kilos in the 3-kilo jug. One solution path is
(0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0),
applying the fill, pour, and empty rules in turn.

It is clear that the Production System is a suitable means of representation for Problem-Solving.
Procedure PRODUCTION
1. DATA ← initial database
2. Until DATA satisfies the termination condition, do:
   i) begin
   ii) select some rule, R, in the set of rules that can be applied to DATA
   iii) DATA ← result of applying R to DATA
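As a concrete illustration, the PRODUCTION loop can be sketched in Python for the water jug problem. This is a sketch, not part of the original notes: the rule base is folded into a hypothetical `successors` function, and breadth-first rule selection stands in for step ii).

```python
from collections import deque

def water_jug_bfs():
    """Search the water-jug state space: a state is (x, y) with
    x = kilos in the 4-kilo jug, y = kilos in the 3-kilo jug."""
    start, goal = (0, 0), 2          # goal: exactly 2 kilos in the 4-kilo jug

    def successors(state):
        x, y = state
        results = {(4, y), (x, 3),   # fill either jug from the pump
                   (0, y), (x, 0)}   # empty either jug onto the ground
        pour = min(x, 3 - y)         # pour 4-kilo jug into 3-kilo jug
        results.add((x - pour, y + pour))
        pour = min(y, 4 - x)         # pour 3-kilo jug into 4-kilo jug
        results.add((x + pour, y - pour))
        results.discard(state)
        return results

    frontier = deque([[start]])      # FIFO rule selection = breadth-first
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal:      # termination condition on DATA
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Since the rules are tried blindly, this is exactly the "search process" characterization given below.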

In most AI applications, the information available to the control strategy is usually not sufficient to permit selection of the most appropriate rule at every stage. The operation of an AI production system can thus be characterized as a SEARCH PROCESS in which rules are tried until some sequence of them is found that produces a database satisfying the termination condition. Further, if the database of the problem to be solved is represented by means of a graph, the search is called GRAPH-SEARCH.

Procedure GRAPH-SEARCH
1. Create a search graph, G, consisting solely of the start node, S. Put S on OPEN (OPEN: a list of nodes just generated but not yet examined).
2. Create a list, CLOSED, that is initially empty. (CLOSED is a list of nodes already examined.)
3. LOOP: if OPEN is empty, exit with failure.
4. Select the first node on OPEN and remove it from OPEN to CLOSED. Call it node n.
5. Examine n: if n is a goal node, exit with success. The solution is obtained by tracing a path along the pointers from n back to S in G.

6. Expand n (apply a rule to n), generating the set, M, of its successors that are not ancestors of n. Install these members of M as successors of n.
7. Establish a pointer to n from those members of M that were not already on either OPEN or CLOSED. Add these members of M to OPEN. For each member of M that was already on OPEN or CLOSED, decide whether or not to redirect its pointer to n. For each member of M already on CLOSED, decide for each of its descendants in G whether or not to redirect its pointer.
8. Reorder the list OPEN according to a certain rule.
9. Go LOOP.
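The control skeleton of GRAPH-SEARCH can be sketched as follows. The `reorder` parameter is a hypothetical stand-in for the ordering rule of step 8 (which determines the search strategy), and the pointer-redirection bookkeeping of step 7 is omitted for brevity.

```python
def graph_search(start, is_goal, expand, reorder=lambda open_list: open_list):
    """GRAPH-SEARCH skeleton: OPEN holds generated-but-unexamined nodes,
    CLOSED holds examined ones, and parent pointers recover the path."""
    open_list = [start]
    parent = {start: None}
    closed = set()
    while open_list:                       # step 3: fail when OPEN is empty
        n = open_list.pop(0)               # step 4: first node on OPEN
        closed.add(n)
        if is_goal(n):                     # step 5: trace pointers back to s
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        for m in expand(n):                # step 6: successors of n
            if m not in parent:            # not yet on OPEN or CLOSED
                parent[m] = n              # step 7: pointer from m to n
                open_list.append(m)
        open_list = reorder(open_list)     # step 8: strategy-dependent order
    return None
```

With the default `reorder` (leave OPEN as-is, append successors at the end) this behaves breadth-first; plugging in other orderings gives the strategies discussed below.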

[Figure: a search graph with the start node S, nodes on CLOSED, nodes on OPEN, and the node n whose pointers need to be redirected.]

The crucial factor in the search process is the ordering regulation that determines the fashion in which nodes are selected for expansion next. The search efficiency depends on how well problem information is used in node selection. According to the use of problem information in node selection, search strategies can be divided into: a) Blind Search, and b) Heuristic Search.

§5-1-1 Blind Search on Tree
1) Breadth-First Search. Node ordering: FIFO.
Procedure BFS
1. Put start node s on OPEN. Set pointer P=0.
2. If OPEN=NIL, exit with failure.
3. Select the first node on OPEN. Put it on CLOSED and call it node n.
4. If n=g, exit with success. The solution is obtained by tracing a path along the pointers from g to s in G.
5. Expand n. If it has no successors, go to step 2. Otherwise, generate all its successors, add them successively to the end of OPEN, and establish pointers from them to n; go to step 2.

Example: BFS. [Figure: a search tree from start node S to goal node g; the shortest solution path is marked. See Nilsson p.71, Fu p.37.]

Comments on BFS: It is guaranteed to find an optimal solution because of its systematic search feature. The major weakness of BFS is its inability to use information related to the problem, and thus: a) it requires a large memory to store the great number of nodes; b) it requires a great amount of work to examine the great number of nodes; c) as a result, BFS has low search efficiency.

§5-1-2 Depth-First Search. Node ordering: LIFO.
Procedure DFS (with depth bound d)
1. Put start node s on OPEN. Set d(s)=0, P=0.
2. If OPEN=NIL, exit with failure.
3. Select the first node on OPEN. Put it on CLOSED. Call it node n.
4. If n=g, exit with success.
5. If d(n)=d, go to step 2.
6. If n is not expandable, go to step 2.
7. Expand node n, generating all its successors. Establish pointers from them to n. Let d(successor)=d(n)+1. Add them to the front of OPEN in any order, then go to step 2.
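A minimal sketch of the depth-bounded DFS in Python (function and parameter names are illustrative, not from the notes):

```python
def depth_first_search(start, is_goal, expand, depth_bound):
    """Depth-bounded DFS: OPEN is used as a stack (LIFO) and nodes
    at depth d(n) = depth_bound are not expanded (step 5)."""
    open_list = [(start, 0, (start,))]     # (node, depth d(n), path so far)
    while open_list:
        n, d, path = open_list.pop(0)      # first node on OPEN
        if is_goal(n):
            return list(path)
        if d == depth_bound:               # step 5: depth bound reached
            continue
        succ = [(m, d + 1, path + (m,)) for m in expand(n)]
        open_list = succ + open_list       # step 7: add to the FRONT of OPEN
    return None
```

Note how a too-small `depth_bound` misses the goal entirely, matching feature 1) below.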

Example: DFS with depth bound d. [Figure: a search tree from start node S; the solution path is marked. See Nilsson p.70, Fu p.42.]

Compared with BFS, DFS has the following features: 1) If d is too small, the goal node may be missed; if too large, a greater amount of storage is needed. 2) DFS may find the goal faster than BFS, while the solution path found may not be the shortest one if there is more than one goal node. 3) DFS can often be carried out using a reasonably small amount of storage.

§5-1-3 Informed (Heuristic) Search on Tree
(1) General Remarks
-- The weakness of blind search: it ignores the information associated with the problem when selecting the node to expand next.
-- Solution: try to use heuristic information in ordering nodes on OPEN -- Heuristic Search.
-- The heuristic information is used in terms of an Evaluation Function, f(.):
f(n): node n → real number,
mapping each node to a measure of its promise.

For any node n on a tree, let
g*(n) be the actual cost of a minimal-cost path from s to n,
h*(n) be the cost of a minimal-cost path from n to g, and
f*(n) = g*(n) + h*(n) be the cost of an optimal path from s to g constrained to go through n.
Let g be an estimate of g*, h be an estimate of h*, and
f(n) = g(n) + h(n)
be an estimate of f*(n), which can be used as an evaluation function for ordering nodes on OPEN.

Practically, g(n) is the sum of the arc costs encountered while tracing the pointers from n back to s, and h(n) is a function based on heuristic information from the problem, hence called the Heuristic Function. The practical regulation is: if h(n) is very high, node n may be ignored; if h(n) is low, node n may be chosen to expand next.

(2) Algorithm A and Algorithm A* on Tree
Algorithm A is a special tree search that uses the evaluation function f(n) to order nodes on OPEN, always selecting for expansion the node with the lowest value of f(n). The key to Algorithm A is the setting of h and g:
-- h=0, g=d: it reduces to BFS;
-- h=0, g=0: random search;
-- h=1/d, g=0: DFS;
-- h>h*: the optimal path may be lost;
-- h≤h*: some search may be redundant, but the optimal path can be found.
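Under these definitions, Algorithm A can be sketched with a priority queue ordered by f(n) = g(n) + h(n). The helper names (`expand` yielding (successor, arc-cost) pairs, `h` as the heuristic) are assumptions of this sketch.

```python
import heapq

def algorithm_a(start, is_goal, expand, h):
    """Algorithm A: always expand the OPEN node with the lowest
    f(n) = g(n) + h(n)."""
    open_heap = [(h(start), 0, start, (start,))]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if is_goal(n):
            return list(path), g
        for m, cost in expand(n):
            g2 = g + cost                          # g(m) via this path
            if g2 < best_g.get(m, float('inf')):   # better path to m found
                best_g[m] = g2
                heapq.heappush(open_heap, (g2 + h(m), g2, m, path + (m,)))
    return None, float('inf')
```

With h = 0 everywhere this degenerates to uniform-cost search, consistent with the settings listed above; with an admissible h it is an Algorithm A*.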

Algorithm A with h(n) ≤ h*(n) for all n is Algorithm A*. Thus BFS is an Algorithm A*, and A* can always find a minimal-length path to a goal.
Informedness of Algorithm A*:
A1*: f1(n) = g1(n) + h1(n)
A2*: f2(n) = g2(n) + h2(n)
with h1(n) ≤ h*(n) and h2(n) ≤ h*(n).
A2* is more informed than A1* iff h*(n) ≥ h2(n) > h1(n) for all non-goal nodes n.

Example of A*: the 8-Puzzle Problem
Let f(n) = g(n) + h(n) = d(n) + w(n), where
d(n): depth of node n in the search tree;
w(n): number of misplaced digits at node n.
[Figure: the 8-puzzle search tree with f values (e.g., 0+4=4, 1+3=4, 2+3=5, ..., 5+0=5) marked at each node; 13 out of 27 nodes.]

Algorithm A* with h(n) = w(n) is more informed than BFS, which uses h(n) = 0. Since h(n) = w(n) is a lower bound on h*(n), the exact number of steps needed to reach the goal, it is an Algorithm A*. However, w(n) does not provide a good enough estimate of h*(n): the information about the ordering of the digits is not utilized in w(n).

A better estimate of h*(n) is h(n) = P(n) + 3S(n), where
P(n): the sum of the distances that each digit is from home;
S(n): a sequence score obtained by checking the non-central digits in turn, allotting 2 for every digit that is not followed by its proper successor, 1 for a digit in the center, and 0 for every other digit.
E.g., for the start board s and goal g shown in the figure, we have
P(s) = (3x1) + (3x2) + (1x3) + (1x0) = 12, grouping the digits as (1,2,5), (3,4,8), (6), (7);
S(s) = 8x2 = 16.
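The two heuristics w(n) and P(n) can be sketched in Python. The goal layout below (1-2-3 / 8-blank-4 / 7-6-5) is the one conventionally used with this example and is an assumption of the sketch, as is the omission of the sequence score S(n).

```python
GOAL = (1, 2, 3,
        8, 0, 4,          # 0 marks the blank; goal layout is assumed
        7, 6, 5)

def w(board):
    """w(n): number of misplaced digits (the blank is not counted)."""
    return sum(1 for tile, goal in zip(board, GOAL)
               if tile != 0 and tile != goal)

def p(board):
    """P(n): sum of city-block distances of each digit from its home square."""
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        j = GOAL.index(tile)                       # home square of this digit
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are zero exactly at the goal; w(n) ≤ p(n) on every board, since each misplaced digit is at distance at least 1 from home.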

By using this h(n), we have f(n) = g(n) + P(n) + 3S(n) with g(n) = d(n), and on the above problem the algorithm finds the same optimal path with fewer nodes expanded. [Figure: the resulting search tree, with only 13 nodes generated.]

Since 0 < w(n) ≤ h*(n) but P(n) + 3S(n) may exceed h*(n), the solution path found happens to be a minimal path, although we were not guaranteed to find an optimal one.
Summary: From the example above, we can see that the key in heuristic search is to determine the form of the evaluation function f(n) by utilizing heuristic information. As is seen, the crucial difference between blind search and heuristic search is the ordering regulation: in heuristic search, the node with the smallest value of the evaluation function is chosen for expansion first.

(3) Algorithm A* for Graph Search
1. Put s on OPEN. Set g(s)=0, f(s)=h(s), P=0, CLOSED=NIL.
2. If OPEN=NIL, exit with failure.
3. Take the first node on OPEN, call it Best-Node (BN), and put BN on CLOSED.
4. If BN=g, exit with success.
5. If BN is not expandable, go to step 2.
6. Expand BN, generating its successors (SUCs), and for each SUC do:
(1) Set the pointer P: SUC → BN.
(2) Compute g(SUC) = g(BN) + g(BN, SUC), where g(BN, SUC) is the cost of the arc from BN to SUC.
(3) If SUC is an old node (OLD) on OPEN, add OLD to the list of BN's successors.

If g(SUC) < g(OLD), reset OLD's parent link to point to BN and record g(SUC) in place of g(OLD); if g(SUC) ≥ g(OLD), do nothing.
(4) If SUC is an old node (OLD) on CLOSED, add OLD to the list of BN's successors and do the same as in step 6(3), setting the parent link and the g and f values appropriately. However, if g(SUC) < g(OLD), the improvement must be propagated to OLD's successors.
7. Go to step 2.

(4) Heuristic Power
The total cost of heuristic search consists of two parts:
(a) path cost = (path length) x (unit length cost);
(b) search cost, spent searching for the solution path.
[Figure: the two costs (a) and (b) plotted against the informedness of the search.]

(5) Measures of Heuristic Performance
(a) Penetrance, P, of a search.
P is the extent to which the search has focused toward a goal, rather than wandered off:
P = L / T,
where L is the length of the path found to the goal and T is the total number of nodes generated during the search (including the goal node but not the start node). Hence P can be considered a measure of search efficiency.

(b) Effective Branching Factor, B, of a search.
B is defined by the equation
B + B^2 + … + B^L = T (the total number of nodes),
hence
T = B(B^L - 1)/(B - 1)
and
P = L/T = L(B - 1)/(B(B^L - 1)).
The assumptions made are: (1) the depth of the search tree equals the length of the path, L; (2) T is the number of nodes generated during the search; (3) B is constant for all nodes in the tree.
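These definitions can be checked numerically. Solving the defining equation for B has no closed form in general, so the sketch below uses bisection; this implementation choice is not from the notes.

```python
def penetrance(L, T):
    """Penetrance P = L / T."""
    return L / T

def effective_branching_factor(L, T, tol=1e-9):
    """Solve B + B**2 + ... + B**L = T for B by bisection.
    The left-hand side is strictly increasing in B, so a unique
    root lies between 1 and T."""
    total = lambda b: sum(b ** i for i in range(1, L + 1))
    lo, hi = 1.0 + 1e-12, float(T)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < T:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a search that finds a path of length L = 3 after generating T = 14 nodes has B = 2, since 2 + 4 + 8 = 14.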

Home-Works
1. Propose two h functions for the traveling salesman problem. Is either of these h functions a lower bound on h*? Which of them would result in more efficient search? Apply Algorithm A with these h functions to the TSP.
2. Use the evaluation function f(n) = d(n) + w(n) with Algorithm A to search backward from the goal to the start node in the 8-puzzle problem.
3. Discuss ways in which an h function might be improved during a search.

§5-2 Game-Playing: AND/OR Graph Search
I. Game-Playing and AND/OR Graphs
Two-person, zero-sum, perfect-information games.
Example: Grundy's Game. Two players, Max and Min, have a pile of pennies. The first player, Min, divides the original pile into two piles that must be unequal. Each player thereafter alternately does the same to some single pile when it is his turn to play. The game proceeds until every pile has either just one penny or two. The player who first cannot play is the loser.

[AND/OR graph of Grundy's Game for 7 pennies, level by level, with the winning path for Max marked:]
7 (Min to move)
5,2 · 6,1 · 4,3 (Max to move)
5,1,1 · 4,2,1 · 3,2,2 · 3,3,1 (Min to move)
3,2,1,1 · 4,1,1,1 · 2,2,2,1 (Max to move)
2,2,1,1,1 · 3,1,1,1,1 (Min to move)
2,1,1,1,1,1 (Max to move)

From Max's point of view, a win must be obtainable from all of Min's successors and from at least one of Max's successors. It is an AND/OR graph. In AND/OR graphs, there are hyper-arcs connecting a parent node with a set of its successors. The hyper-arcs are called connectors. Each k-connector is directed from a parent node to a set of k successor nodes.

II. Features of AND/OR Graph Search
The choice of the node to expand next must depend not only on the f value of that node itself, but also on whether that node is part of the current best path from the initial node. [Figure: an AND/OR graph with nodes A, B, C, D (and tip nodes E through J) and h values illustrating this point.]

Thus, to search an AND/OR graph, three things need to be done at each step:
1) Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes on that path that have not yet been expanded.
2) Pick one of these nodes and expand it. Add its successors to the graph and compute f for each of them.
3) Change the f value of the newly expanded node to reflect the new information provided by its successors, and propagate this change backward through the graph. At each node visited while going up the graph, decide which of its successor arcs is the most promising and mark it as part of the current best path.

[Figure: successive snapshots of an AND/OR graph (nodes A, B, C, D, E, F, G, H) as expansion revises the f values along the marked path.] This may cause the current best path to change.

Thus, an important feature of AND/OR graph search is that one must search a solution graph each time from the start node to the terminal nodes and needs to frequently check whether the start node is solvable. The definition of a solvable node in an AND/OR graph:
1) A terminal node is solvable;
2) An OR node is solvable iff at least one of its successors is solvable;
3) An AND node is solvable iff all of its successors are solvable.

III. Procedure AO*
(1) Create a search graph, G, consisting solely of the start node, s. Compute h(s). If s is a terminal node, set h(s)=0 and label s SOLVED.
(2) Until s is labeled SOLVED, do:
(3) Begin
(4) Compute a partial solution graph, G', in G by tracing down the marked connectors in G from s.
(5) Select any non-terminal tip node, n, of G'.
(6) Expand n, generating all its successors, and install these in G as successors of n. For each successor, n_j, not already occurring in G, compute h(n_j). Label SOLVED any of these successors that are terminal nodes.

(7) Create a singleton set of nodes, S, consisting just of node n.
(8) Until S is empty, do:
(9) Begin
(10) Remove from S a node m such that m has no descendants in G occurring in S.
(11) Revise the cost h(m) for m as follows: for each connector i directed from m to a set of nodes {n_1i, …, n_ki}, compute h_i(m) = c_i + h(n_1i) + … + h(n_ki). Set h(m) to the minimum over all outgoing connectors of h_i(m), and mark the connector through which this minimum is achieved, erasing the previous marking if different.

If all of the successors through this connector are labeled SOLVED, then label node m SOLVED.
(12) If m has been labeled SOLVED, or if the revised cost of m differs from its just-previous cost, then add to S all those parents of m such that m is one of their successors through a marked connector.
(13) End
(14) End

IV. Searching the Game Tree: The MinMax Procedure
1. Localized Solution
It is usually impossible to decide on a best move based on a search of the entire tree, due to combinatorial explosion. Instead, we must merely try to find a good first move based on a local search that is delimited by artificial termination conditions. After the search is artificially terminated, the estimate of the best first move can be made by applying a static evaluation function to the tips of the search tree.

2. Some Conventions
Two-person, zero-sum, complete-information games:
(a) The 2 players are Max and Min.
(b) We try to find a winning strategy for Max.
(c) Max moves first, and the players alternate thereafter.
(d) The top node of a game tree is of depth 0.
(e) Nodes at even-numbered depths are called Max nodes, in which it is Max's move next.
(f) The artificial termination condition is a certain depth of search, given in advance.
(g) Game positions favorable to Max cause the evaluation function to have positive values, while positions favorable to Min cause it to have negative values.

Rules:
(a) If Max were to choose among tip nodes, he would choose the node having the largest evaluation. Thus a Max node is assigned a backed-up value equal to the maximum of the evaluations of its tip nodes.
(b) If Min were to choose among tip nodes, he would choose the node having the smallest evaluation. Thus a Min node is assigned a backed-up value equal to the minimum of the evaluations of its tip nodes.
(c) After the parents of all tip nodes have been assigned backed-up values, we back up values another level.
(d) Continue to back up values, level by level, until all successors of the start node are assigned backed-up values.
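Rules (a)-(d) amount to the following recursive sketch (function and parameter names are illustrative):

```python
def minimax(node, depth, is_max, evaluate, successors):
    """Back up static values from the tips: Max nodes take the
    maximum of their successors' values, Min nodes the minimum."""
    succ = successors(node)
    if depth == 0 or not succ:             # artificial termination condition
        return evaluate(node)              # static evaluation at a tip
    values = [minimax(s, depth - 1, not is_max, evaluate, successors)
              for s in succ]
    return max(values) if is_max else min(values)
```

The best first move for Max is the successor of the start node with the highest backed-up value.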

Example: the Tic-Tac-Toe Game. The player who first places his pieces in a straight line in the matrix is the winner. Suppose that Max plays × while Min plays ○, and Max plays first. A BFS is conducted with some revisions:
-- artificial termination condition: depth bound = 2;
-- a static evaluation function for a position p is defined:
e(p) = N(×) - N(○), if p is not a winning position;
e(p) = +∞, if p is a winning position for Max;
e(p) = -∞, if p is a winning position for Min,
where N(×) is the number of complete lines still open for Max and N(○) the number still open for Min.

The process is assumed to be as shown below. [Figure: the first stage of the search, with Max nodes, Min nodes, and the best move marked.]

[Figure: the second stage of the search, showing the best move for Max and another best move.]

[Figure: the third stage of the search, showing the best move; among Min's options, node A is a win for Min and evaluates to -∞; its siblings are B, C, D.]

4. The α-β Procedure
The MinMax procedure needs improvement: it completely separates the tree-generation process from position evaluation. Only after tree generation is completed does position evaluation begin. This may result in a grossly inefficient strategy. See the last figure: if a tip node is evaluated as soon as it is generated, then after node A is generated and evaluated, there is no need to generate and evaluate nodes B, C, and D; Min will choose A immediately. We can assign A's parent the backed-up value of -∞ and proceed with the search without generating B, C, D and their successors.

Another Possible Saving
Consider the first step. Suppose that:
-- DFS is employed, with an artificial stopping condition of a fixed depth bound d;
-- whenever a tip node is generated, its evaluation is computed;
-- whenever a position can be given a backed-up value, this value is computed.
[Figure: the start node (Max) with a lower bound, and nodes A and B.]

Consider the situation after node A and all its successors have been generated, but before node B is generated. The backed-up value of the start node is bounded from below by -1: the lower bound, an α value, for the start node. Next, B and its first successor are generated. The backed-up value of node B is bounded from above by -2: an upper bound, a β value. Because β < α, we can discontinue search below B. [Figure: the start node (Max) with lower bound -1, and node B with upper bound -2.]

It is obvious that:
(a) the α values of Max nodes can never decrease, and
(b) the β values of Min nodes can never increase.
Thus we have the rules:
(1) Search can be discontinued below any Min node having a β value not greater than the α value of any of its Max node ancestors. The final backed-up value of this Min node can be set to its β value.
(2) Search can be discontinued below any Max node having an α value not less than the β value of any of its Min node ancestors. The final backed-up value of this Max node can be set to its α value.

The α and β values are computed during the search:
(1) the α value of a Max node is set equal to the current largest final backed-up value of its successors;
(2) the β value of a Min node is set equal to the current smallest final backed-up value of its successors.
When search is discontinued under rule (1), we call it an α-cutoff; under rule (2), a β-cutoff. The procedure terminates when all successors of the start node have been given final backed-up values, and the best first move is then the one creating the successor with the highest backed-up value.
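Putting the two cutoff rules together gives the standard recursive sketch below; the notes describe the same idea procedurally, and the names here are illustrative.

```python
def alpha_beta(node, depth, alpha, beta, is_max, evaluate, successors):
    """MinMax with alpha-beta cutoffs: alpha never decreases at Max
    nodes, beta never increases at Min nodes, and search below a node
    is discontinued as soon as alpha >= beta."""
    succ = successors(node)
    if depth == 0 or not succ:             # artificial termination condition
        return evaluate(node)
    if is_max:
        value = float('-inf')
        for s in succ:
            value = max(value, alpha_beta(s, depth - 1, alpha, beta,
                                          False, evaluate, successors))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff below this Max node
                break
        return value
    value = float('inf')
    for s in succ:
        value = min(value, alpha_beta(s, depth - 1, alpha, beta,
                                      True, evaluate, successors))
        beta = min(beta, value)
        if alpha >= beta:                  # cutoff below this Min node
            break
    return value
```

Called at the start node with alpha = -∞ and beta = +∞, it returns the same backed-up value as plain MinMax to the same depth.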

Employing the α-β procedure always results in finding a move that is equally as good as the move that would have been found by the simple MinMax method searching to the same depth. The only difference is that the α-β procedure finds a best first move usually after much less search.
Efficiency of the α-β Procedure
In order to perform the α-β procedure, at least some part of the search tree must be generated to maximum depth, because α and β values must be based on the static values of tip nodes. Hence, DFS is usually used.

The final backed-up value of the start node is identical to the static value of one of the tip nodes. If this tip node could be reached first in a DFS, the number of cutoffs would be maximal. Suppose that a tree has depth D, and every node (except tip nodes) has exactly B successors. Such a tree will have precisely B^D tip nodes. Suppose further that an α-β procedure ideally generates successors in the order of their true backed-up values -- the lowest-valued successor first for Min nodes and the largest-valued successor first for Max nodes. Then the number of tip nodes, N, needed for the ideal cutoffs is given by

N = 2B^(D/2) - 1, for even D;
N = B^((D+1)/2) + B^((D-1)/2) - 1, for odd D,
which is about the same as the number of tip nodes that would have been generated at depth D/2 without cutoffs.
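The formula is easy to evaluate; for instance, with B = 3 and D = 4, ideal ordering examines 2·3^2 - 1 = 17 tip nodes instead of the full 3^4 = 81 (the function name below is illustrative):

```python
def ideal_tip_nodes(B, D):
    """Tip nodes examined under ideal (perfectly ordered) alpha-beta
    search of a uniform tree with branching factor B and depth D."""
    if D % 2 == 0:
        return 2 * B ** (D // 2) - 1
    return B ** ((D + 1) // 2) + B ** ((D - 1) // 2) - 1
```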

Exercises
1. The game of Nim is played as follows: two players alternate in removing one, two, or three pennies from a stack initially containing five pennies. The player who picks up the last penny loses. Show, by drawing the game graph, that the player who has the second move can always win. Can you think of a simple characterization of the winning strategy?