CS344: Introduction to Artificial Intelligence. Pushpak Bhattacharyya, CSE Dept., IIT Bombay. Lecture 17 – Theorems in A* (admissibility, better performance of a more informed heuristic, effect of the Monotone Restriction or Triangular Inequality). [Main Ref: Principles of Artificial Intelligence by N. J. Nilsson]

General Graph Search Algorithm. [Figure: example graph G = (V, E) over nodes S, A, B, C, D, E, F, G; S is the start node and G the goal; edge costs as used in the trace below.]

Trace on the example graph (each OL entry is node (parent, cost)):
1) Open List: S (Ø, 0); Closed List: Ø
2) OL: A (S,1), B (S,3), C (S,10); CL: S
3) OL: B (S,3), C (S,10), D (A,6); CL: S, A
4) OL: C (S,10), D (A,6), E (B,7); CL: S, A, B
5) OL: D (A,6), E (B,7); CL: S, A, B, C
6) OL: E (B,7), F (D,8), G (D,9); CL: S, A, B, C, D
7) OL: F (D,8), G (D,9); CL: S, A, B, C, D, E
8) OL: G (D,9); CL: S, A, B, C, D, E, F
9) OL: Ø; CL: S, A, B, C, D, E, F, G

Steps of GGS (Principles of AI, Nilsson):
1. Create a search graph G, consisting solely of the start node S; put S on a list called OPEN.
2. Create a list called CLOSED that is initially empty.
3. LOOP: if OPEN is empty, exit with failure.
4. Select the first node on OPEN, remove it from OPEN and put it on CLOSED; call this node n.
5. If n is the goal node, exit with the solution obtained by tracing a path along the pointers from n to S in G (pointers are established in step 7).
6. Expand node n, generating the set M of its successors that are not ancestors of n. Install these members of M as successors of n in G.

GGS steps (contd.):
7. Establish a pointer to n from those members of M that were not already in G (i.e., not already on either OPEN or CLOSED). Add these members of M to OPEN. For each member of M that was already on OPEN or CLOSED, decide whether or not to redirect its pointer to n. For each member of M already on CLOSED, decide for each of its descendants in G whether or not to redirect its pointer.
8. Reorder the list OPEN using some strategy.
9. Go to LOOP.
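A minimal Python sketch of this graph-search skeleton, assuming a successors(n) callback that returns (child, edge_cost) pairs and a reorder_open hook for the step-8 strategy; these names are illustrative, and the pointer redirection of step 7 is omitted for brevity.

# Minimal sketch of the GGS skeleton above (illustrative; not the lecture's code).
def ggs(start, is_goal, successors, reorder_open):
    open_list = [start]                 # step 1: OPEN starts with S
    closed = set()                      # step 2: CLOSED starts empty
    parent = {start: None}              # pointers established in step 7
    while True:
        if not open_list:               # step 3: OPEN empty => failure
            return None
        n = open_list.pop(0)            # step 4: take the first node on OPEN
        closed.add(n)
        if is_goal(n):                  # step 5: trace pointers back from n to S
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        for child, _cost in successors(n):       # step 6: expand n
            if child not in parent:              # step 7: child not yet in G
                parent[child] = n
                open_list.append(child)
            # (redirection of pointers for nodes already on OPEN/CLOSED omitted)
        open_list = reorder_open(open_list)      # step 8: reorder OPEN
        # step 9: go to LOOP

With reorder_open as the identity function, OPEN behaves as a FIFO queue (breadth-first behaviour); the A* sketch given later instead orders OPEN by f(n) = g(n) + h(n).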

Algorithm A: a function f is maintained with each node, f(n) = g(n) + h(n), where n is a node on the open list. The node chosen for expansion is the one with the least f value.

Algorithm A*: one of the most important advances in AI. g(n) = cost of the least-cost path from S to n found so far; h(n) <= h*(n), where h*(n) is the actual cost of the optimal path from n to the goal G (node to be found). [Figure: path S → n → G, with g(n) the cost from S to n and h(n) the estimate from n to G.] "Optimism leads to optimality."
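A compact, runnable sketch consistent with these definitions: OPEN is a priority queue ordered by f(n) = g(n) + h(n). All names are illustrative; the heuristic h is assumed to be monotone (see the Monotone Restriction slides later), so nodes on CLOSED never need to be reopened.

import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    # A* sketch: successors(n) yields (child, edge_cost) pairs; h is assumed monotone.
    tie = count()                                   # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), start)]      # OPEN, ordered by f(n) = g(n) + h(n)
    g = {start: 0}                                  # g(n): least-cost path from start found so far
    parent = {start: None}
    closed = set()                                  # CLOSED
    while open_heap:
        _, _, n = heapq.heappop(open_heap)
        if n in closed:
            continue                                # stale entry: n was already expanded more cheaply
        closed.add(n)
        if is_goal(n):                              # goal chosen for expansion => path found is optimal
            path, m = [], n
            while m is not None:
                path.append(m)
                m = parent[m]
            return list(reversed(path)), g[n]
        for child, cost in successors(n):
            new_g = g[n] + cost
            if child not in g or new_g < g[child]:
                g[child] = new_g                    # cheaper path found: redirect parent pointer
                parent[child] = n
                heapq.heappush(open_heap, (new_g + h(child), next(tie), child))
    return None, float("inf")                       # OPEN exhausted: no path to the goal

Calling a_star with h = lambda n: 0 reduces it to uniform-cost (Dijkstra-style) search, the weakest admissible choice of heuristic.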

A* Algorithm: Properties. Admissibility: an algorithm is called admissible if it always terminates, and terminates with an optimal path. Theorem: A* is admissible. Lemma: at any time before A* terminates there exists on OL a node n such that f(n) <= f*(s). Observation: for an optimal path s → n_1 → n_2 → … → g, (1) h*(g) = 0, g*(s) = 0, and (2) f*(s) = f*(n_1) = f*(n_2) = f*(n_3) = … = f*(g).

A* Properties (contd.): f*(n_i) = f*(s), for n_i ≠ s and n_i ≠ g. The following set of equations shows the above equality:
f*(n_i) = g*(n_i) + h*(n_i)
f*(n_(i+1)) = g*(n_(i+1)) + h*(n_(i+1))
g*(n_(i+1)) = g*(n_i) + c(n_i, n_(i+1))
h*(n_(i+1)) = h*(n_i) - c(n_i, n_(i+1))
The above equations hold since the path is optimal.

Admissibility of A*: A* always terminates, finding an optimal path to the goal if such a path exists. Intuition [figure: S → n → G, with g(n) from S to n and h(n) from n to G]: (1) in the open list there always exists a node n such that f(n) <= f*(S); (2) if A* does not terminate, the f values of the nodes expanded become unbounded. (1) and (2) are together inconsistent; hence A* must terminate.

Lemma: at any time before A* terminates there exists in the open list a node n' such that f(n') <= f*(S). [Figure: optimal path S → n_1 → n_2 → … → G.] For any node n_i on the optimal path, f*(n_i) = f*(S). Let n' be the first node on the optimal path that is in OL. Since all its ancestors on that path have gone to CL, g(n') = g*(n') and h(n') <= h*(n'), so f(n') = g(n') + h(n') <= g*(n') + h*(n') = f*(n') = f*(S).

If A* does not terminate: let e be the least cost of all arcs in the search graph. Then g(n) >= e·l(n), where l(n) is the number of arcs in the path from S to n found so far. If A* does not terminate, g(n), and hence f(n) = g(n) + h(n) (since h(n) >= 0), will become unbounded. This is not consistent with the lemma, so A* has to terminate.

2nd part of admissibility of A*: the path found by A* is optimal when it has terminated. Proof: suppose the path found is not optimal, i.e., G is expanded via a non-optimal path. At the point of expansion of G, f(G) = g(G) + h(G) = g(G) + 0 > g*(G) = g*(S) + h*(S) = f*(S) [f*(S) = cost of the optimal path]. But by the lemma, OL then contains a node n' with f(n') <= f*(S) < f(G), so A* would have chosen n' for expansion rather than G. This is a contradiction, so the path must be optimal.

Better Heuristic Performs Better

Theorem: a version A2* of A* that has a "better" heuristic than another version A1* performs at least "as well as" A1*. Meaning of "better": h2(n) > h1(n) for all nodes n except the goal node. Meaning of "as well as": A1* expands at least all the nodes that A2* expands. [Figure: ordering h1(n) <= h2(n) <= h*(n), for all nodes n except the goal node.]

Proof by induction on the search tree of A2*. (A* on termination carves a tree out of G.) Induction hypothesis: for depth k of the search tree of A2*, A1* before termination expands all the nodes of depth k in the search tree of A2*. Base case k = 0: true, since the start node S is expanded by both. Induction step: suppose A1* terminates without expanding a node n at depth (k+1) of the A2* search tree. Since A1* has seen all the parents of n seen by A2*, g1(n) <= g2(n) (1).

[Figure: search tree of A2* rooted at S, with node n at depth k+1.] Since A1* has terminated without expanding n, f1(n) >= f*(S) (2), because any node whose f value is strictly less than f*(S) has to be expanded. Since A2* has expanded n, f2(n) <= f*(S) (3). From (1), (2) and (3), h1(n) >= h2(n), which is a contradiction. Therefore, A1* has to expand all nodes that A2* has expanded. Exercise: if "better" means h2(n) > h1(n) for some n and h2(n) = h1(n) for the others, can you prove the result?
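The theorem can be observed empirically. Below is a small sketch, not from the lecture, that counts A* node expansions on a hypothetical 10x10 grid with random positive cell costs, comparing h1(n) = 0 with the "better" admissible heuristic h2(n) = Manhattan distance (here h2(n) > h1(n) for every non-goal node, since every step costs at least 1).

import heapq
import random
from itertools import count

N = 10
random.seed(0)
# Cost of stepping onto each cell: random integers >= 1, so Manhattan distance stays admissible.
cell_cost = {(x, y): random.randint(1, 9) for x in range(N) for y in range(N)}

def expansions(start, goal, h):
    # Run A* with heuristic h on the weighted grid; return the number of nodes expanded.
    tie = count()
    frontier = [(h(start), next(tie), start, 0)]     # (f, tie-breaker, node, g)
    best_g = {start: 0}
    closed = set()
    expanded = 0
    while frontier:
        _, _, n, g = heapq.heappop(frontier)
        if n in closed:
            continue
        closed.add(n)
        expanded += 1
        if n == goal:
            return expanded
        x, y = n
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in cell_cost:                     # stays inside the grid
                ng = g + cell_cost[nxt]
                if nxt not in best_g or ng < best_g[nxt]:
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), nxt, ng))
    return expanded

start, goal = (0, 0), (N - 1, N - 1)
h1 = lambda n: 0                                             # uninformed but admissible
h2 = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])     # Manhattan distance, admissible here
print("expansions with h1:", expansions(start, goal, h1))    # typically close to all 100 cells
print("expansions with h2:", expansions(start, goal, h2))    # typically noticeably fewer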

Monotone Restriction or Triangular Inequality of the Heuristic Function. Statement: if the monotone restriction (also called the triangular inequality) is satisfied, then for nodes in the closed list, redirection of the parent pointer is not necessary. In other words, if any node n is chosen for expansion from the open list, then g(n) = g*(n), where g(n) is the cost of the path from the start node s to n at the point of the search when n is chosen, and g*(n) is the cost of the optimal path from s to n. A heuristic h(p) is said to satisfy the monotone restriction if, for all p, h(p) <= h(p_c) + cost(p, p_c), where p_c is a child of p.
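A small sketch (the graph, numbers, and names below are made up for illustration) that checks the monotone restriction on every edge of an explicit graph:

def satisfies_monotone_restriction(edges, h):
    # Check h(p) <= h(p_c) + cost(p, p_c) for every directed edge (p, p_c, cost).
    return all(h[p] <= h[pc] + cost for p, pc, cost in edges)

# Hypothetical example graph and two candidate heuristics.
edges = [("S", "A", 1), ("S", "B", 3), ("A", "D", 5), ("B", "E", 4), ("D", "G", 3), ("E", "G", 6)]
h_ok = {"S": 7, "A": 6, "B": 5, "D": 3, "E": 4, "G": 0}
h_bad = {"S": 7, "A": 1, "B": 5, "D": 3, "E": 4, "G": 0}   # h drops by 6 along S->A, but cost(S, A) = 1
print(satisfies_monotone_restriction(edges, h_ok))    # True
print(satisfies_monotone_restriction(edges, h_bad))   # False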

Proof: let S - N_1 - N_2 - N_3 - N_4 - … - N_m - … - N_k be an optimal path from S to N_k (all of whose nodes might or might not have been explored). Let N_m be the first node on this path that is on the open list, i.e., all the ancestors from S up to N_(m-1) are in the closed list.

Proof (contd.): for every node N_p on the optimal path,
g*(N_p) + h(N_p) <= g*(N_p) + c(N_p, N_(p+1)) + h(N_(p+1)), by the monotone restriction
g*(N_p) + h(N_p) <= g*(N_(p+1)) + h(N_(p+1)), since g*(N_(p+1)) = g*(N_p) + c(N_p, N_(p+1)) on the optimal path
g*(N_m) + h(N_m) <= g*(N_k) + h(N_k), by transitivity.
Since all ancestors of N_m on the optimal path are in the closed list, g(N_m) = g*(N_m)
=> f(N_m) = g(N_m) + h(N_m) = g*(N_m) + h(N_m) <= g*(N_k) + h(N_k).

Proof (contd.): now if N_k is chosen for expansion in preference to N_m,
f(N_k) <= f(N_m)
g(N_k) + h(N_k) <= g(N_m) + h(N_m) = g*(N_m) + h(N_m) <= g*(N_k) + h(N_k)
Hence g(N_k) <= g*(N_k). But g(N_k) >= g*(N_k), by definition. Hence g(N_k) = g*(N_k). Proved.

Relationship between Monotone Restriction and Admissibility: MR => Admissibility, but not vice versa. That is, if a heuristic h(p) satisfies the monotone restriction (for all p, h(p) <= h(p_c) + cost(p, p_c), where p_c is a child of p), then h(p) <= h*(p) for all p.

Forward proof: let p → n_1 → n_2 → n_3 → … → n_(k-1) → G = n_k be the optimal path from p to G. By definition, h(G) = 0. Since p → n_1 → n_2 → n_3 → … → n_(k-1) → G = n_k is the optimal path from p to G, c(p, n_1) + c(n_1, n_2) + c(n_2, n_3) + … + c(n_(k-1), n_k) = h*(p).

Forward proof (contd.): now, by MR,
h(p) <= h(n_1) + c(p, n_1)
h(n_1) <= h(n_2) + c(n_1, n_2)
h(n_2) <= h(n_3) + c(n_2, n_3)
h(n_3) <= h(n_4) + c(n_3, n_4)
…
h(n_(k-1)) <= h(G) + c(n_(k-1), G)
With h(G) = 0, summing the inequalities gives h(p) <= c(p, n_1) + c(n_1, n_2) + c(n_2, n_3) + … + c(n_(k-1), n_k) = h*(p); proved.
The backward direction (admissibility does not imply MR) is shown by producing a counterexample.

Lab assignment: implement the A* algorithm for the following problems:
- 8-puzzle
- Missionaries and Cannibals
- Robotic blocks world
Specifications: try different heuristics and compare with the baseline case, i.e., breadth-first search. Violate the condition h <= h*; see if the optimal path is still found. Observe the speedup. (A sketch of candidate 8-puzzle heuristics follows.)
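As a possible starting point for the 8-puzzle part, a minimal sketch of two classic admissible (and monotone) heuristics; the goal configuration and the tuple-based state encoding (0 = blank) are assumptions, not prescribed by the assignment.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal state; 0 marks the blank

def misplaced_tiles(state):
    # Number of non-blank tiles out of place.
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def manhattan(state):
    # Sum of Manhattan distances of tiles from their goal squares.
    # Dominates misplaced_tiles, so by the theorem above A* with it expands no more nodes.
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

def successors(state):
    # Yield (next_state, cost) pairs obtained by sliding a neighbouring tile into the blank.
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[blank], s[j] = s[j], s[blank]
            yield tuple(s), 1

These plug directly into an A* routine such as the sketch given earlier, e.g. a_star(start_state, lambda s: s == GOAL, successors, manhattan).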