HEURISTIC SEARCH Heuristics: rules for choosing the branches in a state space that are most likely to lead to an acceptable problem solution. They are rules that provide guidance in decision making and often improve it. Shopping example: choosing the shortest queue in a supermarket does not necessarily mean that you will get out of the market earlier. Heuristics are used when the available information has inherent ambiguity, or when computational costs are high.

Finding Heuristics Tic-Tac-Toe heuristic: which move should X choose? Calculate the winning lines available to X and move to the state with the most winning lines.

Calculating winning lines: the three candidate moves for X shown on the slide are worth 3, 4 and 3 winning lines respectively. Always choose the state with the maximum heuristic value: this is a maximizing heuristic.
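
A small Python sketch of one way to read this heuristic: count the winning lines that are still open for a player and already contain one of its marks. The board representation and the function name are assumptions for illustration, not from the slides; on an otherwise empty board this count gives 4 for a centre move and 3 for a corner, matching the values above.

    LINES = [
        (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6),              # diagonals
    ]

    def winning_lines(board, player='X'):
        # Count lines that contain at least one of the player's marks and
        # none of the opponent's, i.e. lines the player could still win.
        opponent = 'O' if player == 'X' else 'X'
        return sum(
            1 for line in LINES
            if any(board[i] == player for i in line)
            and all(board[i] != opponent for i in line)
        )

    board = [None] * 9
    board[4] = 'X'                 # X plays the centre square
    print(winning_lines(board))    # 4 winning lines run through the centre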

Choosing the city with the minimum distance: always choose the city with the minimum heuristic value. This is a minimizing heuristic (h_i < h_{i-1}).

Algorithms for Heuristic Search: hill climbing, best-first search, and the A* algorithm.

Hill Climbing: proceed to a node only if it is better than the current one. When is a node better? Apply the heuristic to compare the nodes. Simplified algorithm: 1. Start with current-state (cs) = initial state. 2. Until cs = goal-state, or there is no change in cs, do: (a) generate the successors of cs and use the EVALUATION FUNCTION to assign a score to each successor; (b) if one of the successors has a better score than cs, set cs to the successor with the best score.

Navigation Problem: choose the closest city to travel to.

Navigation Problem: choose the closest city to travel to (h_i < h_{i-1}).

Navigation Problem: choose the closest city to travel to. With no backtracking, the search can GET STUCK.

Hill Climbing on the example tree (A:10, B:5, C:3, D:4, E:2, F:6, and G:0 as the goal/solution). A node label such as A:10 means A is the node name and 10 is its heuristic evaluation.

Hill Climbing: starting at A, compare its successors B and C. A node label such as A:10 means A is the node name and 10 is its heuristic evaluation.

Hill Climbing: C is better, so move to C. C's only successor F is worse than C, so the search gets stuck at C.

Hill-climbing search "is a loop that continually moves in the direction of increasing value". It terminates when a peak is reached. Hill climbing does not look ahead beyond the immediate neighbours of the current state, and if there is more than one best successor it chooses among them at random. Hill climbing is also known as greedy local search.

Hill-climbing search: algorithm
    function HILL-CLIMBING(problem) returns a state that is a local maximum
        inputs: problem, a problem
        local variables: current, a node; neighbor, a node
        current <- MAKE-NODE(INITIAL-STATE[problem])
        loop do
            neighbor <- a highest-valued successor of current
            if VALUE[neighbor] <= VALUE[current] then return STATE[current]
            current <- neighbor
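
A minimal runnable Python version of the same loop, assuming the caller supplies successors(state) and value(state) functions (hypothetical names, not from the slides); it returns a state that is a local maximum of value().

    def hill_climbing(initial_state, successors, value):
        # Repeatedly move to the best-valued neighbour; stop when no
        # neighbour improves on the current state (a local maximum).
        current = initial_state
        while True:
            neighbors = successors(current)
            if not neighbors:
                return current
            best = max(neighbors, key=value)
            if value(best) <= value(current):
                return current
            current = best

    # Toy landscape: integer states, one step left or right, peak at 7.
    print(hill_climbing(0,
                        successors=lambda s: [s - 1, s + 1],
                        value=lambda s: -(s - 7) ** 2))   # prints 7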

Hill-climbing example: the 8-queens problem (complete-state formulation). Successor function: move a single queen to another square in the same column. Heuristic function h(n): the number of pairs of queens that are attacking each other, directly or indirectly.
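
As a sketch, this heuristic can be computed directly from the complete-state representation in which state[c] holds the row of the queen in column c (a common encoding, assumed here rather than taken from the slides).

    from itertools import combinations

    def attacking_pairs(state):
        # state[c] is the row of the queen in column c.
        h = 0
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
            if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
                h += 1
        return h

    print(attacking_pairs([0, 1, 2, 3, 4, 5, 6, 7]))   # 28: all queens on one diagonal
    print(attacking_pairs([2, 4, 1, 7, 0, 6, 3, 5]))   # 0: a goal state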

Hill-climbing example: (a) shows a state with h = 17 and the h-value for each possible successor; (b) shows a local minimum in the 8-queens state space (h = 1).

Drawbacks: a ridge is a sequence of local maxima that is difficult for hill climbing to navigate; a plateau is an area of the state space where the evaluation function is flat. On the 8-queens problem, hill climbing GETS STUCK 86% OF THE TIME.

Hill-climbing variations: stochastic hill climbing selects randomly among the uphill moves, with a selection probability that can vary with the steepness of the move. First-choice hill climbing implements stochastic hill climbing by generating successors randomly until one better than the current state is found. Random-restart hill climbing tries to avoid getting stuck in local maxima/minima by restarting from random initial states.

First Reading Assignment: Simulated Annealing, a method for escaping the local-minima problem. You can consult the textbook.

More on heuristic functions: the 8-puzzle. The average solution cost is about 22 steps, with a branching factor of roughly 3, so exhaustive search to depth 22 would examine about 3.1 x 10^10 states. A good heuristic function can greatly reduce the search. CAN YOU THINK OF A HEURISTIC?

8-Puzzle: two commonly used heuristics. h1 = the number of misplaced tiles; for the state shown, h1(s) = 8. h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance); for the state shown, h2(s) = 3+1+2+2+2+3+3+2 = 18.
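
A short Python sketch of both heuristics, assuming the board is stored as a flat tuple read left to right, top to bottom, with 0 for the blank; the start state below is one whose values match the slide (h1 = 8, h2 = 18).

    GOAL = (0, 1, 2,
            3, 4, 5,
            6, 7, 8)

    def h1(state, goal=GOAL):
        # Number of misplaced tiles (the blank, 0, is not counted).
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def h2(state, goal=GOAL, width=3):
        # Sum of the Manhattan distances of the tiles from their goal squares.
        total = 0
        for pos, tile in enumerate(state):
            if tile == 0:
                continue
            goal_pos = goal.index(tile)
            total += (abs(pos // width - goal_pos // width)
                      + abs(pos % width - goal_pos % width))
        return total

    start = (7, 2, 4,
             5, 0, 6,
             8, 3, 1)
    print(h1(start), h2(start))   # 8 18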

Heuristic quality: the effective branching factor b* is the branching factor that a uniform tree of depth d would need in order to contain N+1 nodes, where N is the number of nodes generated by the search. A good heuristic should have b* as low as possible; the ideal value is 1. This measure is fairly constant for sufficiently hard problems, so it provides a good guide to a heuristic's overall usefulness.
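
To make the definition concrete, b* can be found numerically from N and d; the sketch below uses simple bisection. The example figures (52 generated nodes for a solution at depth 5, giving b* of about 1.92) are a commonly quoted illustration, not values taken from the slides.

    def effective_branching_factor(N, d, tol=1e-6):
        # Find b* such that 1 + b* + (b*)**2 + ... + (b*)**d = N + 1 (bisection).
        def total(b):
            return sum(b ** i for i in range(d + 1))
        lo, hi = 1.0, float(N)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if total(mid) < N + 1:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(round(effective_branching_factor(52, 5), 2))   # roughly 1.92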

Heuristic quality and dominance: the table on the slide compares the nodes expanded using h1 and h2, and the effective branching factor of h2 is lower than that of h1. If h2(n) >= h1(n) for all n, then h2 dominates h1 and is better for search (compare the values above, e.g. 18 vs 8).

Admissible heuristics: a heuristic h(n) is admissible if for every node n, h(n) <= h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic.

Navigation Problem: the map shows actual road distances.

Heuristic: straight-line (aerial) distance between each city and the goal city, compared with the actual road distances. Is the new heuristic admissible?

Heuristic: straight-line (aerial) distance between each city and the goal city. Is the new heuristic admissible? We need h_SLD(n) <= h*(n). Consider n = Sibiu: h_SLD(Sibiu) = 253, while h*(Sibiu) = 80+97+101 = 278 (the actual cost through Pitesti); the alternative route through Fagaras costs 99+211 = 310. Since h_SLD <= h*, the heuristic is admissible: it never overestimates the actual road distance.

Which heuristic is admissible? h1 = the number of misplaced tiles, with h1(s) = 8; h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance), with h2(s) = 3+1+2+2+2+3+3+2 = 18. BOTH are admissible.

New Heuristic: Permutation Inversions. For the 15-puzzle goal arrangement shown on the slide, let n_I be the number of tiles J < I that appear after tile I, reading the board left to right and top to bottom. Define h3 = n2 + n3 + ... + n15 + the row number of the empty tile. For the state shown, n2 through n6 are 0, n7 = 1, n8 = 1, n9 = 1, n10 = 4, and n11 through n15 are 0, so h3 = 7 + 4. Is h3 admissible?

New Heuristic: Permutation Inversions. Is h3 admissible? Here h3 = 7 + 4 = 11, while h* is the actual number of moves required to reach the goal. If h3 <= h*, then it is admissible; otherwise it is not. Find out yourself.

New Heuristic: Permutation Inversions on a 3x3 example. For the state and goal shown on the slide, h3 = 3 + 1 = 4: the inversion count contributes 3 (tile 3 is out of order relative to the tiles that follow it) and the empty tile is in row 1, contributing 1. The actual number of moves needed is h* = 2, so h3 is not admissible here.
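
For reference, a small Python sketch of h3 as defined above: the total number of inversions in reading order plus the row number of the blank. The flat-list representation with 0 for the blank is an assumption for illustration, not from the slides.

    def h3(board, width=4):
        # Sum of the n_I counts equals the total number of inversions: pairs
        # where a smaller tile appears after a larger one in reading order.
        tiles = [t for t in board if t != 0]
        inversions = sum(1
                         for i, big in enumerate(tiles)
                         for small in tiles[i + 1:]
                         if small < big)
        blank_row = board.index(0) // width + 1   # 1-based row of the blank
        return inversions + blank_row

    # For the 3x3 case discussed above, call h3(board, width=3).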

Example: STATE(N) and the GOAL STATE as shown on the slide. The goal contains only partial information: only the sequence in the first row is important.

Finding admissible heuristics from a relaxed problem: admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. Relaxed 8-puzzle for h1: a tile can move anywhere, so h1(n) gives the shortest solution of that relaxed problem. Relaxed 8-puzzle for h2: a tile can move to any adjacent square, so h2(n) gives the shortest solution of that relaxed problem. The optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

Relaxed Problem example: solve relaxed problems at each node. In the 8-puzzle, the sum of the distances of each tile to its goal position (h2) corresponds to solving 8 simple problems, one per tile; it ignores negative interactions among the tiles. The solution pattern can be stored in a database.

Relaxed Problem example: h = h8 + h5 + h6 + ..., where h_i is the cost of getting tile i to its goal position on its own.

Complex relaxed problems: consider two larger sub-problems and define h = d1234 + d5678 (a disjoint pattern heuristic), where d1234 is the cost of placing tiles 1-4 and d5678 the cost of placing tiles 5-8. These distances can be pre-computed and stored in a database.

Relaxed Problem: several order-of-magnitude speedups on the 15- and 24-puzzle have been obtained through the application of such relaxed-problem heuristics.

Finding admissible heuristics through experience: solve lots of puzzles, then employ an inductive learning algorithm to predict costs for new states that arise during search.

Summary: admissible heuristics. Defining and solving a sub-problem: admissible heuristics can also be derived from the solution cost of a sub-problem of a given problem, since this cost is a lower bound on the cost of the real problem. Pattern databases store the exact solution for every possible sub-problem instance, and the complete heuristic is constructed from the patterns in the database. Heuristics can also be learned through experience.

Other examples: robot navigation, with the robot at (xN, yN) and the goal at (xg, yg).

Robot navigation heuristics: h1(N) = sqrt((xN - xg)^2 + (yN - yg)^2) (Euclidean distance) and h2(N) = |xN - xg| + |yN - yg| (Manhattan distance).
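
As a sketch, both heuristics in Python, with nodes and the goal given as (x, y) pairs (a hypothetical representation):

    from math import hypot

    def h_euclidean(node, goal):
        (xn, yn), (xg, yg) = node, goal
        return hypot(xn - xg, yn - yg)      # straight-line distance

    def h_manhattan(node, goal):
        (xn, yn), (xg, yg) = node, goal
        return abs(xn - xg) + abs(yn - yg)  # grid distance

    print(h_euclidean((0, 0), (3, 4)))      # 5.0
    print(h_manhattan((0, 0), (3, 4)))      # 7

Note that the Euclidean distance is admissible whenever the robot can move freely in any direction, while the Manhattan distance is admissible only if moves are restricted to the four grid directions; otherwise it can overestimate (here 7 versus a true straight-line cost of 5).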

Best-First Search: it exploits the state description to estimate how promising each search node is. An evaluation function f maps each search node N to a positive real number f(N); traditionally, the smaller f(N), the more promising N. Best-first search sorts the nodes in increasing order of f (nodes with equal f values are ordered arbitrarily). "Best" refers only to the value of f, not to the quality of the actual path: best-first search does not generate optimal paths in general.

Summary: best-first search. General approach: a node is selected for expansion based on an evaluation function f(n). Idea: the evaluation function estimates the distance to the goal, and the node that appears best according to the heuristic value is chosen. Implementation: a queue sorted so that the most desirable node is expanded first. Special cases: greedy search and A* search.

Evaluation function: the heuristic evaluation is the same as in hill climbing, f(n) = h(n) = the estimated cost of the cheapest path from node n to a goal node. If n is a goal, then h(n) = f(n) = 0.

Best First Search Method, algorithm: 1. Start with agenda (priority queue) = [initial-state]. 2. While the agenda is not empty do: (a) remove the best node from the agenda; (b) if it is the goal node, return with success, otherwise generate its successors; (c) assign the successor nodes a score using the evaluation function and add the scored nodes to the agenda.
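
A runnable Python sketch of this agenda-based method, using a heap as the priority queue; goal_test(state), successors(state) and h(state) are caller-supplied (hypothetical names). The small usage example is the tree reconstructed from the trace a couple of slides below.

    import heapq
    from itertools import count

    def best_first_search(start, goal_test, successors, h):
        order = count()                                  # tie-breaker for the heap
        agenda = [(h(start), next(order), start)]
        while agenda:
            _, _, state = heapq.heappop(agenda)          # best (lowest h) node
            if goal_test(state):
                return state
            for succ in successors(state):
                heapq.heappush(agenda, (h(succ), next(order), succ))
        return None

    # Example tree from the trace below: A:10, B:5, C:3, D:4, E:2, F:6, G:0.
    H = {'A': 10, 'B': 5, 'C': 3, 'D': 4, 'E': 2, 'F': 6, 'G': 0}
    CHILDREN = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
    print(best_first_search('A', lambda s: s == 'G',
                            lambda s: CHILDREN.get(s, []), lambda s: H[s]))   # G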

Breadth-first, depth-first and hill climbing compared on the example tree (F:6, B:5, D:4, E:2, C:3, A:10, with G:0 as the solution); see the figure on the slide.

Best First Search Method on the example tree (G:0 is the solution):
    1. Open [A:10]; closed []. Evaluate A:10.
    2. Open [C:3, B:5]; closed [A:10]. Evaluate C:3.
    3. Open [B:5, F:6]; closed [C:3, A:10]. Evaluate B:5.
    4. Open [E:2, D:4, F:6]; closed [C:3, B:5, A:10]. Evaluate E:2.
    5. Open [G:0, D:4, F:6]; closed [E:2, C:3, B:5, A:10]. Evaluate G:0; the solution/goal is reached.

Comments on the Best First Search Method: if the evaluation function is good, best-first search may drastically cut the amount of search that would otherwise be required, though the first move chosen may not be the best one. If the evaluation function is very expensive, its benefits may be outweighed by the cost of assigning the scores.

Romania with step costs in km. h_SLD is the straight-line distance heuristic; note that h_SLD cannot be computed from the problem description itself. In this example f(n) = h(n): expand the node that appears closest to the goal. This is greedy best-first search.

Greedy search example: assume we want to use greedy search to travel from Arad to Bucharest. The initial state is Arad; Open = [Arad:366].

Greedy search example: the first expansion step produces Sibiu, Timisoara and Zerind; Open = [Sibiu:253, Timisoara:329, Zerind:374]. Greedy best-first search selects Sibiu.

Greedy search example: if Sibiu is expanded we get Arad, Fagaras, Oradea and Rimnicu Vilcea; Open = [Fagaras:176, Rimnicu Vilcea:193, Arad:366, Oradea:380]. Greedy best-first search selects Fagaras.

Greedy search example: if Fagaras is expanded we get Sibiu and Bucharest; Open = [Bucharest:0, …]. The goal is reached, yet the path is not optimal (compare Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest).

Greedy search evaluation. Completeness: NO; like depth-first search it can fail, so repeated states must be checked. Minimizing h(n) can result in false starts, e.g. when travelling from Iasi to Fagaras.

8-Puzzle with greedy search, f(N) = h(N) = number of misplaced tiles: the search tree on the slide expands 16 nodes in total.

8-Puzzle with greedy search, f(N) = h(N) = sum of the distances of the tiles to their goal positions: the search tree on the slide expands 12 nodes in total, a savings of 25%.

Robot Navigation

Robot Navigation, f(N) = h(N), with h(N) = Manhattan distance to the goal.

Robot Navigation, f(N) = h(N), with h(N) = Manhattan distance to the goal: the grid on the slide shows the h-value of each cell, and the path found is not optimal at all.

Greedy search evaluation. Completeness: NO (like depth-first search). Time complexity: O(b^m) in the worst case, as for depth-first search, where m is the maximum depth of the search space; a good heuristic can give dramatic improvement.

Greedy search evaluation. Completeness: NO (like depth-first search). Time complexity: O(b^m). Space complexity: O(b^m), since it keeps all nodes in memory.

Greedy search evaluation. Completeness: NO (like depth-first search). Time complexity: O(b^m). Space complexity: O(b^m). Optimality: NO, the same as depth-first search.

A* Algorithm. Problems with greedy best-first search: it reduces the estimated cost to the goal, but it is neither optimal nor complete; uniform-cost search, on the other hand, uses only the path cost and is complete and optimal but inefficient.

Path costs on the example graph (A:10 with children B:8 and C:9, each at step cost 2; B to D costs 5, D to E costs 3, E to F costs 4; C to G costs 3, G to F costs 2): Cost1 = 7 (2+3+2) along A-C-G-F, Cost2 = 14 (2+5+3+4) along A-B-D-E-F. Both hill climbing and best-first search follow A-B-D-E-F. Is A-B-D-E-F optimal, i.e. the shortest path? NO.

A* Search. Evaluation function: f(n) = g(n) + h(n), i.e. the path cost to node n plus the heuristic cost at n. Constraints: h(n) <= h*(n) (admissibility, studied earlier) and g(n) >= g*(n) (coverage).

Coverage: g(n) >= g*(n); if this condition is violated, the goal will never be reached.

Observations on the interplay of h and g:
    h = h*: immediate convergence; A* goes straight to the goal (no search).
    h = 0, g = 0: random search.
    h = 0, g = 1 per step: breadth-first search.
    h > h* (overestimating): no convergence.
    h <= h*: admissible search.
    g < g* (coverage violated): no convergence.

Example of A* on the graph above. The path found by best-first search / hill climbing is A-B-D-E-F, with cost P1 = 14 (not optimal). For the A* algorithm: f(A) = 0+10 = 10; f(B) = 2+8 = 10 and f(C) = 2+9 = 11, so expand B; f(D) = (2+5)+6 = 13 and f(C) = 11, so expand C; f(G) = (2+3)+3 = 8 and f(D) = 13, so expand G; f(F) = (2+3+2)+0 = 7 and the GOAL is achieved. The A* path is A-C-G-F with cost P2 = 7, which is the optimal path since Cost P2 < Cost P1.

Explanation, on the same graph. Starting from A, should we move to B or to C? The step costs are the same (2), so compare the total estimates: f(B) = h(B)+g(B) = 8+2 = 10 and f(C) = h(C)+g(C) = 9+2 = 11, hence moving through B looks better. Next consider D: the path cost to D is 2+5 = 7 and its heuristic cost is 6, so f(D) = 7+6 = 13. On the other side, moving through C to G gives a path cost of 2+3 = 5 and a heuristic cost of 3, so f(G) = 5+3 = 8, which is much better than moving through D. So the search changes path and moves through G.

Explanation, continued. From G we move to the goal node F. The total path cost via G is 2+3+2 = 7, while the total path cost via D is 2+5+3+4 = 14; hence moving through G is much better and gives the optimal path.

For A*, never throw away unexpanded nodes: always compare paths through expanded and unexpanded nodes, and avoid expanding paths that are already expensive.

A* search is the best-known form of best-first search. Idea: avoid expanding paths that are already expensive. Evaluation function: f(n) = g(n) + h(n), where g(n) is the cost so far to reach the node, h(n) is the estimated cost to get from the node to the goal, and f(n) is the estimated total cost of the path through n to the goal.
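
A runnable Python sketch along these lines (graph search that remembers the best g found for each state); successors(state) yields (next_state, step_cost) pairs and h(state) is the heuristic, both caller-supplied hypothetical names.

    import heapq
    from itertools import count

    def a_star(start, goal_test, successors, h):
        order = count()                                   # tie-breaker for the heap
        frontier = [(h(start), next(order), start, [start], 0)]
        best_g = {start: 0}
        while frontier:
            f, _, state, path, g = heapq.heappop(frontier)
            if goal_test(state):
                return path, g
            if g > best_g.get(state, float('inf')):
                continue                                  # stale entry: a cheaper path was found
            for succ, step_cost in successors(state):
                g2 = g + step_cost
                if g2 < best_g.get(succ, float('inf')):
                    best_g[succ] = g2
                    heapq.heappush(frontier,
                                   (g2 + h(succ), next(order), succ, path + [succ], g2))
        return None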

A* search uses an admissible heuristic: a heuristic is admissible if it never overestimates the cost to reach the goal, i.e. it is optimistic. Formally: 1. h(n) <= h*(n), where h*(n) is the true cost from n; 2. h(n) >= 0, so h(G) = 0 for any goal G. For example, h_SLD(n) never overestimates the actual road distance.

Romania example

A* search example: starting at Arad, f(Arad) = g(Arad) + h(Arad) = 0 + 366 = 366.

A* search example: expand Arad and determine f(n) for each successor. f(Sibiu) = g(Sibiu)+h(Sibiu) = 140+253 = 393; f(Timisoara) = 118+329 = 447; f(Zerind) = 75+374 = 449. The best choice is Sibiu.

A* search example: expand Sibiu and determine f(n) for each successor. Previous frontier nodes: f(Timisoara) = 447, f(Zerind) = 449. New nodes: f(Arad) = 280+366 = 646; f(Fagaras) = 239+176 = 415; f(Oradea) = 291+380 = 671; f(Rimnicu Vilcea) = 220+193 = 413. The best choice is Rimnicu Vilcea.

A* search example: still on the frontier are f(Timisoara) = 447, f(Zerind) = 449, f(Arad) = 646, f(Fagaras) = 415 and f(Oradea) = 671. Expand Rimnicu Vilcea and determine f(n) for each successor: f(Craiova) = 366+160 = 526; f(Pitesti) = 317+100 = 417; f(Sibiu) = 300+253 = 553. The best choice is Fagaras.

A* search example: still on the frontier are f(Timisoara) = 447, f(Zerind) = 449, f(Arad) = 646, f(Oradea) = 671, f(Craiova) = 526, f(Pitesti) = 417 and f(Sibiu) = 553. Expand Fagaras and determine f(n) for each successor: f(Sibiu) = 338+253 = 591; f(Bucharest) = 450+0 = 450. The best choice is Pitesti!

A* search example: expand Pitesti and determine f(n) for each successor: f(Bucharest) = 418+0 = 418. The best choice is Bucharest, and the solution found is optimal (provided h(n) is admissible). Note the f values along the optimal path.
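
For illustration, the a_star sketch given earlier can be run on this fragment of the Romania map. The step costs and straight-line distances below are taken from the example above, except the Zerind-Oradea (71) and Pitesti-Craiova (138) links, which are assumed from the standard map.

    ROADS = {
        'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
        'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151), ('Rimnicu Vilcea', 80)],
        'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
        'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
        'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101), ('Craiova', 138)],
        'Craiova': [('Rimnicu Vilcea', 146), ('Pitesti', 138)],
        'Timisoara': [('Arad', 118)],
        'Zerind': [('Arad', 75), ('Oradea', 71)],
        'Oradea': [('Zerind', 71), ('Sibiu', 151)],
        'Bucharest': [],
    }
    H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
             'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
             'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

    path, cost = a_star('Arad', lambda c: c == 'Bucharest',
                        lambda c: ROADS[c], lambda c: H_SLD[c])
    print(path, cost)   # ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418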

Optimality of A* (standard proof). Suppose a suboptimal goal G2 is in the queue, and let n be an unexpanded node on a shortest path to the optimal goal G. Then f(G2) = g(G2), since h(G2) = 0; g(G2) > g(G), since G2 is suboptimal; and g(G) >= f(n), since h is admissible. Since f(G2) > f(n), A* will never select G2 for expansion.

BUT … graph search discards new paths to a repeated state, so the previous proof breaks down. Solution: add extra bookkeeping, i.e. remove the more expensive of the two paths, or ensure that the optimal path to any repeated state is always followed first. This leads to an extra requirement on h(n): consistency (monotonicity).

Consistency: a heuristic is consistent if h(n) <= c(n, a, n') + h(n') for every node n and every successor n' generated by an action a. If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n), i.e. f(n) is non-decreasing along any path.
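
A tiny sketch of checking this condition edge by edge on an explicit graph, using the same hypothetical dictionary representation as the A* example above.

    def is_consistent(graph, h):
        # Check h(n) <= c(n, n') + h(n') on every edge, where graph[n] is a
        # list of (n_prime, step_cost) pairs and h maps states to values.
        return all(h[n] <= cost + h[n2]
                   for n, edges in graph.items()
                   for n2, cost in edges)

    # With the ROADS / H_SLD data used earlier, is_consistent(ROADS, H_SLD)
    # returns True: the straight-line distance heuristic is consistent there.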

Optimality of A* (more useful view): A* expands nodes in order of increasing f value, so contours can be drawn in the state space (uniform-cost search gives circular contours). The f-contours are added gradually: first nodes with f(n) < C*, then some nodes on the goal contour (f(n) = C*). Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

A* search evaluation. Completeness: YES, since bands of increasing f are added, unless there are infinitely many nodes with f < f(G).

A* search evaluation. Completeness: YES. Time complexity: the number of nodes expanded is still exponential in the length of the solution.

A* search evaluation. Completeness: YES. Time complexity: exponential in the path length. Space complexity: it keeps all generated nodes in memory, so space, rather than time, is the major problem.

A* search evaluation. Completeness: YES. Time complexity: exponential in the path length. Space complexity: all generated nodes are stored. Optimality: YES, since A* cannot expand the f_{i+1} contour until f_i is finished: it expands all nodes with f(n) < C*, some nodes with f(n) = C*, and no nodes with f(n) > C*. A* is also optimally efficient (not counting ties).