Informed Search II

Outline for today's lecture
- Informed search
- Optimal informed search: A* (AIMA 3.5.2)
- Creating good heuristic functions
- Hill climbing

Review: Greedy best-first search
- f(n): estimated total cost of the path through n to the goal
- g(n): the cost to get to n; UCS uses f(n) = g(n)
- h(n): heuristic function that estimates the remaining distance from n to the goal
Greedy best-first search idea: f(n) = h(n). Expand the node that h(n) estimates to be closest to the goal. Here, h(n) = h_SLD(n), the straight-line distance from n to Bucharest.

Review: Properties of greedy best-first search
- Complete? No: it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
- Time? O(b^m) worst case (like depth-first search), but a good heuristic can give a dramatic improvement in average cost
- Space? O(b^m): the priority queue keeps all (unexpanded) nodes in memory in the worst case
- Optimal? No

Review: Optimal informed search: A*
The best-known informed search method.
Key idea: avoid expanding paths that are already expensive, but expand the most promising first. Simple idea: f(n) = g(n) + h(n).
Implementation: same as before, with the frontier as a priority queue ordered by increasing f(n).
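
A minimal sketch of this in Python, using the fragment of the AIMA Romania map that appears in the trace below. The road distances and straight-line-distance values are the standard AIMA figures; the function and variable names are illustrative, not from the slides. This is the tree-search variant, so repeated states (e.g., Arad) are re-added to the frontier, exactly as in the trace.

import heapq

# Step costs for a fragment of the AIMA Romania road map.
GRAPH = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'Oradea': 151, 'Rimnicu Vilcea': 80},
    'Timisoara': {'Arad': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Craiova': {'Rimnicu Vilcea': 146, 'Pitesti': 138},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}

# Straight-line distance to Bucharest: the admissible heuristic h_SLD.
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

def a_star(start, goal):
    # Frontier is a priority queue ordered by f(n) = g(n) + h(n).
    frontier = [(H_SLD[start], 0, start, [start])]
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:            # goal test on expansion, not on generation
            return path, g
        for succ, cost in GRAPH[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H_SLD[succ], g2, succ, path + [succ]))
    return None, float('inf')

print(a_star('Arad', 'Bucharest'))
# -> (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)

Running this reproduces the expansion order of the trace that follows: Arad (366), Sibiu (393), Rimnicu Vilcea (413), Fagaras (415), Pitesti (417), Bucharest (418).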

Review: Admissible heuristics
A heuristic h(n) is admissible if it never overestimates the cost to reach the goal, i.e., it is optimistic.
Formally: for every node n, 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost from n. Since h(n) ≥ 0, h(G) = 0 for any goal G.
Example: h_SLD(n) never overestimates the actual road distance.
Theorem: if h(n) is admissible, A* using Tree Search is optimal.

A* search example
Frontier queue: Arad 366

A* search example
Frontier queue: Sibiu 393, Timisoara 447, Zerind 449
We add the three successors of Arad to the frontier queue, sorted according to the g(n) + h(n) calculation.

A* search example
Frontier queue: Rimnicu Vilcea 413, Fagaras 415, Timisoara 447, Zerind 449, Arad 646, Oradea 671
When we expand Sibiu, we run into Arad again. We have already expanded that state once, but tree search still adds the new node to the frontier queue.

A* search example
Frontier queue: Fagaras 415, Pitesti 417, Timisoara 447, Zerind 449, Craiova 526, Sibiu 553, Arad 646, Oradea 671
We expand Rimnicu Vilcea.

A* search example
Frontier queue: Pitesti 417, Timisoara 447, Zerind 449, Bucharest 450, Craiova 526, Sibiu 553, Sibiu 591, Arad 646, Oradea 671
When we expand Fagaras, we find Bucharest, but we are not done. The algorithm does not end until we expand the goal node, i.e., until it reaches the top of the frontier queue.

A* search example
Frontier queue: Bucharest 418, Timisoara 447, Zerind 449, Bucharest 450, Craiova 526, Sibiu 553, Sibiu 591, Rimnicu Vilcea 607, Craiova 615, Arad 646, Oradea 671
When we expand Pitesti, we find a better value for Bucharest (418 < 450). Now we expand this better Bucharest node, since it is at the top of the queue. We are done, and we know the value found is optimal.

Optimality of A* (intuitive)
Lemma: A* expands nodes on the frontier in order of increasing f value.
It gradually adds "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}.
(After all, A* is just a variant of uniform-cost search.)

Optimality of A* using Tree Search (proof idea)
Lemma: A* expands nodes on the frontier in order of increasing f value.
Suppose some suboptimal goal G2 (i.e., a goal on a suboptimal path) has been generated and is in the frontier along with an optimal goal G. We must prove: f(G2) > f(G). (Why? Because if f(G2) > f(G), then G2 can never get to the front of the priority queue before G.)
Proof:
1. g(G2) > g(G), since G2 is suboptimal
2. f(G2) = g(G2), since f(G2) = g(G2) + h(G2) and h(G2) = 0 (G2 is a goal)
3. f(G) = g(G), similarly
4. f(G2) > f(G), from 1, 2, 3
We must also show that G is added to the frontier before G2 is expanded; see AIMA for the argument in the case of Graph Search.

A* search, evaluation
- Completeness: YES, since bands of increasing f are added, as long as b is finite (guaranteeing that there are not infinitely many nodes n with f(n) < f(G))
- Time complexity: same as the UCS worst case; the number of nodes expanded is still exponential in the length of the solution
- Space complexity: same as the UCS worst case; it keeps all generated nodes in memory, so exponential. Hence space, not time, is the major problem
- Optimality: YES. A* cannot expand contour f_{i+1} until f_i is finished: it expands all nodes with f(n) < f(G), one node with f(n) = f(G), and no nodes with f(n) > f(G)

Consistency
A heuristic is consistent if h(n) ≤ c(n, a, n′) + h(n′), where c(n, a, n′) is the cost of getting from n to n′ by any action a.
Consistency enforces that h(n) is optimistic.
Theorem: if h(n) is consistent, A* using Graph Search is optimal.
See book for details.
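
One step of reasoning for why Graph Search needs exactly this condition (the standard AIMA argument, restated here): if h is consistent, then along any path the f values never decrease, so the first time A* expands a state it has already found an optimal path to it. For a successor n′ of n reached by action a:

    f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n)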

Outline for today's lecture
- Informed search
- Optimal informed search: A*
- Creating good heuristic functions (AIMA 3.6)
- Hill climbing

Heuristic functions
For the 8-puzzle, the average solution cost is about 22 steps (branching factor ≤ 3).
Exhaustive search to depth 22 would examine about 3^22 ≈ 3.1 × 10^10 states.
A good heuristic function can greatly reduce the search.

Example admissible heuristics
For the 8-puzzle:
- h_oop(n) = number of out-of-place tiles
- h_md(n) = total Manhattan distance (i.e., the number of moves each tile is from its desired location)
For the start state S shown: h_oop(S) = 8, h_md(S) = 3+1+2+2+2+3+3+2 = 18
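
A short sketch of the two heuristics in Python. The state S below is an assumption: it is the standard AIMA example state (blank encoded as 0, goal with the blank in the top-left), which happens to reproduce the slide's values of 8 and 18.

S    = (7, 2, 4,
        5, 0, 6,
        8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h_oop(state):
    # Number of tiles (not counting the blank) that are out of place.
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h_md(state):
    # Sum over tiles of the Manhattan distance to each tile's goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(h_oop(S), h_md(S))   # -> 8 18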

Relaxed problems
A problem with fewer restrictions on the actions than the original is called a relaxed problem.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h_oop(n) gives the length of the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h_md(n) gives the length of the shortest solution.

Defining heuristics: h(n)
Use the cost of an exact solution to a relaxed problem (fewer restrictions on the operators).
Constraint of the full problem: a tile can move from square A to square B if A is adjacent to B and B is blank.
Constraints of relaxed problems:
- A tile can move from square A to square B if A is adjacent to B. (gives h_md)
- A tile can move from square A to square B if B is blank.
- A tile can move from square A to square B. (gives h_oop)

Dominance: a metric on better heuristics
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
So h2 is still optimistic, but more accurate than h1; h2 is therefore better for search. Notice that h_md dominates h_oop.
Typical search costs (average number of nodes expanded):
- d = 12: Iterative Deepening Search = 3,644,035 nodes; A*(h_oop) = 227 nodes; A*(h_md) = 73 nodes
- d = 24: IDS = too many nodes; A*(h_oop) = 39,135 nodes; A*(h_md) = 1,641 nodes
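
A runnable sketch of this comparison, under assumptions that are not in the slides: the goal has the blank in the top-left, the start state is a fixed random scramble of the goal (so it is guaranteed solvable), and a graph-search A* counts expansions. On such instances A*(h_md) should expand noticeably fewer nodes than A*(h_oop); the exact counts depend on the instance.

import heapq, random

GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)   # with this encoding, tile t's goal index is t itself

def h_oop(s):
    return sum(1 for i, t in enumerate(s) if t and t != GOAL[i])

def h_md(s):
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(s) if t)

def neighbors(s):
    # Slide the blank up, down, left, or right.
    b = s.index(0)
    r, c = divmod(b, 3)
    for r2, c2 in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= r2 < 3 and 0 <= c2 < 3:
            j = 3 * r2 + c2
            t = list(s)
            t[b], t[j] = t[j], t[b]
            yield tuple(t)

def a_star(start, h):
    # Graph-search A*: keep the best g seen per state, count expansions.
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if g > best_g[s]:
            continue                       # stale queue entry
        expanded += 1
        if s == GOAL:
            return g, expanded
        for s2 in neighbors(s):
            if g + 1 < best_g.get(s2, float('inf')):
                best_g[s2] = g + 1
                heapq.heappush(frontier, (g + 1 + h(s2), g + 1, s2))

random.seed(0)
start = GOAL
for _ in range(40):                        # random walk from the goal
    start = random.choice(list(neighbors(start)))

for name, h in (("h_oop", h_oop), ("h_md", h_md)):
    print(name, a_star(start, h))          # (solution length, nodes expanded)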

The best and worst admissible heuristics
- h*(n): the (unachievable) Oracle heuristic; h*(n) = the true distance from n to the goal
- h_teleportation(n) = 0 (the "we're here already" heuristic)
Admissible: both, yes!
h*(n) dominates all other admissible heuristics; h_teleportation(n) is dominated by all of them.

Iterative Deepening A* and beyond
Beyond our scope (see AIMA 3.5.3 if you are interested in these topics):
- Iterative Deepening A*
- Recursive Best-First Search (incorporates the A* idea, despite the name)
- Memory-Bounded A*
- Simplified Memory-Bounded A*: Russell and Norvig call it the best algorithm to use in practice, but it is not described here at all. (If interested, follow the reference to Russell's article in the Wikipedia article on SMA*.)

Outline for today's lecture
- Informed search
- Optimal informed search: A*
- Creating good heuristic functions
- When informed search doesn't work: hill climbing (AIMA 4.1.1)
(A few slides adapted from CS 471, UMBC, and Eric Eaton; in turn adapted from slides by Charles R. Dyer, University of Wisconsin-Madison.)

Local search and optimization
Use a single current state and move to neighboring states.
Idea: start with an initial guess at a solution and incrementally improve it until it is one.
Advantages:
- Uses very little memory
- Often finds reasonable solutions in large or infinite state spaces
Useful for pure optimization problems: find or approximate the best state according to some objective function. Optimal if the space to be searched is convex.

Hill climbing on a surface of states
h(s): estimate of the distance from a peak (smaller is better), OR height defined by an evaluation function (greater is better).

"Like climbing Everest in thick fog with amnesia" Hill-climbing search While ( uphill points): Move in the direction of increasing evaluation function f Let snext = , s a successor state to the current state n If f(n) < f(s) then move to s Otherwise halt at n Properties: Terminates when a peak is reached. Does not look ahead of the immediate neighbors of the current state. Chooses randomly among the set of best successors, if there is more than one. Doesn’t backtrack, since it doesn’t remember where it’s been a.k.a. greedy local search "Like climbing Everest in thick fog with amnesia" CIS 521 - Intro to AI - Fall 2017

Hill climbing example I (minimizing h)
[Figure: a sequence of 8-puzzle boards from a start state with h_oop = 5, through successors with h_oop = 4, 3, 2, 1, down to the goal with h_oop = 0.]

Hill-climbing example: n-queens
n-queens problem: put n queens on an n × n board with no two queens on the same row, column, or diagonal.
Good heuristic: h = number of pairs of queens that are attacking each other.
[Figure: three boards with h = 5, h = 3, and h = 1 (for illustration).]
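
A short sketch of this heuristic in Python, assuming the usual complete-state formulation in which board[r] gives the column of the queen in row r (the representation is an assumption, not from the slides):

def h_attacking_pairs(board):
    # board[r] = column of the queen in row r (one queen per row, so only
    # column and diagonal attacks are possible).
    n = len(board)
    pairs = 0
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            same_col  = board[r1] == board[r2]
            same_diag = abs(board[r1] - board[r2]) == r2 - r1
            if same_col or same_diag:
                pairs += 1
    return pairs

print(h_attacking_pairs([0, 1, 2, 3]))  # four queens on one diagonal -> 6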

Hill-climbing example: 8-queens
[Figure: a state with h = 17 and the h-value for each possible successor; a local minimum of h in the 8-queens state space with h = 1.]
h = number of pairs of queens that are attacking each other.

Search Space features

Drawbacks of hill climbing
- Local maxima: peaks that are not the highest point in the space
- Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
- Ridges: dropoffs to the sides; steps to the north, east, south, and west may go down, but a step to the NW may go up

Toy Example of a local "maximum"
[Figure: a sequence of 8-puzzle boards illustrating a local "maximum" on the way from the start state to the goal.]

The Shape of an Easy Problem (Convex)

Gradient ascent/descent
Gradient descent procedure for finding arg min_x f(x):
- Choose an initial x_0 randomly
- Repeat: x_{i+1} ← x_i − η f′(x_i), until the sequence x_0, x_1, …, x_i, x_{i+1} converges
- The step size η (eta) is small (perhaps 0.1 or 0.05)
(Images from http://en.wikipedia.org/wiki/Gradient_descent)
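
A minimal sketch of the update rule in Python; the example f, its derivative, the starting point, and the tolerance are illustrative choices, not from the slides:

def gradient_descent(df, x, eta=0.1, tol=1e-8, max_iters=10_000):
    # Repeat x <- x - eta * f'(x) until the iterates converge.
    for _ in range(max_iters):
        x_next = x - eta * df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: f(x) = (x - 3)^2, so f'(x) = 2 (x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x=0.0))   # -> approx. 3.0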

Gradient methods vs. Newton's method
A reminder of Newton's method from calculus: x_{i+1} ← x_i − f′(x_i) / f″(x_i)
Newton's method uses second-order information (the second derivative, or curvature) to take a more direct route to the minimum.
[Figure: contour lines of a function, with gradient descent (green) and Newton's method (red). Image from http://en.wikipedia.org/wiki/Newton's_method_in_optimization]
The second-order information is more expensive to compute, but converges more quickly.
(This and the previous slide are from Eric Eaton.)
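
The corresponding one-dimensional sketch, using the same illustrative f as above plus its second derivative (again an assumption for the example):

def newton_minimize(df, d2f, x, tol=1e-8, max_iters=100):
    # Repeat x <- x - f'(x) / f''(x) until the iterates converge.
    for _ in range(max_iters):
        x_next = x - df(x) / d2f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# For f(x) = (x - 3)^2: f'(x) = 2 (x - 3) and f''(x) = 2.
# A quadratic is minimized in a single Newton step.
print(newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x=0.0))  # -> 3.0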

The Shape of a Harder Problem

The Shape of a Yet Harder Problem

One Remedy to Drawbacks of Hill Climbing: Random Restart
Run hill climbing from many random initial states and keep the best result (a sketch follows below).
In the end: some problem spaces are great for hill climbing, and others are terrible.
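
A sketch of random-restart hill climbing on n-queens, reusing the attacking-pairs heuristic from the earlier slide; the state representation, the steepest-descent neighbor rule, and the restart count are illustrative assumptions:

import random

def h(board):
    # Number of attacking pairs (see the n-queens slide); 0 means solved.
    n = len(board)
    return sum(1 for r1 in range(n) for r2 in range(r1 + 1, n)
               if board[r1] == board[r2]
               or abs(board[r1] - board[r2]) == r2 - r1)

def hill_climb(board):
    # Steepest descent on h: move one queen within her row to the best column.
    while True:
        n = len(board)
        best, best_h = board, h(board)
        for r in range(n):
            for c in range(n):
                if c != board[r]:
                    cand = board[:r] + [c] + board[r + 1:]
                    if h(cand) < best_h:
                        best, best_h = cand, h(cand)
        if best is board:                 # no improving neighbor: local minimum
            return board
        board = best

def random_restart(n=8, restarts=50):
    # Restart from random states until a solution appears (or give up).
    for _ in range(restarts):
        board = hill_climb([random.randrange(n) for _ in range(n)])
        if h(board) == 0:
            return board
    return None

print(random_restart())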

[Figure: Fantana muzicala Arad, the musical fountain in Arad, Romania]