1 Artificial Intelligence CS 165A – Tuesday, October 16, 2007 – Informed (heuristic) search methods (Ch. 4)

2 Notes
HW #1
–Discussion sessions Wednesday
–Office hours
What's an example of a problem where you don't need to know the path to a goal? (Finding the goal is enough.)
–Does this starting position in the 8-puzzle have a solution?
–Is this theorem provable?
–Traveling salesman problem ("iterative improvement" approach)
–Will this move lead me to checkmate, or be checkmated, within N moves?

3 Informed ("Heuristic") Search
If AI is "the attempt to solve NP-complete problems in polynomial time," blind search doesn't help
Then we typically have two choices:
–Design algorithms that give good average-case behavior (even if the worst-case behavior is terrible)
–Design algorithms that are approximate – give acceptable results (even if not optimal) in acceptable time/memory/etc.
Informed search generally tries to do both of these

4 Informed Search (cont.)
The path cost g(n) only measures the "past"; it tells nothing about the "future"
–I.e., it measures cost from the initial state to the current state only
We wish to minimize the overall cost, so we need an estimate of the "future" cost
–I.e., from the current state to the goal state
–Overall cost = past + future costs
This estimate is a heuristic – a "rule of thumb"
–A function that estimates the solution cost
–A rule expressing informal belief

5 How to choose a heuristic?
One method: derive a heuristic from the exact solution cost of a relaxed (less restricted) version of the problem
A standard distance metric
–E.g., Euclidean distance, Hamming distance, …
Use common sense?
A good heuristic function must be efficient as well as accurate

6 Heuristics
What's a heuristic for:
–Driving distance (or time) from city A to city B?
–The 8-puzzle problem?
–M&C?
–Time to complete a homework assignment?
–Robot navigation?
–Reaching the summit?
–Medical diagnosis?
Admissible heuristic
–Does not overestimate the cost to reach the goal
–"Optimistic"
Are the above heuristics admissible?
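As a concrete illustration (not from the slides), one standard admissible heuristic for the 8-puzzle is the sum of Manhattan distances of the tiles from their goal squares; since every move shifts one tile by one square, it never overestimates. A minimal sketch, assuming states are 9-tuples in row-major order with 0 for the blank:

import itertools  # not required here, shown only if you extend to enumerate states

def manhattan_h(state, goal):
    # Sum of Manhattan distances of each tile from its goal square.
    # state, goal: tuples of length 9 (row-major 3x3 board), 0 = blank.
    total = 0
    for tile in range(1, 9):                  # ignore the blank
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Example: a state one move from the goal has h = 1
# goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
# state = (1, 2, 3, 4, 5, 6, 7, 0, 8)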

7 Informed Search Methods
Best-first search
–Greedy best-first search
–A* search
Memory-bounded search
–IDA*
–SMA*
Iterated improvement algorithms
–Hill-climbing
–Simulated annealing
As with blind search, the strategy is defined by choosing the order of node expansion
Not covering in detail

8 Best-First Search
Defines a family of search algorithms
Uses an evaluation function at each node to estimate the desirability of expanding the node
A queuing function sorts the unexpanded nodes in decreasing order of desirability, according to the evaluation function
–Therefore, the one with the "highest desirability" is expanded first
–We've already seen this with uniform cost search: expand the node n with the lowest g(n)

9 Best-First Search
What's the difference between QUEUING-FN and EVAL-FN?
–EVAL-FN takes one argument (the node) and returns a scalar value (e.g., approximate distance to goal)
–QUEUING-FN takes two arguments, a queue of nodes and a function (e.g., less-than), and returns a queue of nodes

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution or failure
  QUEUING-FN ← a function that orders nodes by EVAL-FN
  return GENERAL-SEARCH(problem, QUEUING-FN)

But what evaluation function to use?
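The following is a minimal Python sketch of this scheme (not from the slides); the priority queue plays the role of the QUEUING-FN, and the problem interface (initial_state, is_goal, successors returning (next_state, step_cost) pairs) is an assumption made for this example.

import heapq, itertools

def best_first_search(problem, eval_fn):
    # Generic best-first search: repeatedly expand the frontier node with the
    # smallest eval_fn(state, g) value.
    # Assumed interface: problem.initial_state, problem.is_goal(state),
    # problem.successors(state) -> iterable of (next_state, step_cost) pairs.
    tie = itertools.count()              # tie-breaker so the heap never compares states
    start = problem.initial_state
    frontier = [(eval_fn(start, 0), next(tie), start, 0, [start])]
    best_g = {}                          # cheapest cost-so-far found for each state
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path, g               # solution path and its cost
        if state in best_g and best_g[state] <= g:
            continue                     # already expanded via a cheaper route
        best_g[state] = g
        for nxt, cost in problem.successors(state):
            heapq.heappush(frontier,
                           (eval_fn(nxt, g + cost), next(tie), nxt, g + cost, path + [nxt]))
    return None                          # failure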

10 Greedy Best-First Search
Uses a heuristic function, h(n), as the EVAL-FN
h(n) estimates the cost of the best path from state n to a goal state
–h(goal) = 0
Greedy search – always expand the node that appears to be closest to the goal (i.e., with the smallest h)
–Instant gratification, hence "greedy"

function GREEDY-SEARCH(problem, h) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, h)

Greedy search often performs well
–It doesn't always find the best solution
–It may get stuck
–It depends on the particular h function
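In terms of the best_first_search sketch above, greedy best-first search is simply the case where the evaluation function ignores the cost accumulated so far:

def greedy_best_first_search(problem, h):
    # Greedy best-first: f(n) = h(n); the path cost g is ignored.
    return best_first_search(problem, eval_fn=lambda state, g: h(state))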

11 GBFS Example
Use h_SLD(n) – straight-line distance to the goal (admissible?)

12 GBFS Example (cont.)
Expanding Arad (h = 366) gives Zerind (h = 374), Sibiu (h = 253), Timisoara (h = 329)
Expanding Sibiu gives Oradea (h = 380), Arad (h = 366), Fagaras (h = 178), Rimnicu Vilcea (h = 193)
Expanding Fagaras gives Sibiu (h = 253) and Bucharest (h = 0)
Greedy path found: Arad → Sibiu → Fagaras → Bucharest, d = 450 km
Is this the optimal solution? No: Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is 418 km
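For reference, with the standard road distances of the AIMA Romania map these costs work out as 140 + 99 + 211 = 450 km for the greedy path and 140 + 80 + 97 + 101 = 418 km for the optimal one.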

13 Don't be greedy?
Greedy methods get stuck in local minima (maxima)
[Figure: robot-navigation example – Robot, GOAL]

14 Greedy Best-First Search
Optimal? No
Complete? No
Time complexity? Exponential: O(b^m) (worst case)
Space complexity? Exponential: O(b^m) – keeps all nodes in memory
A good heuristic function reduces the (practical) complexity substantially!

15 "A" Search
Uniform-cost search minimizes g(n) ("past" cost)
Greedy search minimizes h(n) ("expected" or "future" cost)
"A Search" combines the two:
–Minimize f(n) = g(n) + h(n)
–Accounts for the "past" and the "future"
–Estimates the cheapest solution (complete path) through node n

16 A* Search
"A* Search" is A Search with an admissible h
–h is optimistic – it never overestimates the cost to the goal
–h(n) ≤ true cost to reach the goal – h is a "Pollyanna"
–So f(n) never overestimates the actual cost of the best solution passing through node n

function A*-SEARCH(problem, h) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, f)
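Continuing the same sketch, A* just sets f(n) = g(n) + h(n); the heuristic h passed in is assumed to be admissible, as the slide requires:

def a_star_search(problem, h):
    # A*: f(n) = g(n) + h(n), with h admissible (h(n) <= true remaining cost).
    return best_first_search(problem, eval_fn=lambda state, g: g + h(state))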

17 A* Example f(n) = g(n) + h(n)

18 A* Example (cont.)
Expanding Arad (f = 0 + 366 = 366) gives Zerind (f = 75 + 374 = 449), Sibiu (f = 140 + 253 = 393), Timisoara (f = 118 + 329 = 447)
Expanding Sibiu gives Oradea (f = 291 + 380 = 671), Arad (f = 280 + 366 = 646), Fagaras (f = 239 + 178 = 417), Rimnicu Vilcea (f = 220 + 193 = 413)

19 A* Search
Optimal? Yes
Complete? Yes
Time complexity? Exponential; better under some conditions
Space complexity? Exponential; keeps all nodes in memory
Good news: A* is optimally efficient for any particular h(n)
–That is, no other optimal algorithm is guaranteed to expand fewer nodes

20 A* Search
What if g(n) = 0?
–Greedy best-first search
What if h(n) = 0?
–Uniform cost search
What if h(n) = 0 and g(n) = depth(n)?
–Breadth-first search
How would you make depth-first search? g(n) = –depth(n)
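Expressed with the best_first_search sketch above, these special cases are just different choices of eval_fn (a rough illustration, assuming unit step costs so that g(n) equals depth(n)):

# Special cases of best_first_search(problem, eval_fn), for some heuristic h:
#   greedy best-first:   eval_fn = lambda s, g: h(s)   (g effectively 0)
#   uniform cost:        eval_fn = lambda s, g: g      (h = 0)
#   breadth-first:       eval_fn = lambda s, g: g      (h = 0, unit costs, so g = depth)
#   depth-first-like:    eval_fn = lambda s, g: -g     (g = -depth)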

21 Memory Bounded Search
Memory, not computation, is usually the limiting factor in search problems
–Certainly true for A* search
Why? What takes up memory in A* search?
IDA* and SMA* are designed to conserve memory

22 Iterative Deepening A* (IDA*)
IDA* is an optimal, memory-bounded, heuristic search algorithm
–Requires space proportional to the longest path that it explores
–Space estimate: O(bd)
Like Iterative Deepening Search
–Uses an f-cost limit rather than a depth limit
–In IDS, the depth limit is incremented after each round
–In IDA*, the f-cost limit is updated after each round
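A minimal recursive sketch of this idea, assuming the same hypothetical problem interface as before and an admissible heuristic h; an illustration, not the exact formulation from the lecture:

import math

def ida_star(problem, h):
    # Iterative deepening A*: repeated depth-first searches bounded by an f-cost
    # limit; after each round the limit rises to the smallest f that exceeded it.
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                      # report how far we overshot
        if problem.is_goal(state):
            return f, path
        smallest = math.inf
        for nxt, cost in problem.successors(state):
            if nxt in path:                     # avoid cycles on the current path
                continue
            t, found = dfs(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = h(problem.initial_state)
    while True:
        bound, solution = dfs(problem.initial_state, 0, bound, [problem.initial_state])
        if solution is not None:
            return solution
        if bound == math.inf:
            return None                         # no solution exists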

23 Simplified Memory-Bounded A* (SMA*)
IDA* only keeps around the current f-cost limit
–It can check the current path for repeated states, but future paths may repeat states already expanded
SMA* uses more memory to keep track of repeated states
–Up to the limit of allocated memory
–Nodes with high f-cost are dropped from the queue when memory is filled ("forgotten nodes")
Optimality and completeness depend on how much memory is available with respect to the optimal solution
–Produces the best solution that can be reached given the available memory

24 The cost of being informed
Typical performance of informed search methods is much better than uninformed methods
–Assuming reasonable heuristics exist
However, there is a tradeoff involved
–Evaluating the desirability of a node (h) can be a non-trivial problem
–E.g., theorem proving: how much closer to the theorem are we if we apply this rule?
–The cost of evaluation (the heuristic) can be high; it can be a difficult search problem itself

25 Cost tradeoff
[Figure: cost vs. "informedness" – cost of expanding nodes (rule application), cost of evaluating nodes (control strategy), and overall cost]
There may be different optima for computation and memory

26 Iterative Improvement Algorithms
An iterative improvement algorithm starts with a (possibly random) proposed solution, and then makes modifications to improve its quality
–Each state is a (proposed) solution
–Usually keeps information on the current state only
Generally for problems in which non-optimal solutions are known, or easily generated
–Task: find the solution that best satisfies the goal test
–State space for an IIA = set of all (proposed) solutions
–Examples: VLSI layout, TSP, n-queens
–Not appropriate for all problems!
[Figure: sequence of states S0 → S1 → S2 → … → Sn]

27 Example
n-Queens problem: put n queens on an n x n chess board with no two queens on the same row, column, or diagonal
[Figure: boards with evaluation values – Start: -5, after 1 iteration: -3, Goal: 0]
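The values shown with the boards correspond to one natural evaluation function: minus the number of attacking pairs, so 0 means a solution. A small sketch of that scoring, assuming a state is a tuple where state[i] gives the row of the queen in column i:

def queens_value(state):
    # Negative number of attacking pairs; 0 iff no two queens share a row or
    # diagonal (columns are distinct by construction).
    conflicts = 0
    n = len(state)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diag = abs(state[i] - state[j]) == j - i
            conflicts += same_row or same_diag
    return -conflicts

# e.g. queens_value((2, 0, 3, 1)) == 0  -> a valid 4-queens solution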

28 Example
Traveling Salesman Problem (TSP)
–Start with any path through the cities
–Change two links at a time
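"Changing two links at a time" is commonly realized as a 2-opt move: remove two edges of the tour and reconnect it by reversing the segment in between. A small sketch, where the tour is a list of city indices and dist is a symmetric distance matrix (both are assumptions for this example):

def two_opt_move(tour, i, k):
    # Return a new tour with the segment tour[i..k] reversed.
    # This replaces the two links (i-1, i) and (k, k+1) with (i-1, k) and (i, k+1).
    return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]

def tour_length(tour, dist):
    # Total length of the closed tour under a symmetric distance matrix.
    return sum(dist[tour[t]][tour[(t + 1) % len(tour)]] for t in range(len(tour)))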

29 Search vs. iterative improvement
[Figure: side-by-side comparison of search and iterative improvement]

30 Iterative Improvement Algorithms
Two classes of iterative improvement algorithms
–Hill-climbing
–Simulated annealing
Analogy:
–You are placed at a random point in some unfamiliar terrain (the solution space) and told to reach the highest peak. It is dark, and you have only a very weak flashlight (no map, compass, etc.).
–What should you do?

31 Hill Climbing, a.k.a. Gradient Descent
Strategy: move in the direction of increasing value (decreasing cost)
–Assumes a reasonable evaluation method!!!
–n-queens? TSP? VLSI layout?
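A minimal sketch of this strategy for n-queens (steepest ascent, reusing the queens_value scoring sketched above; the function names and the move set – one queen moved within its column – are assumptions for this example):

import random

def hill_climb_queens(n, value=queens_value, max_steps=1000):
    # Steepest-ascent hill climbing: evaluate all neighbors, keep only the best.
    # Returns the best state reached, which may be a local maximum.
    state = tuple(random.randrange(n) for _ in range(n))
    for _ in range(max_steps):
        # Neighbors: move a single queen to another row within its column.
        neighbors = [state[:c] + (r,) + state[c + 1:]
                     for c in range(n) for r in range(n) if r != state[c]]
        best = max(neighbors, key=value)
        if value(best) <= value(state):
            return state                  # stuck: local maximum or plateau
        state = best
    return state

Random-restart hill climbing, mentioned two slides below, simply reruns this from fresh random states until a state with value 0 is found.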

32 Hill climbing example
[Figure: a "measure of value" curve over the "solution space"]

33 Hill climbing issues
[Figure: value curve over the "solution space"]
Strategy:
–Climb until goal or stuck
–If stuck, restart in a random location

34 Hill climbing issues
Does not maintain a search tree
–Evaluates the successor states, and keeps only the best one
–Greedy strategy
Drawbacks
–Local maxima
–Plateaus and ridges
Can randomize (re-)starting locations and local strategies when stuck
–"Random-restart hill-climbing"
–But how do you know when you're stuck?

35 Simulated Annealing
Similar to hill-climbing
–But includes a random element
–Sometimes takes "bad" steps to escape local maxima
–Motivated by the roughly analogous physical process of annealing: heating and then slowly cooling a substance to obtain a strong crystalline structure
Analogy with physical annealing:
–T is temperature, E is energy
–A schedule determines the rate at which T is lowered
–Value(state) measures the state's "goodness"
–ΔE measures the increase in "goodness" resulting from the new state
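A minimal sketch of the acceptance rule in the maximization form used here: an improving move (ΔE > 0) is always taken, while a worsening move is taken with probability e^(ΔE/T). The geometric cooling schedule and the neighbor function are assumptions for this example.

import math, random

def simulated_annealing(initial, neighbor, value, t0=1.0, cooling=0.995, t_min=1e-4):
    # Maximize value(state). neighbor(state) returns a random successor.
    # Worse moves are accepted with probability exp(delta_e / T), so escapes
    # from local maxima become rarer as the temperature T is lowered.
    state, t = initial, t0
    best = state
    while t > t_min:
        candidate = neighbor(state)
        delta_e = value(candidate) - value(state)
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            state = candidate
            if value(state) > value(best):
                best = state
        t *= cooling                      # schedule: slowly lower the temperature
    return best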