Introduction to Artificial Intelligence: Heuristic Search. Ruth Bergman, Fall 2002.


Search Strategies

Uninformed search (= blind search)
– has no information about the number of steps or the path cost from the current state to the goal
Informed search (= heuristic search)
– has some domain-specific information
– we can use this information to speed up search
– e.g. Bucharest is southeast of Arad
– e.g. the number of tiles that are out of place in an 8-puzzle position
– e.g. for the missionaries-and-cannibals problem, prefer moves that carry people across the river quickly

Heuristic Search

Suppose we have one piece of information: a heuristic function
  h(n) = 0 if n is a goal node
  h(n) > 0 if n is not a goal node
We can think of h(n) as a "guess" at how far n is from the goal.

Best-First-Search(state, h)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
  return failure
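As a concrete sketch (not from the original slides), the same procedure in Python; successors, is_goal, and h are hypothetical helpers the caller supplies, a counter breaks ties so states themselves are never compared, and a visited set (an addition beyond the pseudocode) avoids re-expanding states:

    import heapq, itertools

    def best_first_search(start, successors, is_goal, h):
        counter = itertools.count()                 # tie-breaker for equal h values
        frontier = [(h(start), next(counter), start)]
        visited = {start}                           # avoid re-expanding states
        while frontier:
            _, _, state = heapq.heappop(frontier)   # expand most "desirable" node
            if is_goal(state):
                return state
            for child in successors(state):
                if child not in visited:
                    visited.add(child)
                    heapq.heappush(frontier, (h(child), next(counter), child))
        return None                                 # failure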

Heuristics: Example

Travel: h(n) = distance(n, goal)
[Map of Romania showing Oradea, Zerind, Arad, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Neamt, Iasi, Vaslui, Urziceni, Bucharest, Giurgiu, Hirsova, Eforie]

Heuristics: Example

8-puzzle: h(n) = number of tiles out of place
[Figure: an example board position labeled with its h(n) value]

Example (cont.)

[Figures: successor 8-puzzle positions labeled with their h(n) values (e.g. h(n) = 4, 3, 2, 1), showing best-first expansion toward a goal state with h(n) = 0]

Best-First-Search Performance

Completeness
– complete if the search depth is finite, or if every operator decreases h by at least some minimum amount
Time complexity
– depends on how good the heuristic function is
– a "perfect" heuristic function would lead the search directly to the goal
– we rarely have a "perfect" heuristic function
Space complexity
– maintains the fringe of the search in memory
– high storage requirement
Optimality
– solutions are not guaranteed optimal: suppose the heuristic drops to 1 everywhere except along the path on which the solution lies; best-first search will still explore all of those nodes

Iterative Improvement Algorithms

Start with a complete configuration and make modifications to improve its quality.
Consider the states laid out on the surface of a landscape.
Keep track of only the current state => a simplification of Best-First-Search.
Do not look ahead beyond the immediate neighbors of that state.
– e.g. an amnesiac climbing to a summit in thick fog

Iterative Improvement Basic Principle

Hill-Climbing

A simple loop that continually moves in the direction of increasing value. It does not maintain a search tree, so the node data structure need only record the state and its evaluation.
Always try to make changes that improve the current state.
Steepest ascent: pick the highest-valued next state.
"Like climbing Everest in thick fog with amnesia"

Hill-Climbing(state, h)
  current = state
  do forever
    next = maximum-valued successor of current
    if (value(next) < value(current)) return current
    current = next
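A minimal Python sketch of the same loop, assuming hypothetical successors(state) and value(state) helpers (higher value = better). It stops as soon as no successor improves on the current state:

    def hill_climbing(state, successors, value):
        current = state
        while True:
            neighbors = list(successors(current))
            if not neighbors:
                return current
            best = max(neighbors, key=value)     # steepest ascent: best neighbor
            if value(best) <= value(current):    # no uphill move left: local maximum
                return current
            current = best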

Drawbacks

Local maxima: the search halts at a local maximum.
Plateaux: the search wanders in a random walk.
Ridges: the search oscillates from side to side, making little progress.

Random-Restart Hill-Climbing
Conducts a series of hill-climbing searches from randomly generated initial states, as sketched below.
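A sketch of random-restart hill-climbing, reusing the hill_climbing function above; random_state() is a hypothetical generator of random initial states, and the restart count is an arbitrary illustrative choice:

    def random_restart_hill_climbing(random_state, successors, value, restarts=25):
        best = None
        for _ in range(restarts):
            result = hill_climbing(random_state(), successors, value)
            if best is None or value(result) > value(best):
                best = result                    # keep the best local maximum found
        return best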

Hill-Climbing Performance

Completeness
– not complete; it does not use a systematic search method
Time complexity
– depends on the heuristic function
Space complexity
– very low storage requirement
Optimality
– solutions are not guaranteed optimal
– often halts at a locally optimal solution

Simulated Annealing

Take some uphill steps to escape a local minimum.
Instead of picking the best move, pick a random move.
If the move improves the situation, it is executed; otherwise, it is executed with some probability less than 1.
Physical analogy with the annealing process:
– allowing a liquid to cool gradually until it freezes
The heuristic value plays the role of the energy, E.
A temperature parameter, T, controls the speed of convergence.

Simulated-Annealing Algorithm

Simulated-Annealing(state, schedule)
  current = state
  for t = 1, 2, ...
    T = schedule(t)
    if (T = 0) return current
    next = a randomly selected successor of current
    ΔE = value(next) - value(current)
    if (ΔE > 0) current = next
    else current = next with probability e^(ΔE/T)
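A Python sketch of this algorithm; random_successor, value, and schedule are hypothetical helpers, and the acceptance probability e^(ΔE/T) follows the pseudocode above (ΔE is negative for a worsening move, so the probability is below 1):

    import itertools, math, random

    def simulated_annealing(state, random_successor, value, schedule):
        current = state
        for t in itertools.count(1):
            T = schedule(t)
            if T <= 0:
                return current
            nxt = random_successor(current)
            dE = value(nxt) - value(current)
            # always accept improvements; accept worse moves with prob e^(dE/T)
            if dE > 0 or random.random() < math.exp(dE / T):
                current = nxt

For example, the linear schedule on the next slide can be written as schedule = lambda t: max(0, 100 - 5 * t).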

Simulated Annealing

The schedule determines the rate at which the temperature is lowered.
If the schedule lowers T slowly enough, the algorithm will find a global optimum.
High temperature T is characterized by a large portion of accepted uphill moves, whereas at low temperature only downhill moves are accepted.
=> If a suitable annealing schedule is chosen, simulated annealing finds a good solution, though this is not guaranteed to be the absolute minimum.
[Plot: solution quality under a linear cooling schedule, T = 100 - 5t]

Beam Search

Overcomes the storage complexity of Best-First-Search.
Maintains only the k best nodes in the fringe of the search tree (sorted by the heuristic function).
When k = 1, beam search is equivalent to Hill-Climbing.
When k is infinite, beam search is equivalent to Best-First-Search.
If you add a check to avoid repeated states, the memory requirement remains high.
Incomplete: the search may delete the path to the solution.

Beam Search Algorithm

Beam-Search(state, h, k)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
    if (size(nodes) > k) keep only the k best items in nodes
  return failure
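The same procedure as a Python sketch (hypothetical successors, is_goal, and h helpers); the frontier is re-sorted by h and truncated to the k best nodes after each expansion:

    def beam_search(start, successors, is_goal, h, k):
        frontier = [(h(start), start)]
        while frontier:
            _, state = frontier.pop(0)               # best node in the beam
            if is_goal(state):
                return state
            for child in successors(state):
                frontier.append((h(child), child))
            frontier.sort(key=lambda pair: pair[0])  # order by heuristic value
            del frontier[k:]                         # prune to the k best nodes
        return None                                  # failure (path may be pruned)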

Search Performance

Heuristic 1: tiles out of place
Heuristic 2: Manhattan distance*

*Manhattan distance = the total number of horizontal and vertical moves required to bring every tile from its current position to its position in the goal state.

[Figure: an example 8-puzzle position with h1 = 7 and its h2 value, comparing the two heuristics]

=> The choice of heuristic is critical to the performance of a heuristic search algorithm.
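For concreteness, here are Python sketches of the two heuristics, assuming a board is represented as a tuple of 9 entries read row by row with 0 for the blank (a representation chosen here purely for illustration):

    def tiles_out_of_place(state, goal):
        # h1: count non-blank tiles that are not in their goal position
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def manhattan_distance(state, goal):
        # h2: sum over tiles of horizontal + vertical moves to the goal cell
        goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
        total = 0
        for i, tile in enumerate(state):
            if tile != 0:
                row, col = i // 3, i % 3
                goal_row, goal_col = goal_pos[tile]
                total += abs(row - goal_row) + abs(col - goal_col)
        return total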