Lecture 9 Administration Heuristic search, continued


Lecture 9 Administration Heuristic search, continued Local search methods Stochastic search CS-424 Gregory Dudek

Admin issues Assignment 2: see home page or <http://www.cim.mcgill.ca/~dudek/424/424-99-asst2/asst2.html> Web searching Mid term: October 7th, in class, 1 1/2 hours, covering everything up to today’s lecture

Prolog solution

A* revisited Reminder: with A* we want to find the best-cost (C*) path to the goal first. To do this, all we have to do is make sure our cost estimates never exceed the actual costs. Since our desired path has the lowest actual cost, no other path can be fully expanded first, since at some point its estimated cost would have to be higher than the cost C*.
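The argument above can be sketched in code. This is a minimal A* in Python, not from the lecture; the function and parameter names (successors, h, goal_test) are mine. The key property is that with an admissible h, the first goal popped off the priority queue is on a best-cost path.

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* search ordered by f(n) = g(n) + h(n).  If h never
    overestimates the true cost to the goal (admissible), the
    first goal state popped is reached by an optimal path."""
    # Frontier entries: (f, g, state, path); g is the cost so far.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With h(n) = 0 this degenerates to uniform-cost search, which is still optimal, just less informed.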


Heuristic Functions: Example

8-puzzle hC = number of misplaced tiles hM = Manhattan distance Admissible? Which one should we use?
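Both heuristics are easy to write down. A sketch, assuming states are 9-tuples in row-major order with 0 for the blank (a representation I chose, not from the slides):

```python
def h_misplaced(state, goal):
    """hC: number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal):
    """hM: sum over tiles of row distance + column distance to the
    tile's goal square (the blank is not counted)."""
    pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = pos[tile]
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are admissible: a misplaced tile needs at least one move, and each tile needs at least its Manhattan distance in moves.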

Choosing a good heuristic hC <= hM <= hopt Prefer hM Note: Expand all nodes with e(n) = g(n) + h(n) < e* So, g(n) < e* - h(n): a higher h means fewer nodes n. When one h function is larger than another, we say it is more informed. Aside: How would we get an hopt?

Inventing Heuristics Automatically Original rule: A tile can move from sq A to sq B if A is adjacent to B and B is blank. Relaxed rules: (a) A tile can move from sq A to sq B if A is adjacent to B. (b) A tile can move from sq A to sq B if B is blank. (c) A tile can move from sq A to sq B. If all are admissible, combine them by taking the max.
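The "combine by taking the max" step is one line. A sketch (the helper name is mine): the pointwise max of admissible heuristics is still admissible, and at least as informed as any one of them.

```python
def max_heuristic(*hs):
    """Combine admissible heuristics: max(h1(n), ..., hk(n)) never
    overestimates if none of the h_i do, and dominates each h_i."""
    return lambda state: max(h(state) for h in hs)
```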

IDA* Problem with A*: like BFS, it uses too much memory. Can combine with iterative deepening to limit depth of search: only add nodes to search list L if they are within a limit on the e cost. iterative deepening (ID) + A* = IDA* Further improvement: Simplified Memory-Bounded A* (SMA*)
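A minimal IDA* sketch, with names of my choosing: depth-first search cut off when f = g + h exceeds the current bound, with the bound raised on each pass to the smallest f that overflowed it. Memory use is linear in the solution depth, unlike A*.

```python
def ida_star(start, goal_test, successors, h):
    """IDA*: iterative deepening on the f = g + h cost bound."""
    def dfs(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f                 # report the overflowing f value
        if goal_test(state):
            return path
        minimum = float("inf")
        for nxt, cost in successors(state):
            if nxt not in path:      # avoid cycles on the current path
                result = dfs(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = dfs([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None              # no path exists
        bound = result               # smallest f that exceeded the bound
```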

A Different Approach So far, we have considered methods that systematically explore the full search space, possibly using principled pruning (A* etc.). The current best such algorithms (IDA* / SMA*) can handle search spaces of up to 10^100 states or around 500 binary-valued variables. (These are "ballpark" figures only!)

And if that’s not enough? What if we have 10,000 or 100,000 variables / search spaces of up to 10^30,000 states? A completely different kind of method is called for: Local Search Methods or Iterative Improvement Methods

When? Applicable when we're interested in the goal state itself, not in how to get there. E.g. N-Queens, VLSI layout, or map coloring.

Iterative Improvement Simplest case: Start with some (initial) state Evaluate its quality Apply a perturbation ∂ to move to an adjacent state Evaluate the new state Accept the new state sometimes, based on a decision rule D(∂) ∂ can be random D(.) can be "always". Or…
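The steps above fit a short generic template. A sketch with names of my own (perturb, accept), assuming lower energy is better; the decision rule D(∂) is passed in as `accept`:

```python
import random

def iterative_improvement(state, energy, perturb, accept, steps=1000):
    """Generic local search loop: perturb the current state and let
    the decision rule `accept` judge the (signed) change in energy."""
    best = state
    for _ in range(steps):
        candidate = perturb(state)
        delta = energy(candidate) - energy(state)
        if accept(delta):
            state = candidate
        if energy(state) < energy(best):
            best = state
    return best
```

Plugging in a random perturbation with "accept only improvements" gives stochastic hill climbing; other choices of `accept` give the methods on the following slides.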

Can we be smarter? Can we do something smarter than A) moving randomly and B) accepting every change? Yes: A) move in the “right” direction B) accept changes that get us towards a better solution

Hill-climbing search Often used when we have a solution/path/setting that we want to improve, and a suitable space: an iterative improvement algorithm. Always move to a successor that increases the desirability (can minimize or maximize cost). Expand children of current node. Sort them by expected distance to goal (closest first). Put them all at the front of L. Depth-first flavor.
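A minimal steepest-ascent version, with names I chose (value, successors): always move to the best successor, and stop when no successor improves on the current state, i.e. at a local maximum.

```python
def hill_climb(state, value, successors):
    """Steepest-ascent hill climbing: repeatedly move to the best
    successor; stop at a local maximum (or plateau)."""
    while True:
        neighbors = successors(state)
        if not neighbors:
            return state
        best = max(neighbors, key=value)
        if value(best) <= value(state):
            return state     # no successor improves: local maximum
        state = best
```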

Hill-climbing problems Hill climbing corresponds to function optimization by simply moving uphill along the gradient. Why isn’t this a reliable way of getting to a global maximum? 1. Local maxima 2. Plateaus 3. Ridges E.g. trajectory planning, Scrabble.

Gradient ascent In continuous spaces, there are better ways to perform hill climbing: Acceleration methods — try to avoid sliding along a ridge. Conjugate gradient methods — take steps that do not counteract one another.

Stochastic search Simply moving uphill (or downhill) isn’t good enough. Sometimes, we have to go against the obvious trend. Idea: Randomly make a “non-intuitive” move. Key question: How often? One answer (simple intuition): Act more randomly when you are lost, less randomly when you’re not.

Simulated Annealing Initially act more randomly. As time passes, we assume we are doing better and act less randomly. Analogy: associate each state with an energy or “goodness”. Specifics: Pick a random successor s2 of the current state s1. Compute ∂E = energy(s2) - energy(s1). If ∂E is good (positive), go to the new state. If ∂E is bad (negative), still take it sometimes, with probability e^(∂E/T). The annealing schedule specifies how to change T over time.
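A sketch of the rule above, with function names of my own; energy is a "goodness" (higher is better) as on the slide, so a bad move has ∂E < 0 and is taken with probability e^(∂E/T). The annealing `schedule` maps the step number to a temperature T.

```python
import math, random

def simulated_annealing(state, energy, neighbor, schedule, steps=5000):
    """Accept every improving move; accept a worsening move with
    probability exp(dE / T), where dE < 0 and T comes from the
    annealing schedule.  Returns the best state seen."""
    best = state
    for t in range(steps):
        T = schedule(t)
        if T <= 0:
            break                      # frozen: stop annealing
        s2 = neighbor(state)
        dE = energy(s2) - energy(state)
        if dE > 0 or random.random() < math.exp(dE / T):
            state = s2
        if energy(state) > energy(best):
            best = state
    return best
```

At high T, exp(∂E/T) is close to 1 and the walk is nearly random; as T decreases, bad moves are accepted less and less often, and the process behaves like hill climbing.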

Genetic Algorithms (GA’s) Plastic transparencies & blackboard.

Adversary Search Basic formalism: 2 players, each trying to beat the other (competing objective functions). Complete information. Alternating moves for the 2 players.

Minimax Divide the tree into plies. Propagate information up from the terminal nodes. Storage needs grow exponentially with depth.
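The propagation step can be sketched in a few lines. A minimal depth-limited minimax, with names of my choosing (successors, evaluate): MAX backs up the largest child value, MIN the smallest, and leaves (or the depth cutoff) use a static evaluator.

```python
def minimax(state, depth, maximizing, successors, evaluate):
    """Minimax to a fixed depth in plies.  Recursion depth, and
    hence storage, grows with the number of plies searched."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)       # terminal node or depth cutoff
    values = [minimax(c, depth - 1, not maximizing, successors, evaluate)
              for c in children]
    return max(values) if maximizing else min(values)
```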