Informed Search
Reading: Chapter 4.5
HW #1 out today, due Sept 26th

2 Pending Questions
The state space for the word problem: isn't it all permutations of the letters, rather than the power set? Yes.

3 Agenda
Introduction to heuristic search
Greedy search
Examples: 8-puzzle, word puzzle
A* search – algorithm, admissible heuristics, optimality
Heuristics for other problems
Beginning of online search, if time

4 Heuristics
Suppose the 8-puzzle is off by one tile. Is there a way to choose the best move next?
Good news: yes! We can use domain knowledge, a heuristic, to choose the best move.

6 Nature of heuristics
Domain knowledge: some knowledge about the game or problem that helps us choose
Heuristic: a guess about which choice is best, not an exact answer
Heuristic function h(n): an estimate of the distance from the current node to the goal

7 Heuristics for the 8-puzzle
Number of tiles out of place (h1)
Manhattan distance (h2): the sum of the distances of each tile from its goal position
Tiles can only move up, down, left, or right → distances are counted in city blocks

8 Examples: compared with the goal state, one sample configuration has h1 = 1 and h2 = 1; another has h1 = 5 and h2 = 7.
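
To make the two heuristics concrete, here is a minimal Python sketch (my own illustration, not code from the lecture); the row-by-row state encoding, the goal ordering, and the names h1/h2 are assumptions:

# Two classic 8-puzzle heuristics: h1 = misplaced tiles, h2 = Manhattan distance.
# A state is a tuple of 9 entries read row by row; 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def h1(state, goal=GOAL):
    """Number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of the Manhattan (city-block) distances of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

state = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one slide away from the goal
print(h1(state), h2(state))           # prints: 1 1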

9 Best-first searches
A class of search algorithms that choose the "best" node to expand next
Use an evaluation function for each node: an estimate of its desirability
Implementation: keep the fringe (OPEN list) sorted in order of desirability
Today: greedy search and A* search

10 Greedy search
Evaluation function = heuristic function
Expand the node that appears to be closest to the goal

11 Greedy Search
OPEN = start node; CLOSED = empty
While OPEN is not empty do
  Remove the leftmost state from OPEN, call it X
  If X = goal state, return success
  Put X on CLOSED
  SUCCESSORS = successor function(X)
  Remove any successors already on OPEN or CLOSED
  Compute the heuristic function h for each remaining successor
  Put the remaining successors on OPEN
  Sort the nodes on OPEN by value of the heuristic function
End while
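
A runnable Python sketch of the loop above, assuming the problem supplies a successors() function, a goal_test() predicate, and a heuristic h() (all hypothetical names); a priority queue stands in for the sorted OPEN list:

import heapq, itertools

def greedy_best_first(start, goal_test, successors, h):
    """Greedy best-first search: always expand the OPEN node with the lowest h-value."""
    tie = itertools.count()                       # tie-breaker so the heap never compares states
    open_heap = [(h(start), next(tie), start)]    # OPEN, ordered by heuristic value
    parent = {start: None}                        # doubles as the "already seen" set
    closed = set()                                # CLOSED: states already expanded
    while open_heap:
        _, _, state = heapq.heappop(open_heap)    # remove the best (leftmost) state
        if goal_test(state):
            path = []
            while state is not None:              # rebuild the path via parent links
                path.append(state)
                state = parent[state]
            return path[::-1]
        closed.add(state)
        for succ in successors(state):
            if succ in closed or succ in parent:  # drop successors already on OPEN or CLOSED
                continue
            parent[succ] = state
            heapq.heappush(open_heap, (h(succ), next(tie), succ))
    return None                                   # OPEN exhausted: failure

Ordering OPEN by h alone is what makes this greedy: the cost already paid to reach a node is ignored, which is why the search is fast but not optimal.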

13 Fill-In Station

14 Language Models
Knowledge about which letters in words are most frequent, based on a sample of language
What is the probability that A is the first letter?
Having seen A, what do we predict is the next letter?
Tune the language model to the situation: 3-letter words only, medical vocabulary, newspaper text

15 Formulation
Path: fill the grid left to right, top to bottom, choosing one letter at a time
Heuristic: based on the language model
  Cost of choosing letter A first = 1 – probability that A comes next
  Subtract subsequent probabilities for later letters
Successor function: choose the next best letter; if it is the 3rd in a row, check whether the row is a word
Goal test?
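
As a toy illustration of that cost (the probabilities below are invented for illustration, not the class's actual model):

# Toy unigram model: probability that a given letter appears next (made-up numbers).
P_NEXT = {"A": 0.11, "I": 0.07, "L": 0.04, "B": 0.05}

def letter_cost(letter, model=P_NEXT):
    """Cost of committing to a letter = 1 - P(letter), so likely letters are cheap."""
    return 1.0 - model.get(letter, 0.0)

print(letter_cost("A"))   # 0.89: the most probable letter here, hence the cheapest choice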

16 Solution to the 2nd puzzle
APE
ILK
LYE

17 How does the heuristic help us get there?

19 Analysis of Greedy Search
Behaves like depth-first search
Not optimal
Complete if the state space is finite

20 A* Search
Try to expand the node that is on the least-cost path to the goal
Evaluation function: f(n) = g(n) + h(n)
  g(n) is the cost from the initial state to node n
  h(n) is the heuristic function: the estimated cost from n to the goal
  f(n) is the estimated cost of the cheapest solution that passes through n
If h(n) is an underestimate of the true cost to the goal:
  A* is complete
  A* is optimal
  A* is optimally efficient: no other algorithm using h(n) is guaranteed to expand fewer states

21 A* Search
OPEN = start node; CLOSED = empty
While OPEN is not empty do
  Remove the leftmost state from OPEN, call it X
  If X = goal state, return success
  Put X on CLOSED
  SUCCESSORS = successor function(X)
  Remove any successors already on OPEN or CLOSED
  Compute f(n) = g(n) + h(n) for each remaining successor
  Put the remaining successors on OPEN
  Sort the nodes on OPEN by value of f(n)
End while
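
A runnable Python sketch of this algorithm, again with hypothetical successors()/goal_test()/h() supplied by the problem; successors() is assumed to yield (next_state, step_cost) pairs so that g(n) can be accumulated:

import heapq, itertools

def a_star(start, goal_test, successors, h):
    """A* search: expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    tie = itertools.count()                        # tie-breaker for the heap
    g = {start: 0}                                 # g(n): cheapest known cost from the start
    parent = {start: None}
    open_heap = [(h(start), next(tie), start)]     # OPEN, ordered by f = g + h
    closed = set()                                 # CLOSED: states already expanded
    while open_heap:
        _, _, state = heapq.heappop(open_heap)
        if state in closed:                        # stale heap entry, already expanded
            continue
        if goal_test(state):
            cost, path = g[state], []
            while state is not None:               # rebuild the solution path
                path.append(state)
                state = parent[state]
            return path[::-1], cost
        closed.add(state)
        for succ, step_cost in successors(state):
            new_g = g[state] + step_cost
            if succ not in g or new_g < g[succ]:   # found a cheaper route to succ
                g[succ] = new_g
                parent[succ] = state
                closed.discard(succ)               # reopen succ if it had been expanded
                heapq.heappush(open_heap, (new_g + h(succ), next(tie), succ))
    return None, float("inf")                      # no solution

The only change from the greedy sketch is the priority, f = g + h instead of h alone, plus the bookkeeping needed to re-open a state when a cheaper path to it turns up.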

26 Admissible heuristics
A heuristic that never overestimates the cost to the goal
h1 and h2 for the 8-puzzle are admissible heuristics
Consistency: the estimated cost of reaching the goal from n is no greater than the step cost of getting to a successor n' plus the estimated cost to the goal from n':
  h(n) ≤ c(n, a, n') + h(n')
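
A quick, self-contained spot-check of this inequality for the Manhattan-distance heuristic h2, under the assumption that every slide move costs 1 (the move generator and the random-walk test are my own sketch):

import random

def h2(state):
    """Manhattan-distance heuristic; goal = tiles 1..8 in order with the blank last."""
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    return sum(abs(i // 3 - goal.index(t) // 3) + abs(i % 3 - goal.index(t) % 3)
               for i, t in enumerate(state) if t != 0)

def successors(state):
    """All states reachable by sliding one tile into the blank (step cost 1 each)."""
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            new = list(state)
            new[blank], new[nr * 3 + nc] = new[nr * 3 + nc], new[blank]
            yield tuple(new)

# Spot-check h(n) <= c(n, a, n') + h(n') on states sampled by a random walk from the goal.
state = (1, 2, 3, 4, 5, 6, 7, 8, 0)
for _ in range(1000):
    state = random.choice(list(successors(state)))
    assert all(h2(state) <= 1 + h2(nxt) for nxt in successors(state))
print("h2 passed the consistency spot-check")

Because sliding a single tile changes the Manhattan distance by at most 1 and each step costs exactly 1, the assertion never fires; that is precisely the consistency condition.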

27 Which heuristic is better?
"Better" means that fewer nodes will be expanded during search
h2 dominates h1 if h2(n) >= h1(n) for every node n
Intuitively, if both are underestimates, then h2 is more accurate
Using the more informed (dominating) heuristic, A* is guaranteed to expand no more nodes of the search space
Which heuristic is better for the 8-puzzle?

30 Relaxed Problems
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1 gives the length of the shortest solution
If the rules are relaxed so that a tile can move to any adjacent square, then h2 gives the length of the shortest solution
Key: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

31 Heuristics for other problems
Shortest path from one city to another: straight-line distance
Touring problem: visit every city exactly once
Challenge: what is an admissible heuristic for the word problem?

32 End of Class Questions

33 Online Search
The agent operates by interleaving computation and action; there is no time for extended deliberation
The agent only knows:
  Actions(s)
  The step-cost function c(s, a, s')
  Goal-test(s)
It cannot access the successors of a state without actually trying the actions

34 Assumptions
The agent recognizes a state it has seen before
Actions are deterministic
Admissible heuristics are available
Competitive ratio: compare the cost the agent actually travels with the cost of the actual shortest path

35 What properties of search are desirable?
Will A* work? We need to expand nodes in a local order:
  Depth-first search
  A variant of greedy search
Difference from offline search: the agent must physically backtrack
  Record the states to which the agent can backtrack and has not yet explored

36 Depth-first
OPEN = start node; CLOSED = empty
While OPEN is not empty do
  Remove the leftmost state from OPEN, call it X
  If X = goal state, return success
  Put X on CLOSED
  SUCCESSORS = successor function(X)
  Remove any successors already on OPEN or CLOSED
  Put the remaining successors on the left end of OPEN
End while

37 Online DFS - setup
Inputs: s', a percept that identifies the current state
Static (persistent) variables:
  result: a table indexed by action and state, initially empty
  unexplored: a table listing, for each visited state, the actions not yet tried
  unbacktracked: a table listing, for each visited state, the backtracks not yet tried
  s, a: the previous state and action, initially null

38 Online DFS – the algorithm
If Goal-test(s') then return stop
If s' is a new state then unexplored[s'] ← Actions(s')
If s is not null then do
  result[a, s] ← s'
  Add s to the front of unbacktracked[s']
If unexplored[s'] is empty
  Then if unbacktracked[s'] is also empty, return stop
  Else a ← the action b such that result[b, s'] = pop(unbacktracked[s'])
Else a ← pop(unexplored[s'])
s ← s'
Return a
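
A Python sketch of this agent as a class, following the slide's tables (result, unexplored, unbacktracked). The calling convention, that the environment invokes the agent once per step with the current state and executes the returned action, is an assumption, as are the method and parameter names:

class OnlineDFSAgent:
    """Online depth-first agent: explores by acting, keeping tables of what it has learned."""

    def __init__(self, actions, goal_test):
        self.actions = actions            # actions(s): legal actions in state s
        self.goal_test = goal_test
        self.result = {}                  # result[(a, s)]: state reached by doing a in s
        self.unexplored = {}              # per state: actions not yet tried
        self.unbacktracked = {}           # per state: predecessor states not yet backtracked to
        self.s = None                     # previous state
        self.a = None                     # previous action

    def __call__(self, s_prime):
        """Given the current state (percept), return the next action, or None for 'stop'."""
        if self.goal_test(s_prime):
            return None
        if s_prime not in self.unexplored:                  # a new state
            self.unexplored[s_prime] = list(self.actions(s_prime))
            self.unbacktracked[s_prime] = []
        if self.s is not None:
            self.result[(self.a, self.s)] = s_prime
            self.unbacktracked[s_prime].insert(0, self.s)   # add s to the front
        if not self.unexplored[s_prime]:
            if not self.unbacktracked[s_prime]:
                return None                                 # nowhere left to go
            back_to = self.unbacktracked[s_prime].pop(0)
            # Physically backtrack: pick the action whose recorded result leads back.
            # This assumes actions are reversible, so such an action exists.
            self.a = next(b for b in self.actions(s_prime)
                          if self.result.get((b, s_prime)) == back_to)
        else:
            self.a = self.unexplored[s_prime].pop()
        self.s = s_prime
        return self.a

As on the previous slide, the agent can only backtrack by executing an action that physically leads back to the earlier state, so the sketch assumes reversible actions.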

39 Homework