
Class of 28th August

Announcements
– Lisp assignment deadline extended: will take it until 6th September (Thursday), in class.
– Rao away on 11th and 13th.
   – Makeup classes next week (probably replayed in the class later).
– Lisp intro continuation?

Review

Review: This one already assumes that the “sensors → features” mapping has been done! Even basic survival needs state information.

How the course topics stack up…
– Representation Mechanisms: Logic (propositional; first order), Probabilistic logic
– Learning the models
– Search: Blind, Informed
– Inference: Logical resolution, Bayesian inference

Class of 30th August

Announcements
Makeup class (will be on CSP):
– 9-11am Wed (GWC 308), OR
– 1:30-3:30pm Wed (GWC 110)
Do you want a replay of the tape in class, or should the tape be put in the Media Library?

What happens when the domain is inaccessible?

Search in the multi-state (inaccessible) version

General Search??

Breadth-first search on a uniform tree of b=10. Assume 1000 nodes expanded/sec, 100 bytes/node.
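To make these numbers concrete, here is a small back-of-the-envelope calculator (a sketch using only the slide's stated assumptions: b = 10, 1000 nodes expanded/sec, 100 bytes/node):

```python
# Sketch: time and memory for BFS on a uniform tree, under the slide's
# assumptions (b = 10, 1000 nodes expanded per second, 100 bytes per node).
b = 10
nodes_per_sec = 1_000
bytes_per_node = 100

for depth in (2, 4, 6, 8, 10, 12, 14):
    # Nodes in a uniform tree up to this depth: 1 + b + b^2 + ... + b^depth
    nodes = (b ** (depth + 1) - 1) // (b - 1)
    seconds = nodes / nodes_per_sec
    gigabytes = nodes * bytes_per_node / 1e9
    print(f"depth {depth:2d}: {nodes:.1e} nodes, "
          f"{seconds:.1e} sec, {gigabytes:.1e} GB")
```

By depth 12 this is already on the order of 10^12 nodes: roughly 35 years at 1000 nodes/sec, and over 100 terabytes of memory.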

Class of 4th September

Announcements
Makeup class (will be on CSP):
– 9-11am Wed (GWC 308): CSP lecture
– 1:30-3:30pm Wed (GWC 110): Project 1 lecture

Explained this without showing this slide.

[Figure: search space shaped like a graph, with nodes A, B, C, D, G]
DFS: A, B, C, D, G
BFS: A, B, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.

Explained the techniques for handling repeated node expansion. Main points:
-- Repeated expansion is a bigger issue for DFS than for BFS.
-- Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible:
   -- space becomes exponential
   -- duplicate checking can also be exponential
-- Partial reduction in repeated expansion can be done by (see the sketch below):
   -- checking whether any child of a node n has the same state as the parent of n
   -- checking whether any child of a node n has the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
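A minimal sketch of the ancestor-checking idea (the graph, its node names, and the successor function below are hypothetical examples, not from the lecture):

```python
# Sketch: depth-first search that skips a child whose state matches its
# parent or any ancestor on the current path (at most d ancestors at depth d).
def dfs(state, goal, successors, path=()):
    if state == goal:
        return path + (state,)
    for child in successors(state):
        if child == state or child in path:  # parent/ancestor check
            continue                         # skip repeated expansion
        result = dfs(child, goal, successors, path + (state,))
        if result is not None:
            return result
    return None

# Hypothetical graph with a cycle A <-> B; the ancestor check prevents looping.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["G"], "D": [], "G": []}
print(dfs("A", "G", lambda s: graph.get(s, [])))  # ('A', 'C', 'G')
```

Keeping only the current path costs O(d) memory, in contrast to remembering every expanded node.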

Uniform Cost Search (on the “Bait & Switch” graph with nodes A, B, C, D, G)
Expansion order: N0: A(0); N1: B(1), N2: G(9); N3: C(2); N4: D(3); N5: G(5)
Completeness? Optimality? Branch & bound argument (as long as all operator costs are +ve). Efficiency? (as bad as blind search..)
Proof of optimality: next page.
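A minimal uniform-cost-search sketch. The edge costs are my reconstruction, chosen so the g-values match the expansion order on the slide (B(1), G(9), C(2), D(3), G(5)):

```python
import heapq

# Sketch: uniform cost search. The edge costs below are a reconstruction of
# the "bait & switch" graph (A-B:1, A-G:9, B-C:1, C-D:1, D-G:2).
graph = {"A": [("B", 1), ("G", 9)], "B": [("C", 1)],
         "C": [("D", 1)], "D": [("G", 2)], "G": []}

def uniform_cost_search(start, goal):
    frontier = [(0, start, (start,))]   # (g, state, path), ordered by g
    expanded = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:               # goal test at expansion time, which
            return g, path              # is what the optimality proof needs
        if state in expanded:
            continue
        expanded.add(state)
        for child, cost in graph[state]:
            heapq.heappush(frontier, (g + cost, child, path + (child,)))
    return None

print(uniform_cost_search("A", "G"))  # (5, ('A', 'B', 'C', 'D', 'G'))
```

Note that the G(9) node sits in the queue while the cheaper path through B, C, D is found: the bait is never taken.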

“Informing” Uniform search… (Bait & Switch graph: A, B, C, D, G)
Expansion order: N0: A(0); N1: B(0.1), N2: G(9); N3: C(0.2); N4: D(0.3); N5: G(25.3)
Would be nice if we could tell that N2 is better than N1.
-- Need to take not just the distance until now, but also the distance to the goal.
-- Computing the true distance to the goal is as hard as the full search.
-- So, try “bounds” h(n): prioritize nodes in terms of f(n) = g(n) + h(n).
Two bounds: h1(n) <= h*(n) <= h2(n). Which guarantees optimality? The lower bound h1(n) <= h*(n) (Admissibility). Given two lower bounds h1(n) <= h2(n) <= h*(n), which is the better function? h2 (Informedness).
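A sketch of the resulting algorithm, A*: the frontier is ordered by f(n) = g(n) + h(n). The edge costs are my reconstruction of this slide's values, and h is a hypothetical admissible heuristic (every h(n) is at most the true cost to the goal):

```python
import heapq

# Sketch: A* = best-first search on f(n) = g(n) + h(n). The graph is a
# reconstruction of the slide's values (A-B:0.1, A-G:9, B-C:0.1, C-D:0.1,
# D-G:25), and h is a hypothetical admissible heuristic with h(G) = 0.
graph = {"A": [("B", 0.1), ("G", 9)], "B": [("C", 0.1)],
         "C": [("D", 0.1)], "D": [("G", 25)], "G": []}
h = {"A": 8, "B": 20, "C": 20, "D": 20, "G": 0}  # assumed lower bounds on h*

def astar(start, goal):
    frontier = [(h[start], 0, start, (start,))]  # (f, g, state, path)
    expanded = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in expanded:
            continue
        expanded.add(state)
        for child, cost in graph[state]:
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + (child,)))
    return None

print(astar("A", "G"))  # (9, ('A', 'G'))
```

With these h-values, A* pops G at f = 9 before ever expanding B, which is exactly the “N2 is better than N1” judgment the slide asks for.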

Class of 6th September

(Reshowed the “Informing” Uniform search slide from the previous class.)

[Figure: Admissibility/Informedness. Heuristic functions h1, h2, h3, h4, h5 plotted against h*; Max(h2, h3) combines two admissible heuristics.]
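One practical consequence of the figure (a sketch; h2 and h3 stand for any two admissible heuristics): since max(h2(n), h3(n)) <= h*(n) whenever both arguments are, the max is still admissible and dominates both.

```python
# Sketch: combining admissible heuristics. If h2(n) <= h*(n) and
# h3(n) <= h*(n) for all n, then max(h2(n), h3(n)) <= h*(n) as well,
# so the combined heuristic is admissible and at least as informed.
def combine(*heuristics):
    return lambda n: max(h(n) for h in heuristics)
```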

Proof of Optimality of Uniform search
[Figure: search tree rooted at N0, with goal N expanded while another goal N' lies below node N'' on the queue]
Let N be the goal node we output. Suppose there is another goal node N'. We want to prove that g(N') >= g(N).
Suppose this is not true, i.e. g(N') < g(N) -- Assumption A1.
When N was picked up for expansion, either N' itself, or some ancestor of N', say N'', must have been on the search queue.
If we picked N instead of N'' for expansion, it was because g(N) <= g(N'') -- Fact f1.
But g(N') = g(N'') + dist(N'', N'), so g(N') >= g(N''). (This holds only because dist(N'', N') >= 0, which holds if every operator has +ve cost.)
So from f1, we have g(N) <= g(N'). But this contradicts our assumption A1.

Proof of Optimality of A* search
[Figure: same picture as the previous proof: goal N expanded while another goal N' lies below node N'' on the queue]
Let N be the goal node we output. Suppose there is another goal node N'. We want to prove that g(N') >= g(N).
Suppose this is not true, i.e. g(N') < g(N) -- Assumption A1.
When N was picked up for expansion, either N' itself, or some ancestor of N', say N'', must have been on the search queue.
If we picked N instead of N'' for expansion, it was because f(N) <= f(N'') -- Fact f1, i.e. g(N) + h(N) <= g(N'') + h(N'').
Since N is a goal node, h(N) = 0. So g(N) <= g(N'') + h(N'').
But g(N') = g(N'') + dist(N'', N'), and h(N'') <= h*(N'') <= dist(N'', N') (h is a lower bound).
So g(N') = g(N'') + dist(N'', N') >= g(N'') + h(N'') -- Fact f2. (This holds only because h(N'') is a lower bound on dist(N'', N').)
So from f1 and f2 we have g(N) <= g(N'). But this contradicts our assumption A1.
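The contradiction can be written as a single chain of inequalities (same notation as the slide):

```latex
% Assumption A1: g(N') < g(N). Chaining Fact f1 and Fact f2:
g(N) \le g(N'') + h(N'')                  % f1, using h(N) = 0 at the goal
     \le g(N'') + \mathrm{dist}(N'', N')  % h(N'') \le h^*(N'') \le dist(N'', N')
     =   g(N')                            % cost of reaching N' through N''
     <   g(N)                             % by A1 -- contradiction
```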

A* will not expand nodes with f > f* (where f* is the f-value of the optimal goal).

Where do heuristics (bounds) come from? From relaxed problems. (The more relaxed the problem, the easier the heuristic is to compute, but the less accurate it is.)
-- For path planning on the plane (with obstacles)? Assume away the obstacles; the distance is then the straight-line distance.
-- For the 8-puzzle problem? Assume the ability to move a tile directly to its place: distance = # misplaced tiles. Or assume the ability to move only one position at a time: distance = sum of Manhattan distances.
-- For the traveling salesperson? Relax the “circuit” requirement: minimum spanning tree.
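A sketch of the two 8-puzzle heuristics (the 9-tuple, row-major state representation with 0 as the blank is my assumption, not from the slides):

```python
# Sketch: the two relaxation heuristics for the 8-puzzle. States are
# assumed to be 9-tuples in row-major order with 0 as the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    # Relaxation: a tile can jump directly to its place.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    # Relaxation: a tile moves one square at a time, through other tiles.
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

s = (1, 2, 3, 4, 5, 6, 0, 7, 8)          # tiles 7 and 8 are one square off
print(misplaced_tiles(s), manhattan(s))  # 2 2
```

For any state, misplaced_tiles(s) <= manhattan(s) <= h*(s), so Manhattan distance is the more informed of the two admissible heuristics.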

IDA* to handle the A* memory problem
Basically IDDFS, except instead of the iterations being defined in terms of depth, we define them in terms of f-value (see the sketch below):
– Start with the f cutoff equal to the f-value of the root node.
– Loop: generate and search all nodes whose f-values are less than or equal to the current cutoff.
   – Use depth-first search to search the trees in the individual iterations.
   – Keep track of the node N' which has the smallest f-value that is still larger than the current cutoff. Let this f-value be next-largest-f-value.
– If the search finds a goal node, terminate. If not, set cutoff = next-largest-f-value and go back to Loop.
Properties: linear memory. #Iterations in the worst case? = b^d !! (happens when all nodes have distinct f-values)
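A minimal IDA* sketch following the loop above (it reuses the reconstructed bait-and-switch graph and the hypothetical heuristic from the A* sketch earlier):

```python
import math

# Sketch: IDA* -- IDDFS with the cutoff on f = g + h instead of depth.
# Reuses the reconstructed graph and hypothetical heuristic from the A* sketch.
graph = {"A": [("B", 0.1), ("G", 9)], "B": [("C", 0.1)],
         "C": [("D", 0.1)], "D": [("G", 25)], "G": []}
h = {"A": 8, "B": 20, "C": 20, "D": 20, "G": 0}

def ida_star(start, goal):
    cutoff = h[start]                    # f-value of the root node
    while cutoff < math.inf:
        next_cutoff = math.inf           # smallest f seen above the cutoff

        def dfs(state, g, path):
            nonlocal next_cutoff
            f = g + h[state]
            if f > cutoff:               # prune, but remember this f-value
                next_cutoff = min(next_cutoff, f)
                return None
            if state == goal:
                return path
            for child, cost in graph[state]:
                if child not in path:    # avoid cycles on the current path
                    found = dfs(child, g + cost, path + (child,))
                    if found is not None:
                        return found
            return None

        result = dfs(start, 0, (start,))
        if result is not None:
            return result
        cutoff = next_cutoff             # next-largest-f-value
    return None

print(ida_star("A", "G"))  # ('A', 'G'), found in the cutoff = 9 iteration
```

Memory stays linear in the depth of the current path; the price is regenerating earlier iterations' nodes in each new pass.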