Lecture 8: Informed Search Dr John Levine 52236 Algorithms and Complexity February 27th 2006.



Notes on Practical 1
For Q1, try using System.nanoTime() instead of System.currentTimeMillis()
For Q2, for the cubic algorithm, use the code provided (see next slide)
For Q2, you are asked to find an O(n²) algorithm by cutting out repeated computation
For Q2, the best algorithm is O(n): I don't expect you to find this, but if you do, extra credit will be given!
Deadline: 5pm, Friday March 17th 2006
Please hand in a printed copy to my office (L13.21)
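As a quick illustration of the System.nanoTime() suggestion, here is a minimal timing sketch; the class and method names are illustrative, not part of the practical handout.

```java
// Minimal sketch of timing a run with System.nanoTime() rather than
// System.currentTimeMillis(), which ticks too coarsely for fast algorithms.
// TimingSketch and timeRun are illustrative names, not from the practical.
public class TimingSketch {
    static long timeRun(Runnable task) {
        long start = System.nanoTime();    // high-resolution timestamp in ns
        task.run();
        return System.nanoTime() - start;  // elapsed wall-clock time in ns
    }

    public static void main(String[] args) {
        long elapsed = timeRun(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        System.out.println("elapsed: " + elapsed + " ns");
    }
}
```

For stable measurements, time several runs and take the minimum, since the first run pays JIT warm-up costs.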

Code for Practical 1, Q2

// seqStart and seqEnd are fields of the enclosing class, recording
// where the best subsequence starts and ends
public static int maxSubsequenceSumCubic( int [ ] a ) {
    int maxSum = 0;
    for( int i = 0; i < a.length; i++ )
        for( int j = i; j < a.length; j++ ) {
            int thisSum = 0;
            for( int k = i; k <= j; k++ )
                thisSum += a[ k ];
            if( thisSum > maxSum ) {
                maxSum = thisSum;
                seqStart = i;
                seqEnd = j;
            }
        }
    return maxSum;
}

This algorithm is O(n³).

Assignment 2 (from last week)
This is a pen-and-paper exercise, done in the same groups as for Practical 1
Your answer can be typeset or hand-written
There are three questions on searching, covering material from Lectures 6, 7 and 8 (search)
Deadline: 5pm, Monday 20th March 2006
Please hand in a printed or hand-written copy to my office (L13.21)

Question 1
Consider the following puzzle. You have 8 sticks, lined up as shown below. By moving exactly 4 sticks you are to achieve the configuration shown at (b).
[Figure: stick configurations (a) and (b)]

Question 1
Each move consists of picking up a stick, jumping over exactly two other sticks, and then putting it down onto a fourth stick. In the figure below, stick e could be moved onto stick b or h, whereas stick f cannot move at all:
a b c d e f g h

Question 1
a. Decide on a notation for representing states which you could use in state space search.
b. Using this notation, map out the search space (states which are essentially identical due to symmetry need not be included).
c. Apply the uninformed search algorithms (depth-first and breadth-first search) to the search tree. How many nodes do they search? How large does the agenda grow for each form of search?
d. Why is the problem solved so much more easily if you search backwards from the goal state?

Question 2
A farmer needs to get a fox, a chicken and some corn across a river using a boat. The boat is small and leaky and can only take the farmer and one item of cargo. If the farmer leaves the fox and the chicken together, the fox will eat the chicken. If the farmer leaves the chicken and corn together, the chicken will eat the corn. Leaving the fox and corn together is fine. How can the farmer get all three items across the river?

Question 2
a. Decide on a notation for representing states which you could use in state space search.
b. Give details of all the move operators for the search. Make sure that the operators only allow legal moves (i.e. ones which are physically possible and result in states in which no item of cargo gets eaten).
c. Using your representation and operators, map out the search space for this problem. Is there more than one possible solution to this problem?

Question 3
Consider the following game for two players:
There are 8 sticks on the table
Players take turns at removing 1, 2 or 3 sticks
The aim is to make your opponent take the last stick

Question 3
a. Draw out the game tree for 8 sticks. Is this a win for the first player or the second player?
b. If the first player's first move is to take 2 sticks, what move should the second player make?
c. What should the first player's strategy be in the case where there are N sticks in the start state? For what values of N can the second player achieve a win?

The story so far
Lecture 1: why algorithms are so important
Lecture 2: Big-Oh notation, a first look at different complexity classes (log, linear, log-linear, quadratic, polynomial, exponential, factorial)
Lecture 3: simple search in a list of items is O(n) with unordered data but O(log n) with ordered data, and can be even faster with a good indexing scheme and parallel processors
Lecture 4: sorting: random sort O(n!), naïve sort is O(n²), bubblesort is O(n²), quicksort is O(n log n)

The story so far
Lecture 5: more on sorting: why comparison sort is O(n log n), doing better than O(n log n) by not doing comparisons (e.g. bucket sort)
Lecture 6: harder search: how to represent a problem in terms of states and moves
Lecture 7: uninformed search through states using an agenda: depth-first search and breadth-first search
Lecture 8: making it smart: informed search using heuristics; how to use heuristic search without losing optimality – the A* algorithm


State Space Search
Problem solving using state space search consists of the following four steps:
1. Design a representation for states (including the initial state and the goal state)
2. Characterise the operators in terms of when they apply (preconditions) and the states they generate (effects)
3. Build a goal state recogniser
4. Search through the state space somehow by considering (in some or other order) the states reachable from the initial and goal states
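To make the first three steps concrete, here is a minimal sketch using a hypothetical toy domain of my own (states are integers, the operators are "+1" and "×2", and the goal is to reach 12); all names here are illustrative.

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

// Toy illustration of steps 1-3: states are plain integers (step 1),
// operators are functions from states to states (step 2), and the goal
// recogniser is a simple predicate (step 3). Step 4 is the search itself,
// done with an agenda as described later in the lecture.
public class TinyStateSpace {
    static final int INITIAL = 3;
    static final List<IntUnaryOperator> OPERATORS =
            List.of(n -> n + 1, n -> n * 2);   // effects of the two moves

    static boolean isGoal(int state) {
        return state == 12;
    }

    public static void main(String[] args) {
        // expanding the initial state 3 yields its successors 4 and 6
        for (IntUnaryOperator op : OPERATORS)
            System.out.println("successor: " + op.applyAsInt(INITIAL));
    }
}
```

In a real domain the state type would be richer (e.g. block stackings), and each operator would also carry a precondition saying when it applies.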

Example: Blocks World
A "classic" problem in AI planning
The aim is to rearrange the blocks using the single robot arm so that the configuration in the goal state is achieved
An optimal solution performs the transformation using as few steps as possible
Any solution: linear complexity
Optimal solution: exponential complexity (NP-hard)
[Figure: initial and goal block configurations with blocks A, B and C]

Search Spaces
The search space of a problem is implicit in its formulation
– You search the space of your representations
We generate the space dynamically during search (including loops, dead ends and branches)
Operators are move generators
We can represent the search space with trees
Each node in the tree is a state
When we call NextStates(S0) → [S1, S2, S3], we say we have expanded S0

Depth-First Search
[Figure: search tree with nodes S0–S10, expanded in depth-first order]

Breadth-First Search
[Figure: search tree with nodes S0–S14, expanded level by level in breadth-first order]

Searching Using an Agenda
When we expand a node we get multiple new nodes to expand, but we can only expand one at a time
We keep track of the nodes still to be expanded using a data structure called an agenda
When it is time to expand a new node, we choose the first node from the agenda
As new states are discovered, we add them to the agenda somehow
We can get different styles of search by altering how the states get added

Depth-First Search
To get depth-first search, add the new nodes onto the start of the agenda (treat the agenda as a stack):

let Agenda = [S0]
while Agenda ≠ [ ] do
    let Current = First (Agenda)
    let Agenda = Rest (Agenda)
    if Goal (Current) then return ("Found it!")
    let Next = NextStates (Current)
    let Agenda = Next + Agenda
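The pseudocode translates almost line-for-line into Java with the agenda as a stack. This is a sketch using a toy domain of my own (a binary tree over the integers), not code from the lecture; note that, like the pseudocode, it does no cycle checking, so it assumes the search space is a tree.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Agenda-based depth-first search: new nodes go onto the FRONT of the agenda.
public class DepthFirst {
    static <S> S search(S s0, Function<S, List<S>> nextStates, Predicate<S> goal) {
        Deque<S> agenda = new ArrayDeque<>();
        agenda.push(s0);                              // let Agenda = [S0]
        while (!agenda.isEmpty()) {                   // while Agenda != [] do
            S current = agenda.pop();                 // Current = First(Agenda), Agenda = Rest(Agenda)
            if (goal.test(current)) return current;   // "Found it!"
            List<S> next = nextStates.apply(current);
            for (int i = next.size() - 1; i >= 0; i--)
                agenda.push(next.get(i));             // Agenda = Next + Agenda (order preserved)
        }
        return null;                                  // no goal state reachable
    }

    public static void main(String[] args) {
        // toy domain: states 0..14 form a binary tree, children of n are 2n+1 and 2n+2
        Function<Integer, List<Integer>> next =
                n -> n < 7 ? List.of(2 * n + 1, 2 * n + 2) : List.of();
        System.out.println(search(0, next, n -> n == 9));  // prints 9
    }
}
```

The search dives down the leftmost branch (0, 1, 3, 7, …) before backtracking, which is exactly the stack discipline at work.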

Breadth-First Search
To get breadth-first search, add the new nodes onto the end of the agenda (treat the agenda as a queue):

let Agenda = [S0]
while Agenda ≠ [ ] do
    let Current = First (Agenda)
    let Agenda = Rest (Agenda)
    if Goal (Current) then return ("Found it!")
    let Next = NextStates (Current)
    let Agenda = Agenda + Next
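For comparison, the breadth-first version only changes where new nodes join the agenda. Again a sketch with the same toy binary-tree domain (my own illustration, not lecture code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Agenda-based breadth-first search: new nodes go onto the END of the agenda,
// so the agenda behaves as a FIFO queue and states are visited level by level.
public class BreadthFirst {
    static <S> S search(S s0, Function<S, List<S>> nextStates, Predicate<S> goal) {
        Deque<S> agenda = new ArrayDeque<>(List.of(s0));  // let Agenda = [S0]
        while (!agenda.isEmpty()) {
            S current = agenda.pollFirst();               // still take from the front...
            if (goal.test(current)) return current;
            agenda.addAll(nextStates.apply(current));     // ...but add new nodes at the end
        }
        return null;                                      // no goal state reachable
    }

    public static void main(String[] args) {
        // same toy domain as before: children of n are 2n+1 and 2n+2 (for n < 7)
        Function<Integer, List<Integer>> next =
                n -> n < 7 ? List.of(2 * n + 1, 2 * n + 2) : List.of();
        System.out.println(search(0, next, n -> n == 5));  // prints 5
    }
}
```

Because whole levels are explored before going deeper, breadth-first search always finds a shallowest goal node first.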

Heuristic Search
DFS and BFS are both searching blind – they search all possibilities in an order dictated by NextStates(Si)
When people search, we look in the most promising places first – try { [a], [b], [c] } → { [a, b, c] }
There are six possible moves, but somehow it seems like the best move is move(b,c), giving { [a], [b, c] }
The most promising states are often those which are closest to the goal state, G
But how can we know how close we are to the goal state?

Heuristic Search
We can often estimate the distance from Si to G by using a heuristic function, f(Si,G)
The function efficiently compares the two states and tries to get an estimate of how many moves remain, without doing any searching
For example, in the blocks world: all blocks that are stacked up in the correct place never have to move again; all blocks that need to move and are on the table only need to move once; and all other blocks need to move at most twice:
f(Si,G) = 2·Bbad + 1·Btable + 0·Bgood
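As code, the slide's estimate is just a weighted count. This sketch assumes the caller has already classified the blocks into the three categories; the classification itself depends on the chosen state representation, so it is left out here.

```java
// The blocks-world estimate f(Si,G) = 2*Bbad + 1*Btable + 0*Bgood.
// Counting the three categories depends on the state representation;
// here we assume the counts are supplied by the caller.
public class BlocksHeuristic {
    static int estimate(int badBlocks, int tableBlocks, int goodBlocks) {
        // badBlocks:   wrongly stacked, may need to move twice (once to the table)
        // tableBlocks: misplaced but on the table, move exactly once
        // goodBlocks:  correctly stacked, never move again
        return 2 * badBlocks + 1 * tableBlocks + 0 * goodBlocks;
    }

    public static void main(String[] args) {
        // one wrongly stacked block, two misplaced table blocks, three good blocks:
        System.out.println(BlocksHeuristic.estimate(1, 2, 3));  // prints 4
    }
}
```

Since no block ever needs fewer moves than this count assumes, the estimate never overshoots the true remaining distance, a property that matters for A* later in the lecture.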

Heuristic Function Exercise
Consider a simple transportation problem:
There are cities {a, b, c, …}, packages {p1, p2, p3, …}, and a truck (called tr1)
Example state: { at(p1,a), at(p2,a), at(p3,c), at(tr1,b) }
The operators are load(p), unload(p), and drive(x,y)
Try to invent a heuristic function for estimating the distance between any two states
Try it out on the above state and the goal state { at(p1,b), at(p2,b), at(p3,b) }

Best-First Search
If we have a decent heuristic function, we should use it to direct the search somehow
We still need to keep track of the nodes we haven't yet expanded, using the agenda
As new states are discovered, we add them to the agenda and record the value of the heuristic function
When we pick the next node to explore, we choose the one which has the lowest value for the heuristic function (i.e. the one that looks nearest to the goal)

Best-First Search
[Figure: search tree in which best-first search expands only the most promising nodes]

Best-First Search
To get best-first search, pick the best node on the agenda as the one to be explored next:

let Agenda = [S0]
while Agenda ≠ [ ] do
    let Current = Best (Agenda)
    let Agenda = Agenda − [Current]
    if Goal (Current) then return ("Found it!")
    let Next = NextStates (Current)
    let Agenda = Agenda + Next
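In Java, "Best (Agenda)" falls out naturally if the agenda is a priority queue ordered by the heuristic. This is a sketch with a toy domain of my own (walking along the integers toward a target, with distance-to-target as the heuristic); like the pseudocode, it does no duplicate detection.

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.ToIntFunction;

// Best-first search: the agenda is a priority queue keyed on the heuristic
// f(Si,G), so polling the queue removes the best (lowest-estimate) node.
public class BestFirst {
    static <S> S search(S s0, Function<S, List<S>> nextStates,
                        Predicate<S> goal, ToIntFunction<S> heuristic) {
        PriorityQueue<S> agenda =
                new PriorityQueue<>(Comparator.comparingInt(heuristic));
        agenda.add(s0);
        while (!agenda.isEmpty()) {
            S current = agenda.poll();                // Best(Agenda), then removed
            if (goal.test(current)) return current;
            agenda.addAll(nextStates.apply(current)); // new nodes ranked by heuristic
        }
        return null;
    }

    public static void main(String[] args) {
        // toy domain: from any integer n you can move to n-1 or n+1;
        // the heuristic is the distance to the target, 5
        Function<Integer, List<Integer>> next = n -> List.of(n - 1, n + 1);
        System.out.println(search(0, next, n -> n == 5, n -> Math.abs(5 - n)));  // prints 5
    }
}
```

The heuristic pulls the search straight toward 5, even though the state space is infinite in both directions; blind DFS would never terminate here.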

Best-First Search and Algorithm A
Best-first search can speed up the search by a very large factor, but it isn't guaranteed to return the shortest solution
When deciding which node to expand, we need to take account of how long the path is so far, and add that on to the heuristic value:
h(Si,G) = g(S0,Si) + f(Si,G)
This gives a search which has elements of both breadth-first search and best-first search
This type of search is called "Algorithm A"

Algorithm A*
If f(Si,G) always underestimates the distance from Si to the goal, it is called an admissible heuristic
If f(Si,G) is admissible, then Algorithm A will always return the shortest path (like breadth-first search) but will avoid much of the work if the heuristic function is informative
The use of an admissible heuristic turns Algorithm A into Algorithm A*
Uses: problem solving, route finding, path planning in robotics, computer games, etc.
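Putting the pieces together, here is a compact A* sketch on a small unit-cost graph. The graph, names and the h = 0 heuristic (which is trivially admissible, reducing A* to breadth-first-like behaviour) are my own toy assumptions; the priority of each node is the slide's h(Si,G) = g(S0,Si) + f(Si,G).

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.function.IntUnaryOperator;

// A* on a directed graph with unit edge costs: nodes carry the path cost g
// so far, and the agenda is ordered by priority = g + f(state).
public class AStar {
    record Node(int state, int g, int priority) {}

    static int search(Map<Integer, List<Integer>> graph, int start, int goalState,
                      IntUnaryOperator f) {
        PriorityQueue<Node> agenda =
                new PriorityQueue<>(Comparator.comparingInt(Node::priority));
        Map<Integer, Integer> bestG = new HashMap<>();  // cheapest known g per state
        agenda.add(new Node(start, 0, f.applyAsInt(start)));
        while (!agenda.isEmpty()) {
            Node current = agenda.poll();
            if (current.state() == goalState) return current.g();  // shortest path length
            if (current.g() > bestG.getOrDefault(current.state(), Integer.MAX_VALUE))
                continue;                                // stale entry, skip it
            for (int next : graph.getOrDefault(current.state(), List.of())) {
                int g = current.g() + 1;                 // unit edge cost
                if (g < bestG.getOrDefault(next, Integer.MAX_VALUE)) {
                    bestG.put(next, g);
                    agenda.add(new Node(next, g, g + f.applyAsInt(next)));
                }
            }
        }
        return -1;  // goal unreachable
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> graph = Map.of(
                0, List.of(1, 2), 1, List.of(3), 2, List.of(3), 3, List.of(4));
        System.out.println(search(graph, 0, 4, s -> 0));  // prints 3
    }
}
```

With an informative admissible heuristic in place of s -> 0, the same code expands far fewer nodes while still returning the optimal path length.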