CSE 511a: Artificial Intelligence Spring 2012


CSE 511a: Artificial Intelligence Spring 2012 Lecture 2: Queue-Based Search 1/16/2012 Robert Pless – Wash U. Multiple slides over the course adapted from Kilian Weinberger, Dan Klein (or Stuart Russell or Andrew Moore)

Announcements Project 0 (Python Tutorial) is posted, due Thursday, Jan 24, midnight. Don’t delay: project 1 will be coming soon.

Question from last class How many humans fail the Turing test? Jon Stewart interview, "The Most Human Human". The Loebner contest pits person against program: judges converse with a human and a chatbot, then have to name the human. The best current chatbot wins 29% of the time (so humans lose 29% of the time). CAPTCHAs are very weak, computer-graded Turing tests; human success rates are 75–95%.

Today Agents that Plan Ahead Search Problems Uninformed Search Methods (part review for some) Depth-First Search Breadth-First Search Uniform-Cost Search Heuristic Search Methods (new for all) Greedy Search But first, Pacman demo

Review What is a rational Agent?

Review What is a rational Agent? Agent acts to achieve the best expected outcome given its current knowledge about the world.

Reflex Agents Reflex agents: choose an action based on the current percept; may have memory or a model of the world’s current state; do not consider the future consequences of their actions; act on how the world IS. Programs for reflex agents may often be characterized as lookup tables: IF (situation A) THEN (action b), or look at situation A and compute action b. Can a reflex agent be rational? Equivalently: can a lookup table maximize your expected value? Only for finite, simple worlds. For any finite world (tic-tac-toe, say) it is easy to make this lookup table perfect. For the general world, the table is way too big, and reflex behaviors are likely to look foolish.

Goal Based Agents Goal-based agents: Plan ahead Ask “what if” Decisions based on (hypothesized) consequences of actions Must have a model of how the world evolves in response to actions Act on how the world WOULD BE

Search Problems A search problem consists of: a state space, a successor function, a start state and a goal test, and a cost function (for now we set cost = 1 for all steps). A solution is a sequence of actions (a plan) which transforms the start state into a goal state. Where are the problems you might have to solve in general? HOW do you represent the state space? What is your data structure? If the state space is a grid of dots plus a Pacman position, (array + x, y) is an obvious choice. For many other problems, this is itself hard. (It is often less hard for games: games *define* simple state spaces, and often suggest a way to represent them.)
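These components can be written down as a tiny interface (a hypothetical sketch, not the course project's actual API; the class and method names are made up):

```python
class SearchProblem:
    """Abstract search problem: state space, successors, start, goal, costs."""
    def start_state(self):
        raise NotImplementedError
    def is_goal(self, state):
        raise NotImplementedError
    def successors(self, state):
        """Yield (action, next_state, step_cost) triples."""
        raise NotImplementedError

class GridProblem(SearchProblem):
    """Toy grid world: move N/S/E/W on a w x h grid, unit step cost."""
    def __init__(self, w, h, start, goal):
        self.w, self.h, self.start, self.goal = w, h, start, goal
    def start_state(self):
        return self.start
    def is_goal(self, state):
        return state == self.goal
    def successors(self, state):
        x, y = state
        for action, (dx, dy) in [("N", (0, 1)), ("S", (0, -1)),
                                 ("E", (1, 0)), ("W", (-1, 0))]:
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.w and 0 <= ny < self.h:
                yield (action, (nx, ny), 1.0)
```

The point of the interface is that every search algorithm in this lecture only needs these three operations; the grid subclass is just the "obvious choice" representation mentioned above.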

Example: Romania State space: cities. Successor function: go to an adjacent city, with cost = distance. Start state: Arad. Goal test: is state == Bucharest? Solution?

State Space Sizes? Search problem: eat all of the food. Pacman positions: 10 x 12 = 120. Food count: 30. Ghost positions: 12. Pacman facing: up, down, left, right. 90 * 2^30 + 30 * 2^29 ≈ 113 billion (2^29 = 536,870,912): 90 positions where Pacman is not on a food-dot spot, times the 2^30 ways the food dots can be arranged, plus 30 positions where Pacman *is* on a food-dot spot, times the 2^29 arrangements of the other dots. If you’re on a food-dot spot then that spot has no food, so there are only 2^29 choices for the remaining food.

Tree Search

State Space Graphs State space graph: a mathematical representation of a search problem. Every search problem has a corresponding state space graph. The successor function is represented by arcs. We can rarely build this graph in memory (so we don’t). (Figure: a ridiculously tiny search graph for a tiny search problem, with nodes S, a, b, c, d, e, f, h, p, q, r, G.)

State Space Sizes? Search problem: eat all of the food. Pacman positions: 10 x 12 = 120. Food count: 30. Ghost positions: 12. Pacman facing: up, down, left, right. 90 * 2^30 + 30 * 2^29 ≈ 113 billion (2^29 = 536,870,912): 90 options when Pacman is NOT on a food spot, times the 2^30 ways the food spots could be filled (or not), plus the 30 food spots Pacman could be on, times the 2^29 ways the remaining food spots could be filled (or not). There are also (12 choose 2) ghost positions and 4 ways Pacman could be oriented, BUT these last considerations can be ignored: Pacman’s orientation does NOT affect the available actions (it only helps us watch the screen), and the ghosts are stuck in a place where they can’t get at Pacman anyway.
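The arithmetic in the counting argument above can be checked directly, using the slide's counts of 120 positions and 30 food dots:

```python
# Reachable (position, food-set) states, per the slide's argument:
#   90 non-food positions, each with any of 2**30 food configurations, plus
#   30 food positions, each with 2**29 configurations of the remaining dots
#   (standing on a dot means that dot is already eaten).
non_food = 90 * 2**30
on_food = 30 * 2**29
total = non_food + on_food
print(total)  # 112742891520, i.e. roughly 113 billion states
```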

Search Trees A search tree: this is a “what if” tree of plans and outcomes. The start state is at the root node. Children correspond to successors. Nodes contain states and correspond to PLANS to reach those states. For most problems, we can never actually build the whole tree.

Another Search Tree Search: Expand out possible plans Maintain a fringe of unexpanded plans Try to expand as few tree nodes as possible

General Tree Search Important ideas: fringe, expansion, exploration strategy. Main question: which fringe nodes to explore? Detailed pseudocode is in the book!
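A rough Python rendering of that pseudocode (a sketch, not the book's exact version; here the fringe is a FIFO queue, which gives breadth-first behavior, and the toy graph and names are made up):

```python
from collections import deque

def bfs_tree_search(start, is_goal, successors):
    """Generic tree search with a FIFO fringe of (state, plan) pairs.
    Swapping the fringe discipline (stack, priority queue) changes
    the strategy without touching this loop."""
    fringe = deque([(start, ())])
    while fringe:
        state, plan = fringe.popleft()     # choose a fringe node to expand
        if is_goal(state):
            return plan                    # sequence of actions to the goal
        for action, next_state in successors(state):
            fringe.append((next_state, plan + (action,)))
    return None                            # fringe exhausted: no solution

# Tiny acyclic example graph, so plain tree search terminates:
graph = {"S": [("to_a", "a"), ("to_b", "b")],
         "a": [("to_G", "G")],
         "b": [],
         "G": []}
plan = bfs_tree_search("S", lambda s: s == "G", lambda s: graph[s])
print(plan)  # ('to_a', 'to_G')
```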

Example: Tree Search S G d b p q c e h a f r

State Graphs vs. Search Trees Each NODE in the search tree is an entire PATH in the problem graph. We construct both on demand, and we construct as little as possible.

States vs. Nodes Nodes in state space graphs are problem states: they represent an abstracted state of the world; they have successors, can be goal / non-goal, and can have multiple predecessors. Nodes in search trees are plans: each represents a plan (a sequence of actions) which results in the node’s state; each has a problem state and one parent, a path length, a depth, and a cost. The same problem state may be achieved by multiple search tree nodes.
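One way to realize this distinction in code (a sketch; the class and field names are made up, and the Romania step costs follow the textbook map):

```python
class Node:
    """A search tree node: a problem state plus the bookkeeping that
    makes it a plan (parent pointer, action, depth, path cost)."""
    def __init__(self, state, parent=None, action=None, step_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

    def plan(self):
        """Recover the action sequence by walking parent pointers."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

# The same problem state (Sibiu) reached by two different tree nodes:
root = Node("Arad")
n1 = Node("Sibiu", root, "go(Sibiu)", 140)
via_z = Node("Zerind", root, "go(Zerind)", 75)
via_o = Node("Oradea", via_z, "go(Oradea)", 71)
n2 = Node("Sibiu", via_o, "go(Sibiu)", 151)
print(n1.path_cost, n2.path_cost)  # 140.0 297.0
```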

Review: Depth First Search Strategy: expand the deepest node first. Implementation: the fringe is a LIFO stack.

Review: Breadth First Search Strategy: expand the shallowest node first. Implementation: the fringe is a FIFO queue, explored in search tiers.

Analysis of Search Algorithms [demo]

Search Algorithm Properties Complete? Guaranteed to find a solution if one exists? Optimal? Guaranteed to find the least-cost path? Time complexity? Space complexity? Variables: n = number of states in the problem; b = average branching factor (average number of successors); C* = cost of the least-cost solution; s = depth of the shallowest solution; m = max depth of the search tree.

DFS Algorithm: DFS (Depth First Search). Complete: N. Optimal: N. Time: O(b^LMAX). Space: O(LMAX). Here LMAX is the length of the longest path, so both bounds are infinite when the tree has infinite paths. Infinite paths make DFS incomplete… How can we fix this? (Figure: START and GOAL, with a cycle through nodes a and b.)

DFS With cycle checking, DFS is complete.* The tree has 1 node at the root, then b nodes, b^2 nodes, … down m tiers to b^m nodes; the stack holds at most b nodes in each of the m layers. Algorithm: DFS w/ Path Checking. Complete: Y. Optimal: N. Time: O(b^(m+1)). Space: O(bm). * Or graph search – next lecture.
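The path-checking fix can be sketched in a few lines (assuming a dict-of-lists graph; the names are made up):

```python
def dfs_path_checking(graph, start, goal):
    """DFS with a LIFO stack of paths; successors already on the
    current path are skipped, so cycles cannot trap the search."""
    stack = [[start]]                 # fringe holds whole paths
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        for succ in graph.get(state, []):
            if succ not in path:      # path checking: no revisits on this path
                stack.append(path + [succ])
    return None

# A graph with a cycle a <-> b; plain DFS could loop forever here.
graph = {"S": ["a"], "a": ["b"], "b": ["a", "G"]}
print(dfs_path_checking(graph, "S", "G"))  # ['S', 'a', 'b', 'G']
```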

BFS When is BFS optimal? Algorithm comparison: DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm). BFS: Complete Y, Optimal N* (yes when all step costs are equal), Time O(b^(s+1)), Space O(b^(s+1)). The tree has 1 node, then b nodes, b^2 nodes, … down s tiers to b^s nodes (and b^m at the bottom). BFS could have almost all of the tier-s nodes on the frontier at once and has to keep track of them, hence the b^s space.

Comparisons When will BFS outperform DFS? When the solution is nearby (shallow) and deep or infinite paths would lead DFS astray. When will DFS outperform BFS? When many paths lead to a solution, the solutions are deep, and memory is limited.

Iterative Deepening Iterative deepening uses DFS as a subroutine: (1) Do a DFS which only searches for paths of length 1 or less. (2) If “1” failed, do a DFS which only searches paths of length 2 or less. (3) If “2” failed, do a DFS which only searches paths of length 3 or less. … and so on. Idea: *if* growth is exponential, the cost of the last step dominates, so you can re-do all previous work “for free” (if the base b is 2, the last iteration costs about as much as all the previous ones combined). Algorithm comparison: DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm). BFS: Complete Y, Optimal N*, Time O(b^(s+1)), Space O(b^(s+1)). ID: Complete Y, Optimal N*, Time O(b^(s+1)), Space O(bs).
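A sketch of iterative deepening over a dict-based graph (hypothetical names; each iteration re-runs a depth-limited DFS from scratch):

```python
def depth_limited(graph, path, goal, limit):
    """DFS restricted to paths of at most `limit` edges, with path checking."""
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(state, []):
        if succ not in path:                     # path checking
            found = depth_limited(graph, path + [succ], goal, limit - 1)
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Re-run depth-limited DFS with limits 1, 2, 3, ...; exponential
    growth means the last iteration dominates the total cost."""
    for limit in range(1, max_depth + 1):
        found = depth_limited(graph, [start], goal, limit)
        if found:
            return found
    return None

graph = {"S": ["a", "b"], "a": ["c"], "b": ["G"], "c": []}
print(iterative_deepening(graph, "S", "G"))  # ['S', 'b', 'G']
```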

Cost sensitive search

Costs on Actions Notice that BFS finds the shortest path in terms of the number of transitions. It does not find the least-cost path. We will quickly cover an algorithm which does find the least-cost path. (Figure: the example graph with action costs 1–15 on the edges.)

Uniform Cost Search Expand the cheapest node first: the fringe is a priority queue ordered by path cost. (Figure: the example graph with edge costs; expansion proceeds outward in cost contours, with the cumulative cost labeled at each node.)

Priority Queue Refresher A priority queue is a data structure in which you can insert and retrieve (key, value) pairs with the following operations: pq.push(key, value) inserts (key, value) into the queue; pq.pop() returns the key with the lowest value and removes it from the queue. You can decrease a key’s priority by pushing it again. Unlike a regular queue, insertions aren’t constant time: usually O(log n). We’ll need priority queues for cost-sensitive search methods.
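Python's standard-library heapq module can back such a queue (a minimal sketch; the push-again trick stands in for a true decrease-key operation):

```python
import heapq

class PriorityQueue:
    """Minimal priority queue on top of heapq; lowest value pops first."""
    def __init__(self):
        self.heap = []
        self.count = 0                 # tie-breaker keeps equal-value pushes FIFO
    def push(self, key, value):
        heapq.heappush(self.heap, (value, self.count, key))  # O(log n)
        self.count += 1
    def pop(self):
        value, _, key = heapq.heappop(self.heap)
        return key
    def is_empty(self):
        return not self.heap

pq = PriorityQueue()
pq.push("plan_a", 5.0)
pq.push("plan_b", 2.0)
pq.push("plan_b", 1.0)     # "decrease key" by pushing again
print(pq.pop(), pq.pop())  # plan_b plan_b  (the stale 2.0 entry pops second)
```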

Uniform Cost Search Algorithm comparison: DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm). BFS: Complete Y, Optimal N, Time O(b^(s+1)), Space O(b^(s+1)). UCS: Complete Y*, Optimal Y, Time O(b^(1+C*/ε)), Space O(b^(1+C*/ε)), where ε is the minimum action cost: the search explores C*/ε cost tiers. * UCS can fail if actions can get arbitrarily cheap. All of these are bounded also by the number of possible states, N, which isn’t in the charts above and is assumed to be huge.
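A compact UCS sketch over a dict-based graph (hypothetical names; the stale-entry check stands in for decrease-key on the priority queue):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: priority queue keyed on path cost g; pop cheapest first.
    graph maps state -> list of (successor, step_cost) pairs."""
    fringe = [(0.0, start, [start])]          # (g, state, path)
    best_g = {}                               # cheapest g at which each state was expanded
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                          # stale, costlier entry: skip
        best_g[state] = g
        for succ, cost in graph.get(state, []):
            heapq.heappush(fringe, (g + cost, succ, path + [succ]))
    return None

graph = {"S": [("a", 1.0), ("b", 4.0)],
         "a": [("b", 1.0)],
         "b": [("G", 1.0)]}
print(uniform_cost_search(graph, "S", "G"))  # (3.0, ['S', 'a', 'b', 'G'])
```

Note how the direct S-to-b edge of cost 4 is beaten by the cheaper detour through a, which BFS would never find.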

Search Heuristics

Uniform Cost Issues Remember: UCS explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, …). The good: UCS is complete and optimal! The bad: it explores options in every “direction” and has no information about the goal location. [demo] python pacman.py -l contoursMaze -p SearchAgent -a fn=ucs --frameTime -1 python pacman.py -p SearchAgent -a fn=ucs -l smallMaze --frameTime -1

Search Heuristics A heuristic is any estimate of how close a state is to a goal, designed for a particular search problem. Examples: Manhattan distance, Euclidean distance.
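The two example heuristics, sketched for (x, y) grid states:

```python
import math

def manhattan(state, goal):
    """Manhattan distance: appropriate when moves go in 4 grid directions."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean(state, goal):
    """Euclidean (straight-line) distance."""
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

Both are cheap to compute and never overestimate the true grid-path cost, which matters for the optimal heuristic search methods previewed at the end of the lecture.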

Heuristics

Best First / Greedy Search Expand the node that seems closest… What can go wrong?

Best First / Greedy Search A common case: best-first takes you straight to the (wrong) goal. Worst case: like a badly-guided DFS; it can explore everything, and it can get stuck in loops if there is no cycle checking. Like DFS in completeness (complete over finite state spaces with cycle checking). [demo]

Search Gone Wrong?

Alternative pacman Video

Extra Work? Failure to detect repeated states can cause exponentially more work (why?)

Graph Search In BFS, for example, we shouldn’t bother expanding the circled nodes (why?). (Figure: the example search tree with repeated states circled.)

Graph Search Very simple fix: never expand a state type twice Can this wreck completeness? Why or why not? How about optimality? Why or why not?

Some Hints Graph search is almost always better than tree search (when not?) Implement your closed list as a dict or set! Nodes are conceptually paths, but better to represent with a state, cost, last action, and reference to the parent node
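Those hints combine into a short graph-search sketch (a BFS variant; the closed list is a Python set, and nodes carry a state, last action, and parent reference instead of a full path; the names are made up):

```python
from collections import deque

def graph_search_bfs(graph, start, goal):
    """BFS graph search: never expand a state twice. Nodes are
    (state, parent_node, action) tuples; the plan is recovered by
    walking parent references back to the root."""
    closed = set()                              # states already expanded
    fringe = deque([(start, None, None)])
    while fringe:
        node = fringe.popleft()
        state = node[0]
        if state == goal:
            actions = []
            while node[1] is not None:          # walk back to the root
                actions.append(node[2])
                node = node[1]
            return list(reversed(actions))
        if state in closed:
            continue
        closed.add(state)
        for action, succ in graph.get(state, []):
            if succ not in closed:
                fringe.append((succ, node, action))
    return None

# graph maps state -> list of (action, successor) pairs:
graph = {"S": [("e", "a"), ("s", "b")],
         "a": [("e", "b")],
         "b": [("e", "G")]}
print(graph_search_bfs(graph, "S", "G"))  # ['s', 'e']
```

Representing nodes this way keeps memory per node constant, whereas storing a full path in every node costs O(depth) per node.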

Best First Greedy Search Algorithm: Greedy Best-First Search. Complete: Y* (with cycle checking over finite state spaces). Optimal: N. Time: O(b^m). Space: O(b^m). What do we need to do to make it complete? Can we make it optimal? Next class!

Uniform Cost Search What will UCS do for this graph? What does this mean for completeness? (Figure: a tiny graph with START, GOAL, and nodes a and b, with unit-cost edges forming a cycle at the start.)

Best First / Greedy Search Strategy: expand the node that seems closest to the goal, i.e., the one with the lowest heuristic value h. (Figure: the example graph with heuristic values at each node, from h=0 at the goal up to h=12.) [demo: greedy] python pacman.py -p SearchAgent -a fn=greedySearch,heuristic=manhattanHeuristic -l smallMaze --frameTime -1 python pacman.py -p SearchAgent -a fn=greedySearch,heuristic=euclideanHeuristic -l contoursMaze --frameTime -1

Example: Tree Search Use the class-participation expansion strategy, but organize alphabetically. (Figure: the tiny search graph with nodes S, a, b, c, d, e, f, h, p, q, r, G.)

5 Minute Break A Dan Gillick original