
1 CSE 511a: Artificial Intelligence Spring 2012
Lecture 2: Queue-Based Search 1/16/2012 Robert Pless – Wash U. Multiple slides over the course adapted from Kilian Weinberger, Dan Klein (or Stuart Russell or Andrew Moore)

2 Announcements
Project 0 (Python Tutorial) is posted. Due Thursday, Jan 24, midnight. Don't delay; Project 1 will be coming soon.

3 Question from last class
How many humans fail the Turing test? (Jon Stewart interview, "The Most Human Human".) The Loebner Prize contest is a person-vs-program test: judges converse with a human and a chatbot, then have to name which one is the human. The best current chatbot wins 29% of the time (so, humans "lose" 29% of the time). CAPTCHAs are very weak, computer-graded Turing tests; human success rates are 75–95%.

4 Today
Agents that Plan Ahead
Search Problems
Uninformed Search Methods (part review for some): Depth-First Search, Breadth-First Search, Uniform-Cost Search
Heuristic Search Methods (new for all): Greedy Search
But first, a Pacman demo

5 Review What is a rational Agent?

6 Review What is a rational Agent?
A rational agent acts to achieve the best expected outcome, given its current knowledge about the world.

7 Reflex Agents
Reflex agents: choose an action based on the current percept; may have memory or a model of the world's current state; do not consider the future consequences of their actions; act on how the world IS. Programs for reflex agents may often be characterized as lookup tables: IF (situation A) THEN (action b), or look at situation A and compute action b. Can a reflex agent be rational? That is, can a lookup table maximize your expected value? Only for finite, simple worlds. For any small finite world, such as Tic-Tac-Toe, it is easy to make this lookup table perfect. For the general world, the table is way too big, and reflex behaviors are likely to look foolish.
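As a toy illustration of the lookup-table idea (a hypothetical percept-to-action table, not code from the course), consider:

```python
# A reflex agent as a lookup table: current percept -> action.
# Only workable for tiny, fully enumerable worlds (Tic-Tac-Toe-sized).
REFLEX_TABLE = {
    "ghost ahead": "turn around",
    "food ahead": "move forward",
    "wall ahead": "turn left",
}

def reflex_agent(percept):
    # No lookahead: the agent never asks what happens next.
    return REFLEX_TABLE.get(percept, "move forward")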

8 Goal Based Agents
Goal-based agents: plan ahead; ask "what if"; make decisions based on (hypothesized) consequences of actions; must have a model of how the world evolves in response to actions; act on how the world WOULD BE.

9 Search Problems
A search problem consists of: a state space, a successor function, a start state and a goal test, and a cost function (for now we set cost = 1 for all steps). A solution is a sequence of actions (a plan) which transforms the start state into a goal state. [Figure: Pacman successors "N", 1.0 and "E", 1.0.] What problems might you have to solve in general? HOW do you represent the state space? What is your data structure? If the state space is a grid of dots plus a Pacman position, (array + x, y) is an obvious choice. For many other problems, choosing the representation is itself hard. (It is often less hard for games: games *define* simple state spaces, and often suggest a way to represent them.)
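A minimal sketch of how such a problem might be represented in Python. The class and method names here are illustrative, not the exact API of the course projects:

```python
# Illustrative search-problem interface: state space, successor function,
# start state, goal test, and unit step costs.
class GridSearchProblem:
    def __init__(self, grid, start, goal):
        self.grid = grid      # 2D array of cells; "#" marks a wall
        self.start = start    # (x, y) start position
        self.goal = goal      # (x, y) goal position

    def start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def successors(self, state):
        """Yield (next_state, action, step_cost) triples; cost = 1 per step."""
        x, y = state
        moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
        for action, (dx, dy) in moves.items():
            nx, ny = x + dx, y + dy
            # Assumes the maze border is all walls, so indexing stays in range.
            if self.grid[ny][nx] != "#":
                yield ((nx, ny), action, 1)
```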

10 Example: Romania
State space: cities. Successor function: go to an adjacent city with cost = distance. Start state: Arad. Goal test: is state == Bucharest? Solution?

11 State Space Sizes?
Search problem: eat all of the food. Pacman positions: 10 x 12 = 120. Food count: 30. Ghost positions: 12. Pacman facing: up, down, left, right.
90 * 2^30 + 30 * 2^29 ≈ 113 billion states: 90 ways you can not be on a food-dot spot, times 2^30 ways the food dots can be arranged, plus 30 ways you *can* be on a food-dot spot, times 2^29 ways the other food dots can be arranged. (If you're on a food-dot spot, then that spot has no food, so there are only 2^29 choices for the remaining food.)

12 Tree Search

13 State Space Graphs
State space graph: a mathematical representation of a search problem. Every search problem has a corresponding state space graph. The successor function is represented by arcs. We can rarely build this graph in memory (so we don't). [Figure: a ridiculously tiny search graph (nodes S, a, b, c, d, e, f, h, p, q, r, G) for a tiny search problem.]

14 State Space Sizes?
Search problem: eat all of the food. Pacman positions: 10 x 12 = 120. Food count: 30. Ghost positions: 12. Pacman facing: up, down, left, right.
90 * 2^30 + 30 * 2^29 ≈ 113 billion states: 90 options when Pacman is NOT on a food spot, times 2^30 ways the food spots could be filled (or not), plus 30 food spots that Pacman could be on, times 2^29 ways the remaining food spots could be filled (or not). You could also multiply by (12 choose 2) ghost positions and 4 ways Pacman could be oriented, BUT these last considerations can be ignored: Pacman's orientation does NOT affect the available actions (it only helps us watch the screen), and the ghosts are stuck in a place where they can't get at Pacman anyway.

15 Search Trees
A search tree: this is a "what if" tree of plans and outcomes. The start state is at the root node; children correspond to successors. Nodes contain states and correspond to PLANS to reach those states. For most problems, we can never actually build the whole tree. [Figure: Pacman's two successor plans, "N", 1.0 and "E", 1.0.]

16 Another Search Tree
Search: expand out possible plans; maintain a fringe of unexpanded plans; try to expand as few tree nodes as possible.

17 General Tree Search
Important ideas: the fringe, expansion, and the exploration strategy. Main question: which fringe nodes to explore? Detailed pseudocode is in the book!
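A hedged sketch of the general tree-search loop in Python; the fringe data structure is left abstract (concrete fringes appear on the next slides), and the book's pseudocode may differ in details:

```python
def tree_search(problem, fringe):
    """Generic tree search: the choice of fringe determines the strategy.
    `problem` is assumed to expose start_state(), is_goal(), successors(),
    as in the illustrative interface sketched earlier."""
    fringe.push((problem.start_state(), []))       # (state, plan so far)
    while not fringe.is_empty():
        state, plan = fringe.pop()
        if problem.is_goal(state):
            return plan                            # a solution: a list of actions
        for next_state, action, _cost in problem.successors(state):
            fringe.push((next_state, plan + [action]))
    return None                                    # no solution found
```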

18 Example: Tree Search
[Figure: the ridiculously tiny search graph (nodes S, a, b, c, d, e, f, h, p, q, r, G), expanded as a tree.]

19 State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the problem graph. We construct both on demand, and we construct as little as possible. [Figure: the state space graph alongside its corresponding search tree.]

20 States vs. Nodes
Nodes in state space graphs are problem states: they represent an abstracted state of the world; they have successors, can be goal / non-goal, and can have multiple predecessors. Nodes in search trees are plans: they represent a plan (sequence of actions) which results in the node's state; they have a problem state, one parent, a path length, a depth, and a cost. The same problem state may be achieved by multiple search tree nodes. [Figure: one problem state reached by two different search nodes, at depths 5 and 6.]

21 Review: Depth First Search
Strategy: expand the deepest node first. Implementation: the fringe is a LIFO stack. [Figure: DFS expansion order on the tiny search graph and its search tree.]

22 Review: Breadth First Search
Strategy: expand the shallowest node first. Implementation: the fringe is a FIFO queue. [Figure: BFS expansion order on the tiny search graph, shown by search tiers.]
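Plugging different fringes into the generic tree_search loop sketched above gives DFS and BFS. A minimal sketch; these Stack and Queue wrappers are illustrative, not the course's utility classes:

```python
from collections import deque

class Stack:                      # LIFO fringe -> depth-first search
    def __init__(self): self.items = []
    def push(self, item): self.items.append(item)
    def pop(self): return self.items.pop()         # most recently added
    def is_empty(self): return not self.items

class Queue:                      # FIFO fringe -> breadth-first search
    def __init__(self): self.items = deque()
    def push(self, item): self.items.append(item)
    def pop(self): return self.items.popleft()     # least recently added
    def is_empty(self): return not self.items

# dfs_plan = tree_search(problem, Stack())
# bfs_plan = tree_search(problem, Queue())
```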

23 Analysis of Search Algorithms
[demo]

24 Search Algorithm Properties
Complete? Guaranteed to find a solution if one exists? Optimal? Guaranteed to find the least-cost path? Time complexity? Space complexity?
Variables:
n = number of states in the problem
b = the average branching factor (the average number of successors)
C* = cost of the least-cost solution
s = depth of the shallowest solution
m = max depth of the search tree

25 DFS
DFS (Depth First Search): Complete N, Optimal N, Time O(b^LMAX) (infinite), Space O(LMAX) (infinite). [Figure: a small graph (START, a, b, GOAL) containing a cycle, so DFS can follow an infinite path.] Infinite paths make DFS incomplete… How can we fix this?

26 DFS
With cycle checking, DFS is complete.* It stores at most b nodes in each of m layers. [Figure: a search tree with 1 node at the root, then b nodes, b^2 nodes, …, down m tiers to b^m nodes.]
DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm).
* Or graph search – next lecture.

27 BFS
DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm).
BFS: Complete Y, Optimal N*, Time O(b^(s+1)), Space O(b^(s+1)).
When is BFS optimal? [Figure: search tree tiers of 1, b, b^2, … nodes; the shallowest solution is s tiers down, and the tree continues to b^m nodes.] BFS can have almost all the nodes on the frontier and has to keep track of them, hence the O(b^s) space.

28 Comparisons
When will BFS outperform DFS? When will DFS outperform BFS? BFS is better than DFS when the solution is nearby (and the branching factor is high?). DFS is better than BFS when many paths lead to a solution and the solution is far away.

29 Iterative Deepening
Iterative deepening uses DFS as a subroutine: do a DFS which only searches for paths of length 1 or less; if "1" failed, do a DFS which only searches paths of length 2 or less; if "2" failed, do a DFS which only searches paths of length 3 or less; … and so on. Idea: *if* growth is exponential, the cost of the last step dominates, so you can re-do all previous work for "free" (e.g. if the base b is 2). A sketch appears below.
DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm).
BFS: Complete Y, Optimal N*, Time O(b^(s+1)), Space O(b^(s+1)).
ID: Complete Y, Optimal N*, Time O(b^(s+1)), Space O(bs).
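A minimal sketch of iterative deepening as depth-limited DFS in a loop, again assuming the illustrative problem interface from earlier (not the course's exact code):

```python
def depth_limited_dfs(problem, state, plan, limit):
    """DFS that only considers paths of length <= limit."""
    if problem.is_goal(state):
        return plan
    if limit == 0:
        return None
    for next_state, action, _cost in problem.successors(state):
        result = depth_limited_dfs(problem, next_state, plan + [action], limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening(problem, max_depth=1000):
    # Paths of length 1 or less, then 2 or less, then 3 or less, ...
    for limit in range(1, max_depth + 1):
        plan = depth_limited_dfs(problem, problem.start_state(), [], limit)
        if plan is not None:
            return plan
    return None
```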

30 Cost sensitive search

31 Costs on Actions
[Figure: the tiny search graph with costs on its actions (edge weights ranging from 1 to 15).] Notice that BFS finds the shortest path in terms of number of transitions. It does not find the least-cost path. We will quickly cover an algorithm which does find the least-cost path.

32 Uniform Cost Search
Expand the cheapest node first: the fringe is a priority queue. [Figure: the weighted search graph and its search tree, annotated with cumulative path costs and cost contours.]

33 Priority Queue Refresher
A priority queue is a data structure in which you can insert and retrieve (key, value) pairs with the following operations: pq.push(key, value) inserts (key, value) into the queue; pq.pop() returns the key with the lowest value, and removes it from the queue. You can decrease a key's priority by pushing it again. Unlike a regular queue, insertions aren't constant time, usually O(log n). We'll need priority queues for cost-sensitive search methods.
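A sketch of such a priority queue built on Python's heapq module. The "decrease a key's priority by pushing it again" trick (leaving stale entries behind) is one common approach, not necessarily the course's implementation:

```python
import heapq

class PriorityQueue:
    def __init__(self):
        self.heap = []
        self.count = 0                 # tie-breaker so heapq never compares keys

    def push(self, key, value):
        heapq.heappush(self.heap, (value, self.count, key))   # O(log n)
        self.count += 1

    def pop(self):
        value, _, key = heapq.heappop(self.heap)              # lowest value first
        return key

    def is_empty(self):
        return not self.heap

pq = PriorityQueue()
pq.push("a", 5)
pq.push("b", 2)
pq.push("a", 1)        # "decrease key" by pushing again; the stale (5) entry stays behind
print(pq.pop())        # -> "a"
```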

34 Uniform Cost Search
DFS w/ Path Checking: Complete Y, Optimal N, Time O(b^(m+1)), Space O(bm).
BFS: Complete Y, Optimal N, Time O(b^(s+1)), Space O(b^(s+1)).
UCS: Complete Y*, Optimal Y, Time O(b^(1+C*/ε)), Space O(b^(1+C*/ε)). [Figure: C*/ε cost tiers with branching factor b, where ε is the minimum action cost.]
* UCS can fail if actions can get arbitrarily cheap. All of these are also bounded by the number of possible states, N, which isn't in the above charts and is assumed to be huge.
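A hedged sketch of uniform cost search using the PriorityQueue and problem interfaces sketched above (tree-search version, no closed list yet):

```python
def uniform_cost_search(problem):
    """Expand the cheapest fringe node first; returns a least-cost plan."""
    fringe = PriorityQueue()                     # priority = cumulative path cost
    fringe.push((problem.start_state(), [], 0), 0)
    while not fringe.is_empty():
        state, plan, cost = fringe.pop()
        if problem.is_goal(state):               # goal test at expansion time
            return plan
        for next_state, action, step_cost in problem.successors(state):
            new_cost = cost + step_cost
            fringe.push((next_state, plan + [action], new_cost), new_cost)
    return None
```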

35 Search Heuristics

36 Uniform Cost Issues
Remember: UCS explores increasing cost contours. The good: UCS is complete and optimal! The bad: it explores options in every "direction" and has no information about the goal location. [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 expanding outward from Start toward Goal.]
python pacman.py -l contoursMaze -p SearchAgent -a fn=ucs --frameTime -1
python pacman.py -p SearchAgent -a fn=ucs -l smallMaze --frameTime -1
[demo]

37 Search Heuristics
A heuristic is any estimate of how close a state is to a goal, designed for a particular search problem. Examples: Manhattan distance, Euclidean distance. [Figure: a Pacman example annotated with distances 10, 5, and 11.2.]
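Sketches of these two heuristics for grid positions, assuming (x, y) states and a known goal position (illustrative signatures, not the project's exact ones):

```python
import math

def manhattan_heuristic(state, goal):
    """Sum of horizontal and vertical distance; ignores walls."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean_heuristic(state, goal):
    """Straight-line distance; also ignores walls."""
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)
```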

38 Heuristics

39 Best First / Greedy Search
Expand the node that seems closest… What can go wrong?

40 Best First / Greedy Search
A common case: best-first takes you straight to the (wrong) goal. Worst case: like a badly-guided DFS; it can explore everything and can get stuck in loops if there is no cycle checking. Like DFS in completeness (finite states w/ cycle checking). [demo]
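A sketch of greedy best-first search, reusing the PriorityQueue sketch but ranking the fringe by the heuristic alone (illustrative; the course's greedySearch may differ):

```python
def greedy_search(problem, heuristic, goal):
    """Expand the node whose state *seems* closest to the goal."""
    fringe = PriorityQueue()                     # priority = heuristic estimate only
    start = problem.start_state()
    fringe.push((start, []), heuristic(start, goal))
    while not fringe.is_empty():
        state, plan = fringe.pop()
        if problem.is_goal(state):
            return plan
        for next_state, action, _cost in problem.successors(state):
            fringe.push((next_state, plan + [action]), heuristic(next_state, goal))
    return None
```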

41 Search Gone Wrong?

42 Alternative pacman Video

43

44 Extra Work? Failure to detect repeated states can cause exponentially more work (why?)

45 Graph Search
In BFS, for example, we shouldn't bother expanding the circled nodes (why?). [Figure: the BFS search tree with repeated states circled.]

46 Graph Search
Very simple fix: never expand a state twice. Can this wreck completeness? Why or why not? How about optimality? Why or why not?

47 Some Hints
Graph search is almost always better than tree search (when not?). Implement your closed list as a dict or set! Nodes are conceptually paths, but it is better to represent them with a state, cost, last action, and a reference to the parent node.
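A hedged sketch of that node representation and of a graph-search loop with a closed set, assuming a LIFO or FIFO fringe like the ones sketched earlier (illustrative, not the project's required structure):

```python
class Node:
    """A search node: state + cost + last action + reference to the parent node."""
    def __init__(self, state, cost=0, action=None, parent=None):
        self.state, self.cost, self.action, self.parent = state, cost, action, parent

    def plan(self):
        """Walk parent pointers back to the root to recover the action sequence."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def graph_search(problem, fringe):
    closed = set()                               # closed list as a set
    fringe.push(Node(problem.start_state()))
    while not fringe.is_empty():
        node = fringe.pop()
        if problem.is_goal(node.state):
            return node.plan()
        if node.state in closed:                 # never expand a state twice
            continue
        closed.add(node.state)
        for next_state, action, step_cost in problem.successors(node.state):
            fringe.push(Node(next_state, node.cost + step_cost, action, node))
    return None
```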

48 Best First / Greedy Search
Greedy Best-First Search: Complete Y*, Optimal N, Time O(b^m), Space O(b^m). What do we need to do to make it complete? Can we make it optimal? Next class!

49 Uniform Cost Search
What will UCS do for this graph? What does this mean for completeness? [Figure: a small graph (nodes START, a, b, GOAL) with costs on its edges.]

50 Best First / Greedy Search
Strategy: expand the node that appears closest to the goal. [Figure: the weighted search graph with a heuristic value h annotated at each node.]
python pacman.py -p SearchAgent -a fn=greedySearch,heuristic=manhattanHeuristic -l smallMaze --frameTime -1
python pacman.py -p SearchAgent -a fn=greedySearch,heuristic=euclideanHeuristic -l contoursMaze --frameTime -1
[demo: greedy]

51

52 Example: Tree Search
Use the class-participation expansion strategy, but organize alphabetically. [Figure: the tiny search graph again.]

53 5 Minute Break A Dan Gillick original

