1
PEAS: Medical diagnosis system
Performance measure: patient health, cost, reputation
Environment: patients, medical staff, insurers, courts
Actuators: screen display, email
Sensors: keyboard/mouse
2
Environment types
Dimensions for classifying environments (applied to Pacman, Backgammon, Diagnosis, Taxi):
Fully or partially observable
Single-agent or multi-agent
Deterministic or stochastic
Static or dynamic
Discrete or continuous
Known or unknown
3
Agent design
The environment type largely determines the agent design:
Partially observable => agent requires memory (internal state) and takes actions to obtain information
Stochastic => agent may have to prepare for contingencies, and must pay attention while executing plans
Multi-agent => agent may need to behave randomly
Static => agent has enough time to compute a rational decision
Continuous time => continuously operating controller
4
Agent types In order of increasing generality and complexity Simple reflex agents Reflex agents with state Goal-based agents Utility-based agents
5
Simple reflex agents
6
A reflex Pacman agent in Python

    class TurnLeftAgent(Agent):
        def getAction(self, percept):
            legal = percept.getLegalPacmanActions()
            current = percept.getPacmanState().configuration.direction
            if current == Directions.STOP:
                current = Directions.NORTH
            left = Directions.LEFT[current]
            if left in legal:
                return left
            if current in legal:
                return current
            if Directions.RIGHT[current] in legal:
                return Directions.RIGHT[current]
            if Directions.LEFT[left] in legal:
                return Directions.LEFT[left]
            return Directions.STOP
8
Pacman agent contd. Can we (in principle) extend this reflex agent to behave well in all standard Pacman environments?
9
Handling complexity
Writing behavioral rules or environment models becomes more difficult as environments grow more complex. E.g., for chess (32 pieces, 64 squares, ~100 moves):
As a state-to-state transition matrix (cf. HMMs, automata): ~10^38 pages, each state an atomic board string such as R.B.KB.RPPP..PPP..N..N…..PP….q.pp..Q..n..n..ppp..pppr.b.kb.r
In propositional logic (cf. circuits, graphical models): ~100 000 pages, with propositions such as WhiteKingOnC4@Move12 …
In first-order logic: 1 page, with quantified relations such as ∀x,y,t,color,piece On(color,piece,x,y,t) …
10
Reflex agents with state
11
Goal-based agents
12
Utility-based agents
13
Summary
An agent interacts with an environment through sensors and actuators
The agent function, implemented by an agent program running on a machine, describes what the agent does in all circumstances
PEAS descriptions define task environments; precise PEAS specifications are essential
More difficult environments require more complex agent designs and more sophisticated representations
14
CS 188: Artificial Intelligence
Search
Instructor: Stuart Russell
15
Today
Agents that Plan Ahead
Search Problems
Uninformed Search Methods: Depth-First Search, Breadth-First Search, Uniform-Cost Search
16
Agents that plan ahead Planning agents: Decisions based on predicted consequences of actions Must have a transition model: how the world evolves in response to actions Must formulate a goal Spectrum of deliberativeness: Generate complete, optimal plan offline, then execute Generate a simple, greedy plan, start executing, replan when something goes wrong
17
Video of Demo Replanning
18
Video of Demo Mastermind
19
Search Problems
20
A search problem consists of:
A state space
For each state, a set Actions(s) of allowable actions
A transition model Result(s,a)
A step cost function c(s,a,s’)
A start state and a goal test
A solution is a sequence of actions (a plan) which transforms the start state to a goal state
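A minimal Python sketch of this formal definition, assuming a toy adjacency-dict encoding (the class and field names are illustrative, not the actual course codebase):

```python
# Hypothetical illustration of the formal search problem definition.
class SearchProblem:
    """States, Actions(s), transition model Result(s,a), step costs, goal test."""
    def __init__(self, start, goal, graph):
        self.start = start    # start state
        self.goal = goal      # used by the goal test
        self.graph = graph    # {state: {action: (next_state, step_cost)}}

    def actions(self, s):
        """Actions(s): allowable actions in state s."""
        return list(self.graph.get(s, {}))

    def result(self, s, a):
        """Transition model Result(s, a)."""
        return self.graph[s][a][0]

    def step_cost(self, s, a):
        """Step cost c(s, a, s')."""
        return self.graph[s][a][1]

    def goal_test(self, s):
        return s == self.goal

# Tiny example: from S, action N reaches the goal G, action E reaches A.
problem = SearchProblem('S', 'G', {'S': {'N': ('G', 1), 'E': ('A', 1)}})
```

A solution here is the one-action plan ['N']; any sequence of actions whose composed transitions map the start state to a goal state would qualify.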
21
Search Problems Are Models
22
Example: Travelling in Romania
State space: cities
Actions: go to adjacent city
Transition model: Result(A, Go(B)) = B
Step cost: distance along road link
Start state: Arad
Goal test: is state == Bucharest?
Solution?
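A fragment of the Romania road map encoded as an adjacency dict (distances from the standard AIMA map), with a helper to compare candidate solutions by total step cost:

```python
# Fragment of the Romania road map; distances are the AIMA textbook values.
romania = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
}

def path_cost(path):
    """Sum of road distances along a sequence of adjacent cities."""
    return sum(romania[a][b] for a, b in zip(path, path[1:]))

# Two candidate solutions from Arad to Bucharest:
via_fagaras = path_cost(['Arad', 'Sibiu', 'Fagaras', 'Bucharest'])                    # 450
via_pitesti = path_cost(['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])  # 418
```

Both paths satisfy the goal test, but they have different costs; this distinction is exactly why least-cost search (later in this lecture) matters.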
23
What’s in a State Space?
The real world state includes every last detail of the environment; a search state abstracts away details not needed to solve the problem.
Problem: Pathing
States: (x,y) location
Actions: NSEW
Transition model: update location
Goal test: is (x,y) == END
State space size for an M-by-N maze: MN states
Problem: Eat-All-Dots
States: ((x,y) location, dot booleans)
Actions: NSEW
Transition model: update location and possibly a dot boolean
Goal test: dots all false
State space size: MN x 2^(MN) states (one dot boolean per square)
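A quick check of that state-count arithmetic on a tiny maze, assuming every square may hold a dot:

```python
M, N = 3, 4                   # a hypothetical tiny 3x4 maze
pathing_states = M * N        # one state per (x, y) location
# location x one boolean per possible dot position:
eat_all_dots_states = M * N * 2 ** (M * N)
# 12 locations for pathing, but 12 * 2^12 = 49152 states once dots are tracked
```

The exponential blow-up from the dot booleans is why choosing the right abstraction for the search state matters so much.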
24
Quiz: Safe Passage Problem: eat all dots while keeping the ghosts perma-scared What does the state space have to specify? (agent position, dot booleans, power pellet booleans, remaining scared time)
25
State Space Graphs and Search Trees
26
State Space Graphs State space graph: A mathematical representation of a search problem Nodes are (abstracted) world configurations Arcs represent transitions resulting from actions The goal test is a set of goal nodes (maybe only one) In a state space graph, each state occurs only once! We can rarely build this full graph in memory (it’s too big), but it’s a useful idea
27
More examples
29
Search Trees
A search tree: a “what if” tree of plans and their outcomes
The start state is the root node (“this is now”); children correspond to possible action outcomes (possible futures)
Nodes show states, but correspond to PLANS that achieve those states
For most problems, we can never actually build the whole tree
30
State Space Graphs vs. Search Trees
31
Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph with states S, a, b, G. How big is its search tree (from S)?
Important: Lots of repeated structure in the search tree!
32
Tree Search
33
Search Example: Romania
34
Searching with a Search Tree Search: Expand out potential plans (tree nodes) Maintain a frontier of partial plans under consideration Try to expand as few tree nodes as possible
35
General Tree Search
Important ideas: frontier, expansion, exploration strategy
Main question: which frontier nodes to explore?

    function TREE-SEARCH(problem) returns a solution, or failure
        initialize the frontier using the initial state of problem
        loop do
            if the frontier is empty then return failure
            choose a leaf node and remove it from the frontier
            if the node contains a goal state then return the corresponding solution
            expand the chosen node, adding the resulting nodes to the frontier
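The TREE-SEARCH pseudocode can be sketched in Python, with the exploration strategy factored out as a function that picks which frontier index to expand (the toy problem format is an assumption for illustration):

```python
def tree_search(problem, strategy):
    """Generic tree search. `strategy(frontier)` returns the index of the
    frontier entry to expand next. `problem` is a dict with 'start', 'goal',
    and 'graph' mapping state -> {action: next_state}.
    Returns a plan (list of actions) or None on failure."""
    frontier = [(problem['start'], [])]          # (state, plan-so-far) pairs
    while frontier:                              # empty frontier => failure
        state, plan = frontier.pop(strategy(frontier))
        if state == problem['goal']:             # goal test on removal
            return plan
        for action, nxt in problem['graph'].get(state, {}).items():
            frontier.append((nxt, plan + [action]))  # expansion
    return None

# DFS-like strategy: always expand the most recently added node.
problem = {'start': 'S', 'goal': 'G',
           'graph': {'S': {'a': 'A', 'b': 'B'}, 'A': {'g': 'G'}, 'B': {}}}
plan = tree_search(problem, lambda f: -1)
```

Plugging in `lambda f: 0` instead gives FIFO (BFS-like) behavior; the whole family of uninformed searches differs only in this choice.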
36
Depth-First Search
37
Strategy: expand a deepest node first
Implementation: Frontier is a LIFO stack
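A minimal stack-based sketch of this strategy (the graph encoding is an assumption for illustration; note there is no cycle checking, so it can loop forever on graphs with cycles):

```python
def depth_first_search(start, goal, graph):
    """Tree-style DFS: the frontier is a LIFO stack of (state, plan) pairs.
    `graph` maps a state to a dict {action: successor}."""
    stack = [(start, [])]
    while stack:
        state, plan = stack.pop()                  # LIFO: deepest node first
        if state == goal:
            return plan
        for action, nxt in graph.get(state, {}).items():
            stack.append((nxt, plan + [action]))   # push successors
    return None

graph = {'S': {'left': 'A', 'right': 'B'},
         'A': {'down': 'G'},
         'B': {'down': 'G'}}
# DFS returns *a* solution, not necessarily the shortest or cheapest one.
```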
38
Search Algorithm Properties
39
Complete: Guaranteed to find a solution if one exists?
Optimal: Guaranteed to find the least cost path?
Time complexity? Space complexity?
Cartoon of search tree: b is the branching factor, m is the maximum depth, solutions at various depths
The tree has 1 node at the root, b nodes at tier 1, b^2 nodes at tier 2, …, b^m nodes at tier m
Number of nodes in entire tree: 1 + b + b^2 + … + b^m = O(b^m)
40
Depth-First Search (DFS) Properties
What nodes does DFS expand? Some left prefix of the tree; it could process the whole tree! If m is finite, takes time O(b^m)
How much space does the frontier take? Only has siblings on path to root, so O(bm)
Is it complete? m could be infinite, so only if we prevent cycles (more later)
Is it optimal? No, it finds the “leftmost” solution, regardless of depth or cost
41
Breadth-First Search
42
Strategy: expand a shallowest node first
Implementation: Frontier is a FIFO queue
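A minimal FIFO-queue sketch of this strategy (same illustrative graph encoding as before, assumed rather than taken from the course code):

```python
from collections import deque

def breadth_first_search(start, goal, graph):
    """Tree-style BFS: the frontier is a FIFO queue, so shallowest nodes
    are expanded first, tier by tier."""
    frontier = deque([(start, [])])
    while frontier:
        state, plan = frontier.popleft()           # FIFO: shallowest node first
        if state == goal:
            return plan
        for action, nxt in graph.get(state, {}).items():
            frontier.append((nxt, plan + [action]))
    return None

graph = {'S': {'a': 'A', 'g': 'G'}, 'A': {'g2': 'G'}}
# BFS finds the plan with the fewest actions: ['g'], not ['a', 'g2'].
```

The only difference from the DFS sketch is popping from the front instead of the back of the frontier.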
43
Breadth-First Search (BFS) Properties
What nodes does BFS expand? Processes all nodes above the shallowest solution; if the depth of the shallowest solution is s, search takes time O(b^s)
How much space does the frontier take? Has roughly the last tier, so O(b^s)
Is it complete? s must be finite if a solution exists, so yes!
Is it optimal? Only if costs are all 1 (more on costs later)
44
Quiz: DFS vs BFS
45
When will BFS outperform DFS? When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]
46
Video of Demo Maze Water DFS/BFS (part 1)
47
Video of Demo Maze Water DFS/BFS (part 2)
48
Iterative Deepening
Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages
Run a DFS with depth limit 1. If no solution…
Run a DFS with depth limit 2. If no solution…
Run a DFS with depth limit 3. …
Isn’t that wastefully redundant? Generally most work happens in the lowest level searched, so not so bad!
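The idea can be sketched as a depth-limited DFS wrapped in a loop over increasing limits (illustrative encoding, with an assumed `max_depth` cap to keep the sketch terminating):

```python
def depth_limited_search(state, goal, graph, limit, plan=()):
    """DFS that refuses to extend plans longer than `limit` actions."""
    if state == goal:
        return list(plan)
    if limit == 0:
        return None                         # depth limit reached
    for action, nxt in graph.get(state, {}).items():
        found = depth_limited_search(nxt, goal, graph, limit - 1, plan + (action,))
        if found is not None:
            return found
    return None

def iterative_deepening_search(start, goal, graph, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ...: BFS-like shallowest
    solution, DFS-like O(bm) memory."""
    for limit in range(max_depth + 1):
        plan = depth_limited_search(start, goal, graph, limit)
        if plan is not None:
            return plan
    return None

graph = {'S': {'a': 'A', 'g': 'G'}, 'A': {'g2': 'G'}}
```

Even though `'a'` is tried before `'g'` at each level, the depth-1 pass finds the one-action plan before any depth-2 plan is ever completed.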
49
Finding a least-cost path
BFS finds the shortest path in terms of number of actions; it does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.
50
Uniform Cost Search
51
Strategy: expand a cheapest node first
Implementation: Frontier is a priority queue (priority: cumulative cost)
52
Uniform Cost Search (UCS) Properties
What nodes does UCS expand? Processes all nodes with cost less than the cheapest solution! If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε, so search takes time O(b^(C*/ε)) (exponential in effective depth)
How much space does the frontier take? Has roughly the last tier, so O(b^(C*/ε))
Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
Is it optimal? Yes! (Proof next lecture via A*)
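A priority-queue sketch of UCS using Python's `heapq` (this is the graph-search variant with an explored set; the weighted-dict graph encoding is an assumption for illustration):

```python
import heapq
from itertools import count

def uniform_cost_search(start, goal, graph):
    """Frontier is a priority queue keyed on cumulative path cost.
    `graph` maps state -> {successor: step_cost}. Returns (cost, path)."""
    tiebreak = count()       # breaks cost ties without comparing states
    frontier = [(0, next(tiebreak), start, [start])]
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)  # cheapest node first
        if state == goal:
            return cost, path
        if state in explored:
            continue                       # already reached more cheaply
        explored.add(state)
        for nxt, step in graph.get(state, {}).items():
            heapq.heappush(frontier, (cost + step, next(tiebreak),
                                      nxt, path + [nxt]))
    return None

graph = {'S': {'A': 1, 'G': 10}, 'A': {'G': 1}}
# UCS finds the cheapest path S->A->G (cost 2),
# not the fewer-actions path S->G (cost 10).
```

Because the goal test happens when a node is removed from the frontier, not when it is generated, the cheap two-step path wins even though the direct edge to G is generated first.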
53
Uniform Cost Issues
Remember: UCS explores increasing cost contours
The good: UCS is complete and optimal!
The bad: explores options in every “direction”; no information about goal location
We’ll fix that soon!
54
Video of Demo Empty UCS
55
Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 1)
56
Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 2)
57
Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 3)
58
Search Gone Wrong?