1
Problem Solving as Search
2
Chapter 3 : Problem Solving by Searching
“In which we see how an agent can find a sequence of actions that achieves its goals when no single action will do.” Such agents must be able to: formulate a goal, formulate the overall problem, and find a solution.
3
Appropriate environment for Searching Agents
Observable?? Deterministic?? Episodic?? Static?? Discrete?? Single- or multi-agent??
4
Problem Types
Deterministic, fully observable: single-state problem. The agent knows exactly which state it will be in; the solution is a sequence.
Non-observable: conformant problem. The agent may have no idea where it is; the solution (if any) is a sequence.
Nondeterministic and/or partially observable: contingency problem. Percepts provide new information about the current state; the solution is a tree or policy; search and execution are often interleaved.
Unknown state space: exploration problem (“online” search).
5
Problem Types Example: vacuum world Start in #5. Solution??
[Right, Suck]
7
Problem Types Conformant, start in {1,2,3,4,5,6,7,8} Solution??
8
Problem Types Conformant, start in {1,2,3,4,5,6,7,8}
e.g., Right goes to {2,4,6,8}. [Right, Suck, Left, Suck]
10
Problem Types Contingency, start in #5
Murphy’s Law: Suck can dirty a clean carpet. Local sensing: dirt and location only. Solution?? [Right, if dirt then Suck]
12
Problem Types
Deterministic, fully observable: single-state problem
Non-observable: conformant problem
Nondeterministic and/or partially observable: contingency problem
Unknown state space: exploration problem
13
Single-State Problem Formulation
A problem is defined by four items:
initial state
successor function (which implicitly defines all reachable states)
goal test
path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
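These four items map naturally onto a small programming interface. Below is a minimal Python sketch; the class and method names are illustrative and not taken from the slides.

```python
class SearchProblem:
    """Sketch of the four-part problem definition."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Yield (action, next_state) pairs; implicitly defines all reachable states."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if state satisfies the goal."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Step cost c(x, a, y), assumed to be >= 0."""
        return 1  # default: each action costs 1, so path cost = number of actions executed
```

A concrete problem, such as the cannibals-and-missionaries formulation below, supplies the successor function and goal test.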
14
Consider this problem: five missionaries and five cannibals
They want to cross a river using one canoe. The canoe can hold up to three people. There can never be more cannibals than missionaries on either side of the river. Aim: to get everyone safely across the river without any missionaries being eaten. States?? Actions?? Goal test?? Path cost??
15
Problem Representation : Cannibals and Missionaries
Initial State: we can record the number of cannibals, missionaries and canoes on each side of the river. The start state is therefore [5,5,1,0,0,0].
16
Problem Representation : Cannibals and Missionaries
Initial State: however, since the system is closed, we only need to represent one side of the river, as we can deduce the other side. We will represent the starting side of the river and omit the ending side. So the start state is [5,5,1].
17
Problem Representation : Cannibals and Missionaries
Goal State(s): [0,0,0] (technically also [0,0,1]).
18
Problem Representation : Cannibals and Missionaries
Successor Function (9 actions) Move one missionary. Move one cannibal. Move two missionaries. Move two cannibals. Move three missionaries. Move three cannibals. Move one missionary and one cannibal. Move one missionary and two cannibals. Move two missionaries and one cannibal.
20
Problem Representation : Cannibals and Missionaries
Successor Function: to be a little more mathematical/computer-like, we want to represent this in a true successor-function format, S(state) → state′.
Move one cannibal across the river:
S([x,y,1]) → [x−1,y,0]
S([x,y,0]) → [x+1,y,1]
[Note: this is a slight simplification; we also require 0 ≤ x, y, (x+1 or x−1) ≤ 5.]
21
Problem Representation : Cannibals and Missionaries
Successor Function (here x counts cannibals and y counts missionaries, as in the start state [5,5,1])
S([x,y,0]) → [x,y+1,1]   // one missionary
S([x,y,1]) → [x,y−1,0]
…
S([x,y,0]) → [x,y+2,1]   // two missionaries
S([x,y,1]) → [x,y−2,0]
S([x,y,0]) → [x+1,y+1,1]   // one of each
S([x,y,1]) → [x−1,y−1,0]
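As a rough illustration only, the whole successor function can be generated rather than enumerated. In the sketch below the names are hypothetical, the state is the [x, y, canoe] list used above (x cannibals and y missionaries on the starting side), and the safety test uses the usual reading of the constraint, i.e. cannibals may not outnumber missionaries on a bank that has any missionaries on it.

```python
from itertools import product

CAPACITY = 3   # the canoe holds up to three people
TOTAL = 5      # five cannibals and five missionaries in total

def is_safe(c, m):
    """Cannibals may not outnumber missionaries on a bank that has any
    missionaries on it (checked for both the start side and the far side)."""
    for cann, miss in ((c, m), (TOTAL - c, TOTAL - m)):
        if miss > 0 and cann > miss:
            return False
    return True

def successors(state):
    """Yield legal successor states of [x, y, canoe], where x is the number of
    cannibals and y the number of missionaries on the starting side."""
    x, y, canoe = state
    sign = -1 if canoe == 1 else +1            # canoe on the start side: people leave
    for dc, dm in product(range(CAPACITY + 1), repeat=2):
        if not 1 <= dc + dm <= CAPACITY:       # the nine legal crew combinations
            continue
        nx, ny = x + sign * dc, y + sign * dm
        if 0 <= nx <= TOTAL and 0 <= ny <= TOTAL and is_safe(nx, ny):
            yield [nx, ny, 1 - canoe]
```

For example, list(successors([5, 5, 1])) enumerates the states reachable from the start state by the nine actions above, after the bounds and safety checks.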
22
Problem Representation : Cannibals and Missionaries
Path Cost One unit per trip across the river.
23
Why do we look at “toy” problems?
The real world is absurdly complex, so the state space must be abstracted for problem solving.
(Abstract) state = set of real states.
(Abstract) action = complex combination of real actions, e.g., “Arad → Zerind” represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state “in Arad” must get to some real state “in Zerind”.
(Abstract) solution = set of real paths that are solutions in the real world.
Each abstract action should be “easier” than the original problem!
24
Implementation: General Tree Search
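A minimal sketch of the idea, written against the hypothetical SearchProblem interface above rather than the slide's own pseudocode: keep a fringe of unexpanded nodes, repeatedly remove one, test it, and otherwise add its successors to the fringe.

```python
def tree_search(problem, remove_next):
    """Generic tree-search sketch. remove_next(fringe) picks and removes the
    next node to expand; that single choice distinguishes the strategies."""
    fringe = [(problem.initial_state, [])]          # node = (state, path so far)
    while fringe:
        state, path = remove_next(fringe)
        if problem.goal_test(state):
            return path                             # action sequence reaching a goal
        for action, next_state in problem.successors(state):
            fringe.append((next_state, path + [action]))
    return None                                     # fringe exhausted: no solution found
```

For example, remove_next = lambda f: f.pop(0) expands nodes in FIFO order (breadth-first), while lambda f: f.pop() expands them in LIFO order (depth-first).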
25
Tree Search Example
27
Uninformed Search Strategies
Uninformed strategies use only the information available in the problem definition:
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
29
Breadth-First Search: expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
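A self-contained sketch, again assuming the hypothetical problem interface: the fringe is a FIFO queue, so new successors go at the end and the shallowest node is always expanded next.

```python
from collections import deque

def breadth_first_search(problem):
    """Breadth-first search sketch: always expand the shallowest unexpanded node."""
    fringe = deque([(problem.initial_state, [])])   # FIFO queue of (state, path)
    while fringe:
        state, path = fringe.popleft()              # oldest, i.e. shallowest, node first
        if problem.goal_test(state):
            return path
        for action, next_state in problem.successors(state):
            fringe.append((next_state, path + [action]))   # new successors go at the end
    return None
```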
33
Depth-First Search: expand the deepest unexpanded node. Implementation: the fringe is a LIFO queue (a stack), i.e., put successors at the front.
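Correspondingly, a sketch of depth-first search that uses a plain Python list as a LIFO fringe (a stack), so the most recently generated, i.e. deepest, node is expanded first.

```python
def depth_first_search(problem):
    """Depth-first search sketch: always expand the deepest unexpanded node."""
    fringe = [(problem.initial_state, [])]          # LIFO fringe (a stack)
    while fringe:
        state, path = fringe.pop()                  # newest, i.e. deepest, node first
        if problem.goal_test(state):
            return path
        for action, next_state in problem.successors(state):
            fringe.append((next_state, path + [action]))   # push on top of the stack
    return None
```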
45
Search Strategies
A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
completeness – does it always find a solution if one exists?
time complexity – number of nodes generated/expanded
space complexity – maximum number of nodes in memory
optimality – does it always find a least-cost solution?
46
Search Strategies
Time and space complexity are measured in terms of:
b – maximum branching factor of the search tree
d – depth of the least-cost solution
m – maximum depth of the state space (may be infinite)