CS B659 Intelligent Robotics: Planning Under Uncertainty
[Architecture diagram: sensors feed perception (e.g., filtering, SLAM); prior knowledge (often probabilistic) and offline learning (e.g., calibration) inform decision-making (control laws, optimization, planning), which commands the actuators and produces actions.]
Dealing with Uncertainty
Sensing uncertainty: localization error, noisy maps, misclassified objects.
Motion uncertainty: noisy natural processes, odometry drift, imprecise actuators, uncontrolled agents (treated as part of the state).
Motion Uncertainty
[Figure: start s, goal g, and an obstacle.]
Dealing with Motion Uncertainty
Hierarchical strategies: use the generated trajectory as input to a low-level feedback controller.
Reactive strategies:
  Approach #1: re-optimize when the state has diverged from the planned path (online optimization); a minimal replanning-loop sketch follows this slide.
  Approach #2: precompute optimal controls over the state space, and just read off the new value from the perturbed state (offline optimization).
Proactive strategies: explicitly consider future uncertainty.
  Heuristics (e.g., grow obstacles, penalize nearness).
  Markov Decision Processes.
  Online and offline approaches.
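A minimal sketch of the reactive Approach #1 above, not from the slides: `plan_path`, `execute_step`, and `observe_state` are hypothetical stand-ins for a planner, a low-level controller, and a state estimator.

```python
import numpy as np

def reactive_replanning(start, goal, plan_path, execute_step, observe_state,
                        tolerance=0.1, max_steps=1000):
    """Re-plan online whenever the observed state diverges from the current plan."""
    path = plan_path(start, goal)              # initial plan, computed as if motion were deterministic
    for _ in range(max_steps):
        x = np.asarray(observe_state())
        if np.linalg.norm(x - np.asarray(goal)) < tolerance:
            return True                        # close enough to the goal
        # Distance from the current state to the nearest waypoint on the plan.
        divergence = min(np.linalg.norm(x - np.asarray(p)) for p in path)
        if divergence > tolerance:
            path = plan_path(tuple(x), goal)   # Approach #1: re-optimize from the perturbed state
        execute_step(path)                     # hand the current plan to the low-level controller
    return False
```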
Proactive Strategies to Handle Motion Uncertainty
[Figure: start s, goal g, and an obstacle.]
Dynamic Collision Avoidance Assuming Worst-Case Behaviors
Optimizing safety in real time under a worst-case behavior model.
[Figure: robot with predicted obstacle regions at T=1 and T=2; TTPF = 2.6.]
Markov Decision Process Approaches (Alterovitz et al., 2007)
Dealing with Sensing Uncertainty
Reactive heuristics often work well: optimistic costs, assume the most-likely state, penalize uncertainty.
Proactive strategies: explicitly consider future uncertainty.
  Active sensing: reward actions that yield information gain.
  Partially Observable Markov Decision Processes (POMDPs).
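These reactive heuristics can be combined into a single action-scoring rule. The following is my own sketch, not from the slides: `path_cost_in(state, action)` and `predict_belief(belief, action)` are hypothetical helpers; the entropy term rewards actions that yield information gain (equivalently, penalizes remaining uncertainty).

```python
import math

def entropy(belief):
    """Shannon entropy of a discrete belief {state: probability}."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def choose_action(actions, belief, path_cost_in, predict_belief, info_weight=1.0):
    """Score actions optimistically in the most-likely state, minus an information-gain bonus."""
    s_map = max(belief, key=belief.get)          # reactive heuristic: assume the most-likely state
    def score(a):
        info_gain = entropy(belief) - entropy(predict_belief(belief, a))
        return path_cost_in(s_map, a) - info_weight * info_gain
    return min(actions, key=score)
```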
Assuming no obstacles in the unknown region and taking the shortest path to the goal.
[Sequence of figures: the robot replans the shortest path as the unknown region is revealed.]
This works well for navigation because the space of all maps is too huge, and certainty is monotonically nondecreasing.
What if the sensor were directed (e.g., a camera)?
[Sequence of figures: the robot navigates while its directed field of view reveals the map.]
In the final frame, it would have made sense to turn a bit more to see more of the unknown map.
Another Example
[Figure. Locked? Key?]
Active Discovery of Hidden Intent
[Figures: no motion vs. perpendicular motion.]
Main Approaches to Partial Observability
Ignore and react: doesn't know what it doesn't know.
Model and react: knows what it doesn't know.
Model and predict: knows what it doesn't know AND what it will/won't know in the future.
Moving from the first approach to the last: better decisions, but more components in the implementation and harder computationally.
Uncertainty Models
Model #1: Nondeterministic uncertainty. f(x, u) -> a set of possible successors.
Model #2: Probabilistic uncertainty. P(x' | x, u): a probability distribution over successors x', given state x and control u (Markov assumption).
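A toy sketch contrasting the two models, using my own 1-D integer example (not from the slides) with a bounded disturbance:

```python
def successors_nondeterministic(x, u):
    """Model #1: f(x, u) returns a *set* of possible successors (unknown disturbance in {-1, 0, +1})."""
    return {x + u + e for e in (-1, 0, 1)}

def successors_probabilistic(x, u):
    """Model #2: P(x' | x, u), a distribution over successors given only the
    current state and control (Markov assumption)."""
    return {x + u - 1: 0.1, x + u: 0.8, x + u + 1: 0.1}
```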
Nondeterministic Uncertainty: Reasoning with Sets
x' = x + e, with e in [-a, a].
Starting from a known x(0), the set of possible states grows each step: at t = 1 it is [-a, a], at t = 2 it is [-2a, 2a].
Belief state: x(t) in [-ta, ta].
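A minimal sketch of this set-based prediction for the scalar model x' = x + e, e in [-a, a]:

```python
def propagate_interval(lo, hi, a):
    """One prediction step: the disturbance e in [-a, a] widens the belief interval by a on each side."""
    return lo - a, hi + a

belief = (0.0, 0.0)                                        # x(0) = 0, known exactly
for t in range(1, 4):
    belief = propagate_interval(*belief, a=1.0)
    print(f"t={t}: x(t) in [{belief[0]}, {belief[1]}]")    # [-1, 1], [-2, 2], [-3, 3]
```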
Uncertainty with Sensing
Plan = policy (a mapping from states to actions).
The policy achieves the goal for every possible sensor result in the belief state.
[Figure: the plan alternates Move and Sense steps; sensing splits the belief into outcomes 1-4.]
Observations should be chosen wisely to keep the branching factor low.
Special case: fully observable state.
Target Tracking
[Figure: robot and target among obstacles.]
The robot must keep a target in its field of view.
The robot has a prior map of the obstacles, but it does not know the target's trajectory in advance.
Target-Tracking Example
Time is discretized into small steps of unit duration.
At each time step, each of the two agents moves by at most one increment along a single axis; the two moves are simultaneous.
The robot senses the new position of the target at each step.
The target is not influenced by the robot (non-adversarial, non-cooperative target).
[Figure: robot and target on a grid.]
Time-Stamped States (no cycles possible)
State = (robot-position, target-position, time).
In each state, the robot can execute 5 possible actions: {stop, up, down, right, left}.
Each action has 5 possible outcomes (one for each possible action of the target), with some probability distribution. [Potential collisions are ignored to simplify the presentation.]
Example: the action "right" from state ([i,j], [u,v], t) leads to one of ([i+1,j], [u,v], t+1), ([i+1,j], [u-1,v], t+1), ([i+1,j], [u+1,v], t+1), ([i+1,j], [u,v-1], t+1), or ([i+1,j], [u,v+1], t+1).
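A sketch of successor generation for this time-stamped state space. The uniform target distribution is purely a placeholder assumption; the slides only say "some probability distribution".

```python
MOVES = {"stop": (0, 0), "up": (0, 1), "down": (0, -1), "right": (1, 0), "left": (-1, 0)}

def successors(state, robot_action, target_dist=None):
    """Return {next_state: probability} for one robot action; collisions are ignored, as in the slide."""
    (ri, rj), (u, v), t = state
    di, dj = MOVES[robot_action]
    target_dist = target_dist or {m: 1.0 / len(MOVES) for m in MOVES}   # placeholder target model
    out = {}
    for move, p in target_dist.items():
        du, dv = MOVES[move]
        nxt = ((ri + di, rj + dj), (u + du, v + dv), t + 1)
        out[nxt] = out.get(nxt, 0.0) + p
    return out

# Example: the 'right' action from ([0,0], [0,0], 0) yields five successor states.
print(successors(((0, 0), (0, 0), 0), "right"))
```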
Rewards and Costs
The robot must keep seeing the target as long as possible.
Each state where it does not see the target is terminal.
The reward collected in every non-terminal state is 1; it is 0 in each terminal state. [The sum of the rewards collected in an execution run is exactly the amount of time the robot sees the target.]
There is no cost for moving vs. not moving.
Expanding the State/Action Tree
[Figure: tree expanded from the current state out to horizon 1, ..., horizon h.]
Assigning Rewards
Terminal states: states where the target is not visible.
Rewards: 1 in non-terminal states; 0 in terminal states.
But how do we estimate the utility of a leaf at horizon h?
Estimating the Utility of a Leaf
Compute the shortest distance d for the target to escape the robot's current field of view.
If the maximal velocity v of the target is known, estimate the utility of the state as d/v (a conservative estimate).
[Figure: robot, target, and escape distance d.]
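A sketch of this conservative estimate on a grid, using breadth-first search from the target to the nearest cell outside the field of view. `visible` and `neighbors` are hypothetical helpers for the robot's current field of view and the grid connectivity.

```python
from collections import deque

def leaf_utility(target_cell, visible, neighbors, v_max):
    """Estimate leaf utility as d / v_max, where d is the target's shortest escape distance."""
    queue, seen = deque([(target_cell, 0)]), {target_cell}
    while queue:
        cell, d = queue.popleft()
        if not visible(cell):
            return d / v_max              # conservative: the target escapes no sooner than this
        for n in neighbors(cell):
            if n not in seen:
                seen.add(n)
                queue.append((n, d + 1))
    return float("inf")                   # the target can never leave the field of view
```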
Selecting the Next Action
Compute the optimal policy over the state/action tree, using the estimated utilities at the leaf nodes.
Execute only the first step of this policy.
Repeat everything again at t+1 (sliding horizon).
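A sketch of this sliding-horizon scheme under the reward model above: back up expected values through the tree (reward 1 per non-terminal step, leaf utility at the frontier) and execute only the first action. `successors(state, action)`, `actions`, `target_visible(state)`, and `leaf_utility(state)` are assumed helpers in the spirit of the earlier sketches.

```python
def value(state, h, successors, actions, target_visible, leaf_utility):
    """Return (expected value, best action) for a depth-h lookahead from `state`."""
    if not target_visible(state):
        return 0.0, None                        # terminal state: target lost, reward 0
    if h == 0:
        return leaf_utility(state), None        # estimated utility at the frontier (e.g., d/v)
    best_v, best_a = float("-inf"), None
    for a in actions:
        ev = sum(p * value(s2, h - 1, successors, actions, target_visible, leaf_utility)[0]
                 for s2, p in successors(state, a).items())
        if ev > best_v:
            best_v, best_a = ev, a
    return 1.0 + best_v, best_a                 # collect reward 1 in this non-terminal state

def next_action(state, h, *model):
    """Sliding horizon: plan to depth h, execute the first step, then replan at t+1."""
    return value(state, h, *model)[1]
```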
Pure Visual Servoing
Computing and Using a Policy
Next Week: more algorithm details.
Final Information
Final presentations: 20 minutes + 10 minutes of questions.
Final reports due 5/3. Each must include a technical report with introduction, background, methods, results, and conclusion.
Reports may include an auxiliary website with figures / examples / implementation details; I will not review code.