
2 Bayesian Filtering Dieter Fox

3 Probabilistic Robotics Key idea: Explicit representation of uncertainty (using the calculus of probability theory) Perception = state estimation Control = utility optimization

4 Bayes Filters: Framework Given: Stream of observations z and action data u. Sensor model P(z|x). Action model P(x|u,x’). Prior probability of the system state P(x). Wanted: Estimate of the state x of a dynamical system. The posterior of the state is also called the belief: Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)

5 Markov Assumption Underlying Assumptions Static world Independent noise Perfect model, no approximation errors

6 Bayes Filters (z = observation, u = action, x = state) The derivation uses Bayes rule, the Markov assumption, and the theorem of total probability: Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
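
The recursive update above can be sketched as a discrete Bayes filter; the state list and the `motion_model` / `sensor_model` callables are illustrative stand-ins, not part of the original slides.

```python
import numpy as np

def bayes_filter_step(bel, u, z, motion_model, sensor_model, states):
    """One predict/correct cycle of the discrete Bayes filter.

    bel[i] = Bel(x_i) from the previous step;
    motion_model(x, u, x_prev) = P(x | u, x_prev);
    sensor_model(z, x) = P(z | x).
    """
    # Prediction (total probability): Bel'(x) = sum_x' P(x|u,x') Bel(x')
    bel_bar = np.array([
        sum(motion_model(x, u, xp) * bel[j] for j, xp in enumerate(states))
        for x in states
    ])
    # Correction (Bayes rule): Bel(x) = eta * P(z|x) * Bel'(x)
    bel_new = np.array([sensor_model(z, x) for x in states]) * bel_bar
    return bel_new / bel_new.sum()
```

The normalization constant η is computed implicitly by dividing by the sum at the end.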

7 Bayes Filters are Familiar! Kalman filters Particle filters Hidden Markov models Dynamic Bayesian networks Partially Observable Markov Decision Processes (POMDPs)

8 Localization Given Map of the environment. Sequence of sensor measurements. Wanted Estimate of the robot’s position. Problem classes Position tracking Global localization Kidnapped robot problem (recovery) “Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities.” [Cox ’91]

9 Bayes Filters for Robot Localization

10 Probabilistic Kinematics Odometry information is inherently noisy: the motion model p(x|u,x’) spreads probability mass over the poses x reachable from x’ under command u.

11 Proximity Measurement Measurement can be caused by … a known obstacle. cross-talk. an unexpected obstacle (people, furniture, …). missing all obstacles (total reflection, glass, …). Noise is due to uncertainty … in measuring distance to known obstacle. in position of known obstacles. in position of additional obstacles. whether obstacle is missed.

12 Mixture Density How can we determine the model parameters?
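
One common answer, sketched here, is the four-component range-finder mixture (hit, short, max, random); the weights and noise parameters below are purely illustrative, and in practice they are fit to logged data, e.g. by expectation maximization.

```python
import math

def beam_model(z, z_exp, z_max, w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
               sigma=20.0, lam=0.01):
    """Mixture likelihood p(z | x) for one range reading (illustrative params)."""
    # Gaussian around the expected distance (known obstacle)
    p_hit = math.exp(-0.5 * ((z - z_exp) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # Exponential for unexpected obstacles in front of the known one
    p_short = lam * math.exp(-lam * z) if z <= z_exp else 0.0
    # Point mass at max range (missed obstacle, e.g. total reflection)
    p_max = 1.0 if z >= z_max else 0.0
    # Uniform component for unexplainable random measurements
    p_rand = 1.0 / z_max
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

Readings at the expected distance and at max range score much higher than readings in between, matching the measured sonar/laser histograms on the next slides.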

13 Raw Sensor Data Measured distances for expected distance of 300 cm (sonar / laser).

14 Approximation Results Sonar and laser, for expected distances of 300 cm and 400 cm.

15 Representations for Bayesian Robot Localization Kalman filters (late 80s): Gaussians, approximately linear models; position tracking. Discrete approaches (’95): topological representation (’95) with uncertainty handling (POMDPs), occasional global localization, recovery; grid-based, metric representation (’96) with global localization, recovery. Particle filters (’99): sample-based representation; global localization, recovery. Multi-hypothesis (’00): multiple Kalman filters; global localization, recovery.

16 Discrete Grid Filters

17 Piecewise Constant Representation

18 Grid-based Localization
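
A minimal 1D sketch of grid-based localization over a piecewise constant belief, assuming a toy corridor world with door cells and a noisy door detector (all probabilities illustrative):

```python
import numpy as np

def grid_localize(belief, doors, sees_door, move=1):
    """One predict/correct cycle on a 1D histogram belief.

    belief: probability per grid cell; doors: boolean array marking door
    cells; sees_door: current (noisy) door detection.
    """
    # Predict: shift by `move` cells with simple motion noise
    pred = 0.8 * np.roll(belief, move) + 0.1 * belief + 0.1 * np.roll(belief, move + 1)
    # Correct: assume the door sensor is right 80% of the time
    like = np.where(doors == sees_door, 0.8, 0.2)
    post = like * pred
    return post / post.sum()
```

Starting from a uniform belief, one door detection already concentrates probability mass on the door cells; repeated cycles disambiguate between them.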

19 Sonars and Occupancy Grid Map

20 Tree-based Representation Idea: Represent density using a variant of Octrees

21 Tree-based Representations Efficient in space and time Multi-resolution

22 Particle Filters

23 Particle Filters  Represent belief by random samples  Estimation of non-Gaussian, nonlinear processes  Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter  Filtering: [Rubin 88], [Gordon et al. 93], [Kitagawa 96]  Computer vision: [Isard and Blake 96, 98]  Dynamic Bayesian networks: [Kanazawa et al. 95]

24 Importance Sampling Weight samples: w = f / g (target density f over proposal density g).
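
A small numeric sketch of the w = f/g weighting, using a standard normal target f and a wider normal proposal g (both chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target f: standard normal; proposal g: wider normal N(0, 2^2).
def f(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def g(x):
    return np.exp(-0.5 * (x / 2) ** 2) / (2 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=100_000)   # samples drawn from g
w = f(x) / g(x)                          # importance weights w = f / g
# Weighted estimate of E_f[x^2]; should be close to 1 for a standard normal
est = np.sum(w * x**2) / np.sum(w)
```

Even though no sample was drawn from f, the weighted samples estimate expectations under f.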

25 Particle Filter Algorithm Draw x_{t-1}^i from Bel(x_{t-1}). Draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1}). Importance factor for x_t^i: w_t^i ∝ p(z_t | x_t^i) (target over proposal).
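
The three steps (draw from the motion model, weight by p(z|x), resample) can be sketched as one update; the `sample_motion` and `likelihood` callables are placeholders for concrete models.

```python
import numpy as np

def particle_filter_step(particles, u, z, sample_motion, likelihood, rng):
    """One cycle of the basic particle filter (sketch).

    sample_motion(x, u) draws x_t ~ p(x_t | x_{t-1}, u);
    likelihood(z, x) evaluates p(z | x), the importance factor.
    """
    n = len(particles)
    # 1. Sample: propagate each particle through the motion model
    proposed = np.array([sample_motion(x, u) for x in particles])
    # 2. Weight: importance factor w_i = p(z | x_i), then normalize
    w = np.array([likelihood(z, x) for x in proposed])
    w /= w.sum()
    # 3. Resample: draw n particles with probability proportional to weight
    idx = rng.choice(n, size=n, p=w)
    return proposed[idx]
```
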

26 Particle Filters

27 Sensor Information: Importance Sampling

28 Robot Motion

29 Sensor Information: Importance Sampling

30 Robot Motion

31 Sample-based Localization (sonar)

32 Using Ceiling Maps for Localization [Dellaert et al. 99]

33 Vision-based Localization Sensor model P(z|x): compare the measurement z with the image h(x) predicted from the map.

34 Under a Light Measurement z and resulting P(z|x).

35 Next to a Light Measurement z and resulting P(z|x).

36 Elsewhere Measurement z and resulting P(z|x).

37 Global Localization Using Vision

38 Localization for AIBO robots

39 Adaptive Sampling

40 KLD-sampling Idea: Assume we know the true belief. Represent this belief as a multinomial distribution. Determine the number of samples such that, with probability (1 - δ), the KL distance between the true posterior and the sample-based approximation is less than ε. Observation: For fixed δ and ε, the number of samples depends only on the number k of bins with support.
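
The resulting sample-count bound can be sketched as follows, using the Wilson-Hilferty approximation of the chi-square quantile; the exact constants may differ slightly from the original implementation.

```python
from statistics import NormalDist

def kld_sample_count(k, epsilon=0.05, delta=0.01):
    """Samples needed so that, with prob. 1 - delta, the KL distance between
    the true posterior and its sample approximation stays below epsilon.
    Depends only on k, the number of bins with support (sketch)."""
    if k <= 1:
        return 1
    z = NormalDist().inv_cdf(1.0 - delta)   # upper (1 - delta) normal quantile
    a = 2.0 / (9.0 * (k - 1))
    # Wilson-Hilferty approximation of the chi-square_{k-1} quantile
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3)
```

The count grows roughly linearly in k, which is why adaptive sampling uses few particles once the belief is focused.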

41 Example Run Sonar

42 Example Run Laser

43 Kalman Filters

44 Bayes Filter Reminder Prediction: Bel’(x_t) = ∫ p(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}. Correction: Bel(x_t) = η p(z_t | x_t) Bel’(x_t).

45 Gaussians --   Univariate  Multivariate

46 Gaussians and Linear Functions

47 Kalman Filter Updates in 1D

48 Kalman Filter Algorithm 1. Algorithm Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t): 2. Prediction: 3. μ̄_t = A_t μ_{t-1} + B_t u_t 4. Σ̄_t = A_t Σ_{t-1} A_tᵀ + R_t 5. Correction: 6. K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹ 7. μ_t = μ̄_t + K_t (z_t − C_t μ̄_t) 8. Σ_t = (I − K_t C_t) Σ̄_t 9. Return μ_t, Σ_t
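
The algorithm above translates directly to NumPy; A, B, C, R, Q are the usual linear-Gaussian model matrices (motion, control, measurement, and the two noise covariances):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter cycle in the notation of the algorithm above."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction: Kalman gain, then mean and covariance update
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```
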

49 Nonlinear Dynamic Systems Most realistic robotic problems involve nonlinear functions

50 Linearity Assumption Revisited

51 Non-linear Function

52 EKF Linearization (1)

53 EKF Linearization (2)

54 EKF Linearization (3)

55 Particle Filter Projection

56 Density Extraction

57 Sampling Variance

58 EKF Algorithm 1. Algorithm Extended_Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t): 2. Prediction: 3. μ̄_t = g(u_t, μ_{t-1}) 4. Σ̄_t = G_t Σ_{t-1} G_tᵀ + R_t 5. Correction: 6. K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + Q_t)⁻¹ 7. μ_t = μ̄_t + K_t (z_t − h(μ̄_t)) 8. Σ_t = (I − K_t H_t) Σ̄_t 9. Return μ_t, Σ_t
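
A sketch of the EKF cycle above; g and h are the nonlinear motion and measurement models, while G and H are callables returning their Jacobians at the current estimate:

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF cycle: linearize g and h around the current estimate."""
    # Prediction through the nonlinear motion model
    mu_bar = g(u, mu)
    Gt = G(u, mu)                      # Jacobian of g at (u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Correction through the linearized measurement model
    Ht = H(mu_bar)                     # Jacobian of h at mu_bar
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new
```

With linear g and h this reduces exactly to the Kalman filter of the previous slides.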

59 Landmark-based Localization

60 EKF Prediction Step

61 EKF Observation Prediction Step

62 EKF Correction Step

63 Estimation Sequence (1)

64 Estimation Sequence (2)

65 Comparison to GroundTruth

66 EKF Summary Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2). Not optimal! Can diverge if nonlinearities are large! Works surprisingly well even when all assumptions are violated!

67 Linearization via Unscented Transform (plots compare EKF and UKF)

68 UKF Sigma-Point Estimate (2) (EKF vs. UKF)

69 UKF Sigma-Point Estimate (3) (EKF vs. UKF)

70 Unscented Transform Sigma points: χ⁰ = μ, χ^i = μ ± (√((n+λ)Σ))_i. Weights: w_m⁰ = λ/(n+λ), w_c⁰ = w_m⁰ + (1 − α² + β), w^i = 1/(2(n+λ)) otherwise. Pass the sigma points through the nonlinear function, then recover mean and covariance from the weighted transformed points.
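
The four steps can be sketched as follows, using the scaled unscented transform with α = 1, β = 2, κ = 0 for simplicity (other parameterizations exist):

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Pass sigma points through a nonlinear f; recover mean and covariance."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root: columns of S satisfy S @ S.T = (n + lam) * Sigma
    S = np.linalg.cholesky((n + lam) * Sigma)
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    # Mean and covariance weights
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    # Propagate and recover the moments
    ys = np.array([f(p) for p in pts])
    mu_y = wm @ ys
    Sigma_y = sum(wc[i] * np.outer(ys[i] - mu_y, ys[i] - mu_y)
                  for i in range(2 * n + 1))
    return mu_y, Sigma_y
```

For a linear f the transform recovers the exact mean and covariance; no Jacobians are required.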

71 UKF Prediction Step

72 UKF Observation Prediction Step

73 UKF Correction Step

74 EKF Correction Step

75 Estimation Sequence (EKF vs. PF vs. UKF)

76 Estimation Sequence (EKF vs. UKF)

77 Prediction Quality (EKF vs. UKF)

78 UKF Summary Highly efficient: same asymptotic complexity as the EKF, typically a constant factor slower in practical applications. Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (EKF only the first). Derivative-free: no Jacobians needed. Still not optimal!

79 SLAM: Simultaneous Localization and Mapping

80 Mapping with Raw Odometry

81 SLAM: Simultaneous Localization and Mapping Full SLAM: p(x_{1:t}, m | z_{1:t}, u_{1:t}). Online SLAM: p(x_t, m | z_{1:t}, u_{1:t}) = ∫…∫ p(x_{1:t}, m | z_{1:t}, u_{1:t}) dx_1 … dx_{t-1}; the integrations are typically done one at a time.

82 SLAM: Mapping with Kalman Filters Map with N landmarks: (2N+3)-dimensional Gaussian. Can handle hundreds of dimensions.

83

84

85 Map Correlation matrix

86 Graph-SLAM Full SLAM technique Generates probabilistic links Computes map only occasionally Based on Information Filter form

87 Graph-SLAM Idea

88 Robot Poses and Scans [Lu and Milios 1997] Successive robot poses connected by odometry Sensor readings yield constraints between poses Constraints represented by Gaussians Globally optimal estimate

89 Loop Closure Use scan patches to detect loop closure. Add new position constraints. Deform the network based on covariances of matches. (Figures: before and after loop closure.)

90 Efficient Map Recovery Minimize the constraint function J_GraphSLAM using standard optimization techniques (gradient descent, Levenberg-Marquardt, conjugate gradient).
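
A toy 1D version of this minimization by gradient descent; real Graph-SLAM systems optimize full poses with Levenberg-Marquardt or conjugate gradient, and the constraint format here is illustrative.

```python
import numpy as np

def optimize_pose_graph(n, constraints, iters=2000, lr=0.1):
    """Minimize the quadratic constraint function J over n 1D poses.

    constraints: list of (i, j, d, w) meaning pose[j] - pose[i] ≈ d
    with weight w (inverse variance of the link).
    """
    p = np.zeros(n)
    for _ in range(iters):
        grad = np.zeros(n)
        for i, j, d, w in constraints:
            r = p[j] - p[i] - d          # residual of this link
            grad[j] += 2 * w * r
            grad[i] -= 2 * w * r
        grad[0] = 0.0                    # anchor the first pose (gauge freedom)
        p -= lr * grad
    return p
```

With odometry links plus one loop-closure link, the optimizer spreads the accumulated error over the whole chain instead of leaving it at the loop point.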

91 Mapping the Allen Center

92 Rao-Blackwellised Particle Filters

93 Rao-Blackwellized Mapping Compute a posterior over the map and possible trajectories of the robot: p(x_{1:t}, m | z_{1:t}, u_{1:t}) = p(m | x_{1:t}, z_{1:t}) · p(x_{1:t} | z_{1:t}, u_{1:t}) (map given trajectory, times trajectory given measurements and robot motion).

94 FastSLAM Each particle #1 … #M carries a robot pose (x, y, θ) and N independent 2 x 2 Kalman filters, one per landmark (Landmark 1 … Landmark N). [Begin courtesy of Mike Montemerlo]

95 FastSLAM – Simulation Up to 100,000 landmarks. 100 particles. 10³ times fewer parameters than EKF SLAM. Blue line = true robot path, red line = estimated robot path, black dashed line = odometry.

96 Victoria Park Results 4 km traverse 100 particles Uses negative evidence to remove spurious landmarks Blue path = odometry Red path = estimated path [End courtesy of Mike Montemerlo]

97 Motion Model for Scan Matching (raw odometry vs. scan matching)

98 Rao-Blackwellized Mapping with Scan-Matching Map: Intel Research Lab, Seattle (loop closure).

99 Rao-Blackwellized Mapping with Scan-Matching Map: Intel Research Lab, Seattle (loop closure).

100 Rao-Blackwellized Mapping with Scan-Matching Map: Intel Research Lab, Seattle.

101 Example (Intel Lab)  15 particles  Four times faster than real-time on a P4, 2.8 GHz  5 cm resolution during scan matching  1 cm resolution in final map Joint work with Giorgio Grisetti

102 Outdoor Campus Map  30 particles  250 x 250 m²  1.75 km (odometry)  20 cm resolution during scan matching  30 cm resolution in final map Joint work with Giorgio Grisetti

103 DP-SLAM [Eliazar & Parr] Runs at real-time speed on a 2.4 GHz Pentium 4, with the robot moving at 10 cm/s; scale: 3 cm.

104 Consistency

105 Results obtained with DP-SLAM 2.0 (offline) Eliazar & Parr, 04

106 Close up End courtesy of Eliazar & Parr

107 Fast-SLAM Summary Full and online version of SLAM Factorizes posterior into robot trajectories (particles) and map (EKFs). Landmark locations are independent! More efficient proposal distribution through Kalman filter prediction Data association per particle

108 Ball Tracking in RoboCup  Extremely noisy (nonlinear) motion of observer  Inaccurate sensing, limited processing power  Interactions between target and environment  Interactions between robot(s) and target Goal: Unified framework for modeling the ball and its interactions.

109 Tracking Techniques  Kalman Filter  Highly efficient, robust (even for nonlinear)  Uni-modal, limited handling of nonlinearities  Particle Filter  Less efficient, highly robust  Multi-modal, nonlinear, non-Gaussian  Rao-Blackwellised Particle Filter, MHT  Combines PF with KF  Multi-modal, highly efficient

110 Dynamic Bayes Network for Ball Tracking Nodes at each time step k: b_k = ball location and velocity, m_k = ball motion mode, r_k = map and robot location, u_k = robot control, z_k^l = landmark detection, z_k^b = ball observation. The upper layer performs ball tracking, the lower layer robot localization.

111 Robot Location (DBN of slide 110, highlighting the map and robot location nodes r_k)

112 Robot and Ball Location (and velocity) (DBN of slide 110, highlighting the r_k and b_k nodes)

113 Ball-Environment Interactions Modes: None, Grabbed, Bounced, Kicked, Deflected.

114 Ball-Environment Interactions Transitions between the modes None, Grabbed, Kicked, Bounced, and Deflected; labeled transitions include: within grab range and robot grabs (prob. from model), robot loses grab (0.2), robot kicks ball (0.9), kick fails (0.1), collision with objects on map, deflected (residual prob.).

115 Integrating Discrete Ball Motion Mode (DBN of slide 110, highlighting the mode nodes m_k)

116 Grab Example (1) (DBN of slide 110)

117 Grab Example (2) (DBN of slide 110)

118 Inference: Posterior Estimation (DBN of slide 110)

119 Rao-Blackwellised PF for Inference  Represent posterior by random samples  Each sample contains robot location, ball mode, ball Kalman filter  Generate individual components of a particle stepwise using the factorization

120 Rao-Blackwellised Particle Filter for Inference  Draw a sample from the previous sample set.

121 Generate Robot Location

122 Generate Ball Motion Model

123 Update Ball Location and Velocity

124 Importance Resampling  Weight each sample by the likelihood of the landmark detection if the observation is a landmark detection, and by the likelihood of the ball observation if it is a ball detection.  Resample.
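
The resampling step is often implemented with the low-variance (systematic) sampler, sketched here: a single random offset selects M equally spaced points on the cumulative weight distribution.

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Systematic resampling: one random draw, M equally spaced selectors."""
    m = len(particles)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    # M equally spaced positions in [0, 1), shifted by a single random offset
    positions = (rng.random() + np.arange(m)) / m
    cum = np.cumsum(w)
    idx, i = [], 0
    for p in positions:
        while cum[i] < p:      # advance to the bin containing position p
            i += 1
        idx.append(i)
    return [particles[i] for i in idx]
```

Compared with independent draws, this keeps the sample set's coverage of the weight distribution tight and costs only O(M).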

125 Ball-Environment Interaction

126

127 Tracking and Finding the Ball  Cluster ball samples by discretizing pan / tilt angles  Uses negative information

128 Experiment: Real Robot  Robot kicks ball 100 times, tries to find it afterwards  Finds ball in 1.5 seconds on average

129 Simulation Runs (plots show the reference trajectory and the observations)

130 Comparison to KF* (optimized for straight motion) Plots show RBPF, KF*, reference, and observations.

131 Comparison to KF’ (inflated prediction noise) Plots show RBPF, KF’, reference, and observations.

132 Orientation Errors Orientation error [degrees] over time [sec], comparing RBPF and KF*.

133 Conclusions  Bayesian filters are the most successful technique in robotics (vision?)  Many instances (Kalman, particle, grid, MHT, RBPF, … )  Special case of dynamic Bayesian networks  Recently: hierarchical models

