Introduction to Robot Technology (Robot Teknolojisine Giriş). Yrd. Doç. Dr. Erkan Uslu, Doç. Dr. Sırma Yavuz, Doç. Dr. Fatih Amasyalı, Ar. Grv. Nihal Altuntaş, Ar. Grv. Furkan Çakmak
Dead (deduced) Reckoning
Estimating the entire trajectory of a moving robot. Dead-reckoning systems estimate position by integrating relative sensor measurements; these sensors can be accelerometers, gyroscopes, wheel encoders, infrared sensors, cameras, etc. Odometry is a subtopic of dead reckoning based on wheel displacement calculations.
Dead (deduced) Reckoning
- Kinematic Model: open-loop motion model; position update from given linear/angular velocities
- Wheel Odometry: position update from wheel encoders
- Laser Scan Matcher: position update by matching consecutive laser scans
- Visual Odometry: position update by matching consecutive images
- Inertial Odometry: position update from motion (accelerometer), rotation (gyroscope) and heading (magnetometer) sensors
- Combined Odometry: position update from combined odometry sources and sensors
Odometry Errors
Sources of error include different wheel diameters, bumps, carpet slip, and many more (figure: ideal case vs. disturbed trajectories).
Kinematic Model (Velocity Model)
$$R = \frac{L}{2}\cdot\frac{V_R + V_L}{V_R - V_L}, \qquad V = \frac{V_R + V_L}{2}, \qquad \omega = \frac{V_R - V_L}{L}$$
$$\begin{bmatrix} ICR_x \\ ICR_y \end{bmatrix} = \begin{bmatrix} x - R\sin\theta \\ y + R\cos\theta \end{bmatrix}$$
$$P' = \begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} = \begin{bmatrix} \cos\omega\Delta t & -\sin\omega\Delta t & 0 \\ \sin\omega\Delta t & \cos\omega\Delta t & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x - ICR_x \\ y - ICR_y \\ \theta \end{bmatrix} + \begin{bmatrix} ICR_x \\ ICR_y \\ \omega\Delta t \end{bmatrix}$$
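As a concrete illustration, here is a minimal Python sketch of this ICR-based pose update (a hypothetical helper, not from the course material); the near-zero $\omega$ guard handles straight-line motion, where the ICR is at infinity.

```python
import numpy as np

def velocity_model_update(x, y, theta, v_l, v_r, L, dt):
    """One ICR-based pose update for a differential-drive robot
    with wheel separation L, per the equations above."""
    v = (v_r + v_l) / 2.0          # linear velocity V
    omega = (v_r - v_l) / L        # angular velocity w
    if abs(omega) < 1e-9:          # straight motion: ICR at infinity
        return x + v * dt * np.cos(theta), y + v * dt * np.sin(theta), theta
    R = v / omega                  # signed turning radius
    icr_x, icr_y = x - R * np.sin(theta), y + R * np.cos(theta)
    dth = omega * dt
    x_new = np.cos(dth) * (x - icr_x) - np.sin(dth) * (y - icr_y) + icr_x
    y_new = np.sin(dth) * (x - icr_x) + np.cos(dth) * (y - icr_y) + icr_y
    return x_new, y_new, theta + dth
```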
Kinematic Model (Velocity Model)
$$x(t) = \int_0^t V(\tau)\cos\theta(\tau)\,d\tau \qquad y(t) = \int_0^t V(\tau)\sin\theta(\tau)\,d\tau \qquad \theta(t) = \int_0^t \omega(\tau)\,d\tau$$
Kinematic Model (Velocity Model)
$$x(t) = \frac{1}{2}\int_0^t \left[V_R(\tau) + V_L(\tau)\right]\cos\theta(\tau)\,d\tau$$
$$y(t) = \frac{1}{2}\int_0^t \left[V_R(\tau) + V_L(\tau)\right]\sin\theta(\tau)\,d\tau$$
$$\theta(t) = \frac{1}{L}\int_0^t \left[V_R(\tau) - V_L(\tau)\right]\,d\tau$$
Wheel Odometry
Odometry is obtained by integrating sensor readings from the wheel encoders. Pose estimation is carried out similarly to the kinematic model, except that the distances traveled by the wheels are computed from sensor readings rather than commanded velocities.
Wheel Odometry
If a robot starts from a position $p$ and the right and left wheels move respective distances $\Delta s_r$ and $\Delta s_l$, what is the resulting new position $p'$?
Wheel Odometry
Let's model the change in angle $\Delta\theta$ and the distance $\Delta s$ travelled by the robot (with $2L$ the distance between the wheels and $R$ the turning radius of the inner wheel):
$$\Delta s_l = R\alpha \qquad \Delta s_r = (R + 2L)\alpha \qquad \Delta s = (R + L)\alpha \qquad \Delta s = \frac{\Delta s_r + \Delta s_l}{2}$$
Wheel Odometry
$$\Delta\theta = \alpha \qquad R = \frac{2L\,\Delta s_l}{\Delta s_r - \Delta s_l} \qquad \alpha = \frac{\Delta s_r - \Delta s_l}{2L}$$
$$\Rightarrow\quad \Delta\theta = \frac{\Delta s_r - \Delta s_l}{2L}$$
Wheel Odometry
The average orientation over the step is $\theta + \frac{\Delta\theta}{2}$:
$$\Delta x = \Delta d\,\cos\!\left(\theta + \frac{\Delta\theta}{2}\right) \qquad \Delta y = \Delta d\,\sin\!\left(\theta + \frac{\Delta\theta}{2}\right)$$
With $\Delta d \approx \Delta s$:
$$\Delta x = \Delta s\,\cos\!\left(\theta + \frac{\Delta\theta}{2}\right) \qquad \Delta y = \Delta s\,\sin\!\left(\theta + \frac{\Delta\theta}{2}\right)$$
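A short Python sketch of this encoder-based update, keeping the slides' convention that the wheel separation is $2L$ (the function name is my own):

```python
import numpy as np

def wheel_odometry_update(x, y, theta, ds_l, ds_r, L):
    """Pose update from wheel displacements ds_l, ds_r, evaluated at
    the midpoint orientation theta + dtheta/2 (wheel separation = 2L)."""
    ds = (ds_r + ds_l) / 2.0             # distance travelled by the center
    dtheta = (ds_r - ds_l) / (2.0 * L)   # heading change
    x += ds * np.cos(theta + dtheta / 2.0)
    y += ds * np.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```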
Laser Scan Matcher
Scan matching tries to incrementally align two scans, or a map to a scan, without revising the past/map. Types:
- Iterative Closest Point (ICP)
- Scan-to-scan
- Scan-to-map
- Map-to-map
- Feature based
Laser Scan Matcher
ICP (iterative closest point) is a method that iteratively computes the roto-translation $q = (t, \theta)$ (rotation $R(\theta)$ and translation $t$) that carries a given point set $\{p_i\}$ onto a reference surface $S_{ref}$. The roto-translation $q$ applied to a point is defined as $p \oplus q = p \oplus (t, \theta) \triangleq R(\theta)p + t$. ICP minimizes, over $q$, the distances between the transformed points $p_i \oplus q$ and their Euclidean projections onto $S_{ref}$.
Laser Scan Matcher
The ICP objective to be minimized is
$$\min_q \sum_i \left\| p_i \oplus q - \Pi(S_{ref},\, p_i \oplus q) \right\|^2$$
where $\Pi(S_{ref}, p)$ denotes the Euclidean projection onto $S_{ref}$. Starting from an initial transform $q_0$, the iterative ICP objective is
$$\min_{q_{k+1}} \sum_i \left\| p_i \oplus q_{k+1} - \Pi(S_{ref},\, p_i \oplus q_k) \right\|^2$$
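Below is a compact point-to-point ICP sketch in Python (all names hypothetical). It stands in for the general formulation above by using the nearest reference point as the Euclidean projection $\Pi(S_{ref}, \cdot)$, and solves each least-squares step with the SVD (Kabsch) method:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t mapping point set P onto correspondences Q
    (Kabsch / SVD solution of min sum ||R p + t - q||^2)."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
    R = Vt.T @ D @ U.T
    return R, mu_q - R @ mu_p

def icp(P, ref, iters=30):
    """Align 2-D point set P (N x 2) to reference points ref (M x 2),
    using the nearest reference point as the projection onto S_ref."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - ref[None, :, :], axis=2)
        Q = ref[d.argmin(axis=1)]           # correspondences Pi(S_ref, p_i + q_k)
        R, t = best_rigid_transform(P, Q)   # incremental roto-translation
        P = P @ R.T + t                     # apply the increment to the scan
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```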
Visual Odometry
A camera (or an array of cameras) is rigidly attached to the robot, and a 6-DOF trajectory is constructed from the video stream coming from the camera(s).
- Using just one camera: Monocular Visual Odometry
- Using two (or more) cameras: Stereo Visual Odometry
In monocular VO, the trajectory can only be estimated up to a scale factor; in stereo VO, the exact (metric) trajectory can be recovered. If the distance to the observed objects is much larger than the stereo baseline, stereo VO degenerates to monocular VO.
Visual Odometry – Stereo VO – Algorithm
Input: $I_\ell^t, I_r^t, I_\ell^{t+1}, I_r^{t+1}$
1. $Rectify(I_\ell^t, I_r^t, I_\ell^{t+1}, I_r^{t+1})$
2. $D^t = DisparityMap(I_\ell^t, I_r^t)$, $D^{t+1} = DisparityMap(I_\ell^{t+1}, I_r^{t+1})$
3. $F^t = FeatureDetectMatch(I_\ell^t)$, $F^{t+1} = FeatureDetectMatch(I_\ell^{t+1})$
4. $W^t = 3DPose(D^t, F^t)$, $W^{t+1} = 3DPose(D^{t+1}, F^{t+1})$
5. $S = SubsetSelect(W^t, W^{t+1})$
6. $(R, t) = EstimateRt(S)$
Output: $R, t$
Visual Odometry – Stereo VO – Rectification
Two cameras are used:
- Intrinsic calibration (lens distortion, imaging plane orientation)
- Extrinsic calibration (position and orientation of the two cameras with respect to a global frame)
Visual Odometry – Stereo VO – Disparity Map
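The slide shows a disparity-map example; below is a hedged OpenCV sketch of how such a map is typically computed from a rectified stereo pair (the image file names are placeholders):

```python
import cv2

# Block-matching disparity on a rectified stereo pair.
# numDisparities must be a multiple of 16; blockSize must be odd.
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point disparities (scaled x16)
```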
Visual Odometry – Stereo VO – Feature Detection
Features from Accelerated Segment Test (FAST): a corner point has either of the following on its 16-pixel neighborhood circle (see the sketch below):
- a set of N contiguous pixels that are all brighter than (center + threshold), or
- a set of N contiguous pixels that are all darker than (center − threshold)
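As an illustration (not part of the original slide), OpenCV's FAST detector implements this segment test directly; the threshold is the brightness margin around the center pixel:

```python
import cv2

# FAST corner detection on the left image (file name is a placeholder).
img = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)  # list of cv2.KeyPoint corner candidates
```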
Inertial Odometry
Data provided by an inertial measurement unit (IMU) is used to continuously infer the relative change in position, velocity and orientation with respect to a starting point.
- Gyroscope data gives angular velocity: $\theta_{t+1} = \theta_t + \omega_t\,dt$
- Accelerometer data gives linear acceleration (gravity included): $v_{t+1} = v_t + a_t\,dt$, $\quad p_{t+1} = p_t + v_t\,dt$
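A minimal Euler-integration sketch of these update equations (assuming, for illustration, that gravity has already been removed from the accelerometer reading, which in practice requires orientation estimation):

```python
def imu_step(p, v, theta, a, omega, dt):
    """One Euler step of inertial odometry. a: gravity-compensated
    linear acceleration, omega: gyroscope angular velocity."""
    theta_new = theta + omega * dt   # gyroscope integration
    v_new = v + a * dt               # accelerometer integration
    p_new = p + v * dt               # position uses the previous velocity
    return p_new, v_new, theta_new
```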
Inertial Odometry – Pitfalls
- Integration drift: small errors in the measurements of acceleration and angular velocity cause progressively larger errors in velocity, and greater still in position.
- Gravity: the dominating component in the accelerometer data is gravity; even slight errors in orientation make gravity 'leak' into the estimates.
- The sequential nature of the problem makes the errors accumulate.
Modeling Uncertainty in Motion
$$\Delta x = \Delta s\,\cos\!\left(\theta + \frac{\Delta\theta}{2}\right) \qquad \Delta y = \Delta s\,\sin\!\left(\theta + \frac{\Delta\theta}{2}\right)$$
Let $\Delta s = 1 + e_s$ and $\Delta\theta = 0 + e_\theta$, with $e_s$ and $e_\theta$ error terms (the intended step is $\Delta x = 1$, $\Delta y = 0$):
- $e_s = 0.001\,\mathrm{m}$ produces errors in the direction of motion
- $e_\theta = 2\,\mathrm{deg}$ results in $\Delta x = 0.9998$, $\Delta y \approx 0.0175$
Modelling $e_s$ and $e_\theta$ with a probability distribution, one can estimate the pose of the robot probabilistically.
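A quick numeric check of this error model, assuming (my assumption, for illustration) zero-mean Gaussian errors of the stated magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
ds = 1.0 + rng.normal(0.0, 0.001, n)          # e_s ~ N(0, (1 mm)^2)
dtheta = rng.normal(0.0, np.deg2rad(2.0), n)  # e_theta ~ N(0, (2 deg)^2)
dx = ds * np.cos(dtheta / 2.0)                # step along the heading
dy = ds * np.sin(dtheta / 2.0)                # lateral leakage from e_theta
print(dx.std(), dy.std())  # lateral spread dominates: ~0.017 vs ~0.001
```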
Preliminaries for the Kalman Filter and the Extended Kalman Filter
Gauss (Normal) Distribution
The Gauss distribution is described by $\mathcal{N}(\mu, \sigma^2)$, where $\mu$ is the mean and $\sigma$ is the standard deviation; $\sigma^2$ is called the variance.
$$p(x\,|\,\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
(figure: the density, with 34.1% of the probability mass between $\mu$ and $\mu + \sigma$)
Estimating a Value
Suppose there is a constant value $x$ (e.g., the distance to a wall), and two different sensors measure it: the first observes $z_1$ with variance $\sigma_1^2$, the second observes $z_2$ with variance $\sigma_2^2$. Can we merge these two pieces of evidence?
Estimating a Value
$$p_3(x\,|\,z_1, z_2) \sim \mathcal{N}\!\left(\frac{\sigma_2^2 z_1 + \sigma_1^2 z_2}{\sigma_1^2 + \sigma_2^2},\; \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)$$
(figure: the fused density $p_3$ is narrower than either $p_1(x|z_1)$ or $p_2(x|z_2)$)
Multivariate Gauss Distribution
The multivariate ($d$-variable) Gauss distribution is described by $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where $\boldsymbol{\mu}$ is the mean vector and $\boldsymbol{\Sigma}$ is the covariance matrix:
$$p(\boldsymbol{x}) = \frac{1}{(2\pi)^{d/2}\,|\boldsymbol{\Sigma}|^{1/2}}\; e^{-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x}-\boldsymbol{\mu})}$$
Expectations
For a continuous random variable $x$ the expected value is $E[x] = \int x\,p(x)\,dx$; for a discrete one, $E[x] = \sum x\,p(x)$.
The expected value of a vector $\boldsymbol{x}$ is taken componentwise: $E[\boldsymbol{x}] = \left(E[x_1], E[x_2], \ldots, E[x_d]\right)^T$.
For a uniform (equally weighted) sample, $E[x] = \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$.
Variance and Covariance
The variance of a random variable $x$ is $\sigma^2 = E\left[(x - E[x])^2\right]$; for a uniform (equally weighted) sample, $E\left[(x - E[x])^2\right] = \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2$.
The covariance matrix is $\boldsymbol{\Sigma} = E\left[(\boldsymbol{x} - E[\boldsymbol{x}])(\boldsymbol{x} - E[\boldsymbol{x}])^T\right]$.
Independent Variation
Generate 100 sample pairs of Gaussian random variables $x$ and $y$ with $\sigma_x = 1$ and $\sigma_y = 3$. Since $x$ and $y$ are independent, the sample covariance matrix $C_{xy}$ is approximately $\mathrm{diag}(1, 9)$, with near-zero off-diagonal entries.
Dependent Variation
Let $c$ and $d$ be random variables generated by $c = x + y$ and $d = x - y$. The sample covariance matrix now has a strong off-diagonal term, $\mathrm{Cov}(c, d) \approx -7.93$ here (theoretically $\sigma_x^2 - \sigma_y^2 = -8$); a numeric check follows below.
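A NumPy check of both slides (the exact sample values vary with the random seed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100)   # sigma_x = 1
y = rng.normal(0.0, 3.0, 100)   # sigma_y = 3

# Independent variation: off-diagonal terms near zero
print(np.cov(x, y))             # approx [[1, 0], [0, 9]]

# Dependent variation: Cov(x+y, x-y) = Var(x) - Var(y) = -8
print(np.cov(x + y, x - y))     # approx [[10, -8], [-8, 10]]
```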
Kalman Filter
Stochastic Models of an Uncertain World
Actions $u$ are uncertain and observations are uncertain. With noise terms $\varepsilon_i \sim \mathcal{N}(0, \sigma_i)$, the ideal models
$$x_t = F(x_{t-1}, u_t) \qquad z_t = G(x_t)$$
become
$$x_t = F(x_{t-1}, u_t, \varepsilon_1) \qquad z_t = G(x_t, \varepsilon_2)$$
The state $x$ is unobservable; the sense vector $z$ provides noisy information about $x$.
Kalman Filter
The Kalman filter estimates the state $x_t$ from the models $p(z_t\,|\,x_t)$ and $p(x_t\,|\,u_t, x_{t-1})$. It requires the following (a minimal one-dimensional sketch follows this list):
- The next-state probability $p(x_t\,|\,u_t, x_{t-1})$ must be a linear function of its arguments with additive noise: $x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$
- The measurement probability $p(z_t\,|\,x_t)$ must be linear in its arguments with additive noise: $z_t = C_t x_t + \delta_t$
- The initial belief over $x_0$ must be a normal distribution
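A minimal 1-D predict/update cycle satisfying these requirements (all parameter values are illustrative):

```python
def kalman_1d(mu, var, u, z, a=1.0, b=1.0, c=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter with
    x_t = a x_{t-1} + b u_t + eps and z_t = c x_t + delta;
    q, r are the process/measurement noise variances."""
    # Prediction (ACT): propagate the belief through the motion model
    mu_pred = a * mu + b * u
    var_pred = a * a * var + q
    # Update (SEE): correct with the measurement via the Kalman gain
    k = var_pred * c / (c * c * var_pred + r)
    mu_new = mu_pred + k * (z - c * mu_pred)
    var_new = (1.0 - k * c) * var_pred
    return mu_new, var_new
```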
Kalman Filter
- Prediction (ACT): based on the previous estimate and odometry
- Observation (SEE): with on-board sensors
- Measurement prediction: based on the prediction and the map
- Matching: of the observation and the measurement prediction
- Estimation: position update
Kalman Filter – ACT – using the motion model and its uncertainties
Kalman Filter – SEE – estimation of position based on perception and map
EKF The assumptions of linear state transitions and linear measurements with added Gaussian noise are rarely fulfilled in practice For example, a robot that moves with constant translational and rotational velocity typically moves on a circular trajectory, which cannot be described by linear next state transitions Extended Kalman filter (EKF) overcomes the linearity assumption
EKF
The next-state and measurement probabilities are governed by nonlinear functions $g$ and $h$:
$$x_t = g(x_{t-1}, u_t) + \varepsilon_t \qquad z_t = h(x_t) + \delta_t$$
The extended Kalman filter (EKF) calculates an approximation to the true belief by linearization via Taylor expansion.
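For instance, linearizing a simple unicycle motion model (my illustrative choice of $g$, not necessarily the course's) gives the Jacobian used to propagate the covariance:

```python
import numpy as np

def motion_jacobian(x, u, dt):
    """Jacobian G = dg/dx of the unicycle motion model
    g(x, u) = [x + v dt cos(theta), y + v dt sin(theta), theta + w dt]."""
    _, _, theta = x
    v, _ = u
    return np.array([
        [1.0, 0.0, -v * dt * np.sin(theta)],
        [0.0, 1.0,  v * dt * np.cos(theta)],
        [0.0, 0.0,  1.0],
    ])

# EKF covariance prediction: Sigma' = G Sigma G^T + R_noise
```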
Non-Linear Transformation of a Gaussian Variable (figure)
EKF Approximation (figure)
Combined Odometry
Estimate the robot pose based on partial pose measurements from different sources. The robot_pose_ekf ROS package uses an extended Kalman filter with a 6D model to combine measurements from wheel odometry, an IMU sensor and visual odometry.
RPE – Laser Scan Matcher – No IMU (figure)
RPE – Laser Scan Matcher + IMU (figure)