
State Estimation of Underwater Robot Using Visual and Inertial Information
Florian Shkurti, Ioannis Rekleitis, Milena Scaccia and Gregory Dudek

Abstract: This paper discusses a vision- and inertial-based state estimation algorithm for an underwater robot. To estimate the 6-DOF pose of the vehicle, it combines information from: a) an IMU (linear accelerations and angular velocities); b) a pressure sensor (depth measurements); c) features from a monocular camera. Validation: experimental results are presented from field trials conducted in an underwater environment.

Literature Review:
Mourikis and Roumeliotis: vision-aided inertial navigation for estimating the 3D pose of a vehicle; their results demonstrate estimated trajectories with errors of 0.32% over a 3.2 km trajectory.
Jones and Soatto: their recent work reports errors of 0.5% over a 30 km trajectory, with very low IMU drift for the given trajectory length. Another significant aspect of this work is that it auto-calibrates the IMU-to-camera transformation and the gravity vector, both of which are sources of bias.
Corke et al.: used stereo visual odometry with Harris corner features, keeping only 10 to 50 features after outlier rejection for stereo reconstruction of the 3D trajectory.
Eustice et al.: used a sparse information matrix to combine visual and inertial data for mapping the surface of the RMS Titanic.
Other methods for pose estimation exist, such as Doppler- or beacon-based approaches; the work presented in this paper uses passive sensors, which are energy-efficient.

Algorithm:
Propagation: for each IMU measurement received, propagate the filter state and covariance.
Image registration: every time a new image is recorded, 1) augment the state and covariance matrix with a copy of the current camera pose estimate; 2) the image processing module begins operation.
Depth-sensor-based update: performed each time just before the current camera pose estimate is augmented to the state vector.
Vision-based update: when the feature measurements of a given image become available, perform an EKF update (only once an observed feature stops being tracked). A minimal sketch of this event loop is given below.
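The following is a minimal sketch of the event loop described above, not the authors' implementation; all names (ekf, propagate, depth_update, augment_camera_pose, tracker, vision_update) are hypothetical placeholders.

```python
def on_imu(ekf, gyro, accel, dt):
    # Propagation: each IMU measurement advances the filter state and covariance.
    ekf.propagate(gyro, accel, dt)

def on_image(ekf, image, latest_depth):
    # The depth-based update is applied just before the state is augmented
    # with a copy of the current camera pose estimate.
    ekf.depth_update(latest_depth)
    ekf.augment_camera_pose()
    # The image processing module tracks features across frames; an EKF
    # update for a feature is performed only once it stops being tracked.
    for track in ekf.tracker.update(image):
        if track.lost:
            ekf.vision_update(track)
```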

Overview of Algorithm:

EKF State Vector and Propagation: the filter state consists of the IMU state vector together with the past camera poses; orientation error is represented by an error quaternion. During propagation, the state vector at time tk is advanced using the gyroscope and accelerometer measurements.
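In the standard MSCKF formulation this work builds on (notation assumed from Mourikis and Roumeliotis, not taken from the slide), the IMU state, error quaternion, full state at time $t_k$, and IMU measurement models are:

```latex
\mathbf{x}_{IMU} = \begin{bmatrix} {}^{I}_{G}\bar{q}^{\top} & \mathbf{b}_g^{\top} & {}^{G}\mathbf{v}_I^{\top} & \mathbf{b}_a^{\top} & {}^{G}\mathbf{p}_I^{\top} \end{bmatrix}^{\top},
\qquad
\delta\bar{q} \simeq \begin{bmatrix} \tfrac{1}{2}\,\delta\boldsymbol{\theta}^{\top} & 1 \end{bmatrix}^{\top}

\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_{IMU}^{\top} & {}^{C_1}_{G}\bar{q}^{\top} & {}^{G}\mathbf{p}_{C_1}^{\top} & \cdots & {}^{C_N}_{G}\bar{q}^{\top} & {}^{G}\mathbf{p}_{C_N}^{\top} \end{bmatrix}^{\top}

\boldsymbol{\omega}_m = \boldsymbol{\omega} + \mathbf{b}_g + \mathbf{n}_g,
\qquad
\mathbf{a}_m = \mathbf{C}\!\left({}^{I}_{G}\bar{q}\right)\left({}^{G}\mathbf{a} - {}^{G}\mathbf{g}\right) + \mathbf{b}_a + \mathbf{n}_a
```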

Estimates of linear acceleration and angular velocity: the IMU state evolves in continuous time, so to use it at discrete time steps the state equations are discretized with a 4th-order Runge-Kutta integrator; the quaternion is propagated with a zero-order quaternion integrator. An illustrative RK4 step is sketched below.
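As an illustration only (a minimal sketch assuming a generic state-derivative function f, not the authors' code), one 4th-order Runge-Kutta step looks like:

```python
def rk4_step(f, x, u, dt):
    """One 4th-order Runge-Kutta step for dx/dt = f(x, u).

    f  : hypothetical IMU state-derivative function
    x  : state vector (the quaternion part would be propagated
         separately by the zero-order quaternion integrator)
    u  : IMU input (estimated angular velocity and linear acceleration)
    dt : time step between IMU samples
    """
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```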

3D Feature Position Estimation:
State augmentation: at every new image, a camera pose estimate is computed from the IMU pose estimate; this camera pose estimate is appended to the state vector, and the covariance matrix is augmented accordingly.
3D feature position estimation: a feature f is tracked in n consecutive frames C1, C2, ..., Cn, and its position is expressed with respect to each camera frame Ci. The measurement model considers the case of a single feature f observed from the set of n camera poses; each of the n observations of the feature is described by a projection model (the assumed standard form is given below).
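In the standard formulation (notation assumed, not taken from the slide), each observation is the ideal pinhole projection of the feature position in the corresponding camera frame, corrupted by noise:

```latex
\mathbf{z}_i = \frac{1}{{}^{C_i}Z}\begin{bmatrix} {}^{C_i}X \\ {}^{C_i}Y \end{bmatrix} + \mathbf{n}_i,
\qquad
{}^{C_i}\mathbf{p}_f = \begin{bmatrix} {}^{C_i}X & {}^{C_i}Y & {}^{C_i}Z \end{bmatrix}^{\top},
\qquad i = 1, \dots, n
```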

If feature f first appears in camera frame C1, then its position in frame Ci can be written in terms of its position in C1, using the rotation and translation between the two camera frames. Substituting this expression into the measurement model makes each observation a function of the camera poses and of the feature's parameters in C1 (see the assumed form below).
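A hedged reconstruction of this step, assuming the inverse-depth parameters $\alpha$, $\beta$, $\rho$ of the MSCKF formulation:

```latex
{}^{C_i}\mathbf{p}_f
= \mathbf{C}\!\left({}^{C_i}_{C_1}\bar{q}\right) {}^{C_1}\mathbf{p}_f + {}^{C_i}\mathbf{p}_{C_1}
= {}^{C_1}Z \left( \mathbf{C}\!\left({}^{C_i}_{C_1}\bar{q}\right)
\begin{bmatrix} \alpha \\ \beta \\ 1 \end{bmatrix}
+ \rho\, {}^{C_i}\mathbf{p}_{C_1} \right),
\qquad
\alpha = \frac{{}^{C_1}X}{{}^{C_1}Z},\;\;
\beta = \frac{{}^{C_1}Y}{{}^{C_1}Z},\;\;
\rho = \frac{1}{{}^{C_1}Z}
```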

Given the measurements and the estimates of the camera poses in the state vector, estimates of these feature parameters are obtained by nonlinear least-squares minimization; the global feature position is then computed from them.
Vision-Based Update: with the estimated feature position we can predict each measurement, while the actual measurement is obtained by tracking the feature; the residual is the difference between the two. Since this residual is nonlinear in the state, it is linearized by taking its Taylor expansion. Stacking the linearized residuals of all frames in which feature f was observed gives a single residual for f (the assumed standard form follows).
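In standard notation (assumed; U_f matches the slide's name for the stacked feature Jacobian), the per-frame and stacked linearized residuals are:

```latex
\mathbf{r}_i = \mathbf{z}_i - \hat{\mathbf{z}}_i
\simeq \mathbf{H}_{X_i}\,\widetilde{\mathbf{x}} + \mathbf{U}_{f_i}\,{}^{G}\widetilde{\mathbf{p}}_f + \mathbf{n}_i,
\qquad
\mathbf{r}_f \simeq \mathbf{H}_X\,\widetilde{\mathbf{x}} + \mathbf{U}_f\,{}^{G}\widetilde{\mathbf{p}}_f + \mathbf{n}_f
```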

As the obtained residual is not in the form used by the EKF (it depends on the feature position error), it is multiplied by the transpose of a matrix A, where A is an orthonormal basis for the left null space of the matrix Uf, i.e., Transpose(A)*Uf = 0. The residual therefore becomes independent of the feature position error, and the projected residuals of all features are stacked (a sketch of this projection is given below).
Depth Sensor Update: the pressure sensor measures the hydrostatic pressure, with the surface of the sea taken as the reference point. Pressure measurements are converted into depth measurements in a linear fashion. The measurement model at time k relates the measured depth to the state; the difference between the initial depth and the current depth is calculated, the EKF residual for depth is formed from it, and the covariance matrix of the uncertainty in the residual is computed.
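A minimal numpy sketch of the null-space projection (illustrative only; the function name is hypothetical, and U_f stands for the stacked feature Jacobian in the slide's notation):

```python
import numpy as np

def nullspace_project(r_f, H_X, U_f):
    """Project the stacked residual onto the left null space of U_f.

    r_f : stacked residual for feature f
    H_X : stacked Jacobian with respect to the filter state
    U_f : stacked Jacobian with respect to the feature position
    """
    # Columns of U beyond rank(U_f) span the left null space: A.T @ U_f = 0.
    U, _, _ = np.linalg.svd(U_f, full_matrices=True)
    A = U[:, U_f.shape[1]:]
    r_o = A.T @ r_f      # residual in the form the EKF can use
    H_o = A.T @ H_X      # corresponding state Jacobian
    return r_o, H_o
```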

EKF Update: the Kalman gain is computed and used to correct the state vector and to update the covariance matrix; the quaternion part of the state receives a multiplicative correction (a standard-form summary is given below).
Experimental Results: they collected underwater datasets with ground truth from the waters of the island of Barbados, and performed offline experiments to test their implementation. Two of them are presented here. Images in both datasets were recorded from the back camera at 15 Hz with resolution 870 x 520. The IMU data comes from a MicroStrain 3DM-GX1 MEMS unit, sampled at 50 Hz.
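For reference, the correction step summarized at the start of this slide takes the standard EKF form (assumed here, not transcribed from the paper):

```latex
\mathbf{K} = \mathbf{P}\mathbf{H}^{\top}\left(\mathbf{H}\mathbf{P}\mathbf{H}^{\top} + \mathbf{R}\right)^{-1},
\qquad
\Delta\mathbf{x} = \mathbf{K}\,\mathbf{r},
\qquad
\mathbf{P} \leftarrow \left(\mathbf{I} - \mathbf{K}\mathbf{H}\right)\mathbf{P}\left(\mathbf{I} - \mathbf{K}\mathbf{H}\right)^{\top} + \mathbf{K}\mathbf{R}\mathbf{K}^{\top}
```

with the quaternion corrected multiplicatively as $\hat{\bar{q}} \leftarrow \delta\bar{q} \otimes \hat{\bar{q}}$.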

The first dataset, depicted in the figure below, features a straight 30-meter-long trajectory, where the robot moves forward at approximately 0.2 meters/sec while preserving its depth. The sea bottom is mostly flat, and the robot moves about 2 meters above it. A white 30-meter-long tape was placed on the bottom, both to provide ground truth for the distance travelled and to facilitate the robot's guided straight-line trajectory. The second dataset corresponds to an experiment that took place at the same site and under the same conditions; the shape of the trajectory was a closed loop, as shown in the figure above. The total length was approximately 33 meters.

The reconstruction of the straight-line trajectory was very accurate, with only 1 meter of error over the 30-meter-long trajectory. In the case of the loop trajectory, the loop closure was approached within 3 meters, mainly due to errors in yaw, which are problematic for the gyroscope sensors in low-cost IMUs. Another factor that contributed to the loop results was the presence of motion blur in both datasets, which makes feature matching susceptible to errors. The inclusion of the depth sensor measurements improved the estimates of the position's z-axis, provided that the depth-based updates were given more confidence than the accelerometer's propagations.