Presentation transcript:

Visual Odometry for Ground Vehicle Applications
David Nistér, Oleg Naroditsky, and James Bergen
Sarnoff Corporation, CN5300, Princeton, New Jersey 08530

Basic idea
- Estimation of motion based on video input alone
- No prior knowledge of scene or motion
- Real-time operation with low delay
- Front end: feature tracker
- Point features matched between pairs of frames
- Can be used in conjunction with other sensors

Introduction
- Effective use of video sensors has been a goal for many years
- Recent advances have made real-time vision processing practical
- Speed and latency constraints

Related work
- Moravec's work in the 1970s
- Stereo visual odometry used on the Mars rovers in early 2004
- Camera motion estimation based on feature tracks
- Most closely related to Davison and to Chiuso, Favaro & Soatto

Results obtained using this system
Left: single camera. Right: stereo pair.

Feature Detection

Finding corner strength

Feature Detection: Four Sweeps to Calculate Corner Strength
- Compute the gradients Ix and Iy by filtering the image with derivative filters in x and y.
- Calculate the horizontal sums of Ix^2, Iy^2, and Ix*Iy with a box filter.
- Calculate the vertical sums with a box filter, giving the entries of the local structure matrix A.
- Calculate the corner strength s = det(A) - k (trace A)^2.
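A minimal NumPy sketch of this kind of four-sweep, Harris-style corner-strength computation is given below; the derivative kernel, the box-filter window size, and the constant k = 0.06 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve1d

def corner_strength(img, window=5, k=0.06):
    """Harris-style corner strength computed in separable sweeps over the image."""
    img = img.astype(np.float64)
    # Sweep 1: x and y derivatives with a simple central-difference filter (assumed kernel).
    d = np.array([-1.0, 0.0, 1.0])
    ix = convolve1d(img, d, axis=1)
    iy = convolve1d(img, d, axis=0)
    # Products that form the entries of the local structure matrix.
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    # Sweeps 2 and 3: horizontal, then vertical, box-filter sums.
    box = np.ones(window)
    sums = []
    for p in (ixx, iyy, ixy):
        h = convolve1d(p, box, axis=1)           # horizontal sum
        sums.append(convolve1d(h, box, axis=0))  # vertical sum
    sxx, syy, sxy = sums
    # Sweep 4: corner strength s = det(A) - k * trace(A)^2.
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```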

Selecting corner points

Feature Detection
Figure: detected feature points, with feature tracks superimposed through the images.

Feature Matching: Two-Directional Matching
- Calculate the normalized correlation between image patches in a region around each feature, where the patches come from two consecutive input images.
- Match feature points within a circular search area, accepting only pairs that give the maximum correlation in both directions (mutual consistency).
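A small sketch of two-directional, mutually consistent normalized-correlation matching, assuming features have already been detected and fixed-size patches extracted around them; the search radius and patch handling here are illustrative, not the paper's exact settings.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def mutual_matches(patches1, pts1, patches2, pts2, radius=50.0):
    """Match features in both directions and keep only mutually best pairs."""
    n1, n2 = len(pts1), len(pts2)
    score = np.full((n1, n2), -np.inf)
    for i in range(n1):
        for j in range(n2):
            # Only consider candidates inside the circular search area.
            if np.linalg.norm(pts1[i] - pts2[j]) <= radius:
                score[i, j] = ncc(patches1[i], patches2[j])
    best12 = score.argmax(axis=1)  # best match in image 2 for each feature in image 1
    best21 = score.argmax(axis=0)  # best match in image 1 for each feature in image 2
    return [(i, int(best12[i])) for i in range(n1)
            if np.isfinite(score[i, best12[i]]) and best21[best12[i]] == i]
```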

Robust Estimation: The Monocular Scheme
- Separate the matched points into groups of 5 correspondences.
- Treat each group as a 5-point relative pose problem.
- Use RANSAC to select the well-matched groups (inliers).
- Estimate the camera motion using the selected groups.
- Put the current estimate into the coordinate frame of the previous one.
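A compact sketch of the monocular step using OpenCV's built-in five-point essential-matrix estimator with RANSAC (cv2.findEssentialMat / cv2.recoverPose) rather than the paper's own implementation; pts_prev, pts_curr, and K are assumed inputs, and the recovered translation is known only up to scale.

```python
import cv2
import numpy as np

def monocular_step(pts_prev, pts_curr, K):
    """Estimate relative camera motion between two frames from 2D point matches.

    pts_prev, pts_curr: Nx2 float arrays of matched image points.
    K: 3x3 camera intrinsic matrix.
    Returns (R, t, inlier_mask); t is defined only up to scale.
    """
    # Five-point relative pose hypotheses scored inside a RANSAC loop.
    E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Disambiguate the four decompositions of E and refine the inlier mask.
    _, R, t, mask = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
    # The returned (R, t) map points from the previous camera frame to the current
    # one; chaining these frame-to-frame estimates (with some choice of scale)
    # places each new pose in the coordinate frame of the previous estimate.
    return R, t, mask
```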

Robust Estimation: The Stereo Scheme
- Match the feature points between the stereo images, then triangulate them into 3D points.
- Estimate the camera motion using RANSAC and the 3D points tracked across consecutive frames.
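A rough sketch of the stereo step, swapping in OpenCV's triangulation and PnP-with-RANSAC routines in place of the paper's own 3D-to-2D pose estimator; the projection matrices, point arrays, and reprojection threshold are assumed inputs.

```python
import cv2
import numpy as np

def stereo_step(P_left, P_right, pts_l, pts_r, pts_l_next, K):
    """One stereo visual-odometry step (illustrative sketch).

    P_left, P_right: 3x4 projection matrices of the calibrated stereo pair.
    pts_l, pts_r:    Nx2 matched points in the left/right images of frame t.
    pts_l_next:      Nx2 positions of the same features in the left image of frame t+1.
    K:               3x3 intrinsics of the left camera.
    """
    # Triangulate stereo matches into 3D points (homogeneous -> Euclidean).
    X_h = cv2.triangulatePoints(P_left, P_right,
                                pts_l.T.astype(np.float64),
                                pts_r.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T

    # Robustly estimate the new camera pose from 3D-2D correspondences with RANSAC.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        X.astype(np.float64), pts_l_next.astype(np.float64), K, None,
        reprojectionError=2.0)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers
```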

Experiments: Different Platforms

Experiments: Speed and Accuracy

Experiments: Visual Odometry vs. Differential GPS

Experiments: Visual Odometry vs. Inertial Navigation System (INS)

Experiments: Visual Odometry vs. Wheel Recorder

Conclusion and Future Work

Conclusion
- A real-time ego-motion estimation system.
- Works with both a monocular camera and a stereo head.
- Results are accurate and robust.

Future Work
- Integrate visual odometry with a Kalman filter.
- Use sampling methods with multimodal distributions.