On the Analysis of the Depth Error on the Road Plane for Monocular Vision-Based Robot Navigation
Dezhen Song, CSE Dept., Texas A&M Univ.
Hyunnam Lee, Samsung Techwin Robot Business
Jingang Yi, MAE Dept., Rutgers Univ.

Introduction
[Figure: camera scenario illustration]

Structure from Motion (SfM)
Hartley and Zisserman, 2003; Huang and Netravali, 1994; Aggarwal and Nandhakumar, 1988
[Figure: two-view geometry with the baseline between camera centers]

Imperfect World

Reconstruction Uncertainty

Depth Error Couples with Motion! In a conventional approach, the robot moves to the second view position, takes the image, and then performs 3D reconstruction to obtain the obstacle distribution. However, once the motion is made, the second camera center is fixed and there is no control over depth uncertainty: the robot could hit an obstacle because the quality of the 3D reconstruction cannot be guaranteed. To decouple the two, we need a model that predicts depth error as a function of potential second-view configurations. It then becomes possible to actively reduce the depth uncertainty even before the actual 3D reconstruction.
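To make the decoupling idea concrete, here is a minimal sketch of such a predictive model, assuming the iso-oriented pinhole setup and uniform pixel correspondence error from the assumptions slide later in the talk; the function name and the disparity-based error model are illustrative, not the paper's exact formulation.

```python
def predicted_depth_error(z, baseline, focal_px, eps_px):
    """Predict the width of the depth-error interval at hypothesized depth z
    (meters) for a candidate second view displaced by `baseline` meters,
    given the focal length and a uniform pixel correspondence error, both
    in pixels. Illustrative model only."""
    d = focal_px * baseline / z          # ideal disparity of a point at depth z
    if d <= eps_px:                      # bound diverges: depth is unresolvable
        return float("inf")
    z_far = focal_px * baseline / (d - eps_px)   # deepest depth within +/- eps
    z_near = focal_px * baseline / (d + eps_px)  # shallowest depth within +/- eps
    return z_far - z_near                # depth-error range e_delta
```

Evaluating such a function over candidate second-view positions before moving is what lets the robot pick a view whose predicted error stays below a chosen threshold.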

Problem Definition
Given a threshold |eΔ| on depth uncertainty, predict the untrusted area Au where the depth error exceeds the threshold.

Related Work: Vision-Based Navigation
Vision-based navigation
- No Hands Across America (1995)
- The ARGO project (1998)
- Autonomous Motorcycle (2007)
Visual SLAM and visual odometry
- Mouragnon et al., 2006
- Chen et al., 2006
- Mortard et al., 2007; Lemaire and Lacroix, 2007
- Hayet et al., 2008

Structure from Motion
Reduce correspondence error
- Low-rank approximation, Tomasi and Kanade, 1992
- Power factorization, Hartley and Schaffalitzky, 2003
- Closure constraint, Guilbert and Bartoli, 2003
- Covariance-weighted data, Anandan and Irani, 2002
Other features
- Planar parallax, Irani et al., 2002; Bartoli and Sturm, 2003
- Probability of correspondence points, Dellaert et al., 2000

Related Work: Active Vision
Intelligent data acquisition process
Survey
- Bajcsy, 1988
- Aloimonos et al., 1987, 1990
Maximizing camera visibility
- Li and Liu, 2005
- Marchand and Chaumette, 1999
Minimizing 3D estimation error
- Chaumette et al., 1996
- Dunn and Olague, 2005
- Bakhtari and Benhabib

Related Work: Scene Construction Error Analysis
Estimation error
- Covariance matrix: Matthies and Shafer, 1987; Sun and Ramesh, 2001
- Standard deviation of depth: Kim et al., 2005
- Lower bound: Young and Chellappa, 1992; Daniilidis and Nagel, 1993
- Sensitivity map: Xiong and Matthies, 1997

Assumptions
- Static or slow-moving obstacles
- Known camera parameters
- Uniform pixel correspondence error
- Iso-oriented cameras

Pin-hole Camera Model
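The slide content is a figure; for reference, the standard textbook pinhole projection (not copied from the slides) maps a world point to a pixel through the intrinsics K and the pose [R | t]:

```latex
% Standard pinhole camera model: world point (X, Y, Z) projects to
% pixel (u, v); lambda is the projective scale factor.
\[
\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = K \,[\, R \mid t \,]
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{pmatrix}.
\]
```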

Computing Depth

Computing Depth
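These two slides derive depth in figures; under the iso-oriented camera assumption the computation reduces to the textbook two-view triangulation relation (a reconstruction, the slides' notation may differ):

```latex
% Depth from two iso-oriented views separated by baseline b;
% d = u_1 - u_2 is the disparity of the corresponding pixels.
\[
Z = \frac{f\, b}{d}, \qquad d = u_1 - u_2 .
\]
```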

Depth Error Range eΔ

Depth Error Range
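A sketch of how the uniform pixel correspondence error ε propagates to the depth-error range under the triangulation relation above; this is my reconstruction, not necessarily the paper's exact bound:

```latex
% Perturbing the disparity by +/- eps gives a depth interval whose
% width is the depth-error range e_Delta:
\[
e_\Delta
  = \frac{f\,b}{d - \varepsilon} - \frac{f\,b}{d + \varepsilon}
  = \frac{2\, f\, b\, \varepsilon}{d^{2} - \varepsilon^{2}}
  \;\approx\; \frac{2\, \varepsilon\, Z^{2}}{f\, b}
  \quad (\varepsilon \ll d).
\]
```

Depth uncertainty thus grows roughly quadratically with depth and shrinks with a longer baseline, which is exactly why the choice of second-view configuration matters.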

Predicting Untrusted Area Au

Predicting Untrusted Area Au
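A hedged sketch of how the untrusted-area prediction could be organized: grid the road plane ahead of the robot and mark every cell whose predicted depth error exceeds |eΔ|. It reuses predicted_depth_error from the earlier sketch and simplifies the geometry to depth-only dependence, which the paper's full road-plane analysis refines.

```python
import numpy as np

def predict_untrusted_area(baseline, focal_px, eps_px, e_threshold,
                           x_range=(-2.0, 2.0), z_range=(0.5, 6.0), step=0.05):
    """Mark road-plane cells whose predicted depth-error range exceeds the
    threshold |e_delta|; True cells form the untrusted area A_u.
    Illustrative only: assumes the error depends on depth alone."""
    xs = np.arange(x_range[0], x_range[1], step)   # lateral positions (m)
    zs = np.arange(z_range[0], z_range[1], step)   # depths ahead of camera (m)
    untrusted = np.zeros((zs.size, xs.size), dtype=bool)
    for i, z in enumerate(zs):
        err = predicted_depth_error(z, baseline, focal_px, eps_px)
        untrusted[i, :] = err > e_threshold        # same bound across the row
    return xs, zs, untrusted
```

A planner can then treat the True cells as obstacles-in-effect and either reroute or pick a second view whose untrusted area avoids the intended path.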

Experiments
[Figure: the robot and the camera used in the experiments]

Depth Error Range eΔ

Untrusted Area Au

Experiments: Application in Depth-Error-Aware Motion Planning
[Figure: planning scenario with labeled positions 1-3 and coordinates (20cm, 0cm), (-6cm, 0cm), (13cm, -50cm), (-20cm, -50cm)]

Experiments
[Plot: relative depth error vs. image resolution; data-point labels (22), (28), (33), (33), (27)]

Experiments
[Plot: results vs. depth of the obstacles; data-point labels (46), (33), (26)]

Conclusion and Future Work
- Choose camera perspectives
- Mixed-initiative planning
- Visual landmark selection in visual odometry
- Improve visual tracking performance for mobile robots