

1 On the Analysis of the Depth Error on the Road Plane for Monocular Vision-Based Robot Navigation
Dezhen Song CSE Dept., Texas A&M Univ. Hyunnam Lee Samsung Techwin Robot Business Jingang Yi MAE Dept., Rutgers Univ.

2 INTRODUCTION Camera Explain scenario.

3 Structure from Motion (SfM)
Hartley and Zisserman, 2003; Huang and Netravali, 1994; Aggarwal and Nandhakumar, 1988 Baseline

4 Imperfect World

5 Reconstruction Uncertainty

6 Depth Error Couples with Motion!
In the conventional approach, the robot moves to the second view position, takes an image, and then performs 3D reconstruction to obtain the obstacle distribution. However, once the motion is made, the second camera center is fixed and there is no control over the depth uncertainty: the robot could hit an obstacle because the 3D reconstruction quality cannot be guaranteed. To decouple the two, we need a model that predicts the depth error as a function of potential second-view configurations. It then becomes possible to actively reduce the depth uncertainty even before the actual 3D reconstruction.
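The decoupling idea can be sketched numerically. Under a simplified rectified two-view model (depth Z = f·b/d for baseline b, focal length f in pixels, disparity d) with a bounded disparity error of ±e pixels, a robot can rank candidate second-view baselines by their predicted depth uncertainty before moving. All numeric values below are illustrative assumptions, not values from the paper:

```python
def predicted_depth_error(z, b, f=500.0, e=2.0):
    """Predicted width of the depth-error range at depth z for baseline b.

    Simplified rectified model: disparity d = f*b/z, and a correspondence
    error of +/- e pixels bounds the reconstructed depth.
    Returns float('inf') when the error can drive the disparity to zero.
    """
    d = f * b / z
    if d - e <= 0:
        return float('inf')          # depth unbounded: view is useless here
    return f * b / (d - e) - f * b / (d + e)

# Rank candidate second-view baselines for an obstacle assumed at 5 m:
candidates = [0.05, 0.1, 0.2, 0.4]   # baselines in meters (assumed)
best = min(candidates, key=lambda b: predicted_depth_error(5.0, b))
```

The largest feasible baseline wins here, which matches the intuition that a longer baseline improves triangulation; the paper's full model additionally accounts for the geometry of the second-view configuration on the road plane.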

7 Problem Definition Given threshold |eΔ| for depth uncertainty,
predict the untrusted area Au where the depth error exceeds the threshold.

8 Related Work: Vision-Based Navigation
Vision-Based Navigation: No Hands Across America (1995); The ARGO Project (1998); Autonomous Motorcycle (2007). Visual SLAM and Visual Odometry: Mouragnon et al., 2006; Chen et al., 2006; Mortard et al., 2007; Lemaire and Lacroix, 2007; Hayet et al., 2008

9 Structure from Motion
Reduce correspondence error: Low-rank approximation, Tomasi and Kanade, 1992; Power factorization, Hartley and Schaffalitzky, 2003; Closure constraint, Guilbert and Bartoli, 2003; Covariance-weighted data, Anandan and Irani, 2002. Other features: Planar parallax, Irani et al., 2002, Bartoli and Sturm, 2003; Probability of correspondence points, Dellaert et al., 2000

10 Related Work: Active Vision
Intelligent data acquisition process. Surveys: Bajcsy, 1988; Aloimonos et al., 1987, 1990. Maximizing camera visibility: Li and Liu, 2005; Marchand and Chaumette, 1999. Minimizing 3D estimation error: Chaumette et al., 1996; Dunn and Olague, 2005; Bakhtari and Benhabib

11 Related Work: Scene Construction Error Analysis
Estimation error: covariance matrix, Matthies and Shafer, 1987, Sun and Ramesh, 2001; standard deviation of depth, Kim et al., 2005; lower bound, Young and Chellappa, 1992, Daniilidis and Nagel, 1993; sensitivity map, Xiong and Matthies, 1997

12 Assumptions Static or slow-moving obstacles Known camera parameters
Uniform pixel correspondence error Iso-oriented cameras

13 Pin-hole Camera Model
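A minimal pin-hole projection sketch (the focal length and principal point below are illustrative assumptions, not the paper's calibration):

```python
def project(X, Y, Z, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D camera-frame point (Z > 0) to pixel coordinates (u, v)."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return f * X / Z + cx, f * Y / Z + cy

u, v = project(1.0, 0.5, 5.0)   # a point 5 m ahead, 1 m right, 0.5 m up
```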

14 Computing Depth

15 Computing Depth
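For two iso-oriented cameras (per the assumptions slide) separated by a baseline b along the image x-axis, the depth of a point follows from the disparity between its two projections. A sketch with assumed numbers:

```python
def depth_from_disparity(u1, u2, f, b):
    """Depth Z = f*b/d, where d = u1 - u2 is the disparity in pixels."""
    d = u1 - u2
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return f * b / d

f, b = 500.0, 0.2            # focal length (px) and baseline (m) -- assumed
u1 = f * 1.0 / 5.0           # projection of a point at X=1 m, Z=5 m, first view
u2 = f * (1.0 - b) / 5.0     # same point seen from the translated second view
z = depth_from_disparity(u1, u2, f, b)   # recovers the 5.0 m depth
```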

16 Depth Error Range eΔ

17 Depth Error Range
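With the uniform pixel-correspondence error assumed earlier, a measured disparity d may be off by up to ±e pixels, which maps to an interval of possible depths; eΔ is the width of that interval. A sketch under the same simplified rectified model (all numbers are illustrative assumptions):

```python
def depth_error_range(d, f, b, e):
    """Return (z_min, z_max, e_delta) for disparity d with error +/- e pixels."""
    z_min = f * b / (d + e)
    if d - e <= 0:
        return z_min, float('inf'), float('inf')   # depth is unbounded above
    z_max = f * b / (d - e)
    return z_min, z_max, z_max - z_min

# Same 5 m point as before: doubling the baseline roughly halves e_delta.
_, _, err_small = depth_error_range(d=20.0, f=500.0, b=0.2, e=2.0)
_, _, err_large = depth_error_range(d=40.0, f=500.0, b=0.4, e=2.0)
```

Note the asymmetry of the interval: the same pixel error inflates the far bound more than the near bound, which is why depth error grows quickly for distant obstacles.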

18 Predicting Untrusted Area Au

19 Predicting Untrusted Area Au
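Given the threshold |eΔ|, the untrusted area Au can be predicted by sweeping candidate road-plane points and keeping those whose predicted depth-error range exceeds the threshold. In the simplified rectified model used in these sketches, disparity depends only on depth, so Au reduces to the region beyond a critical depth; the paper's full model also depends on the second-view configuration. All parameters below are assumed:

```python
def untrusted_area(points, f, b, e, e_max):
    """Return road-plane points whose predicted depth-error range exceeds e_max."""
    untrusted = []
    for (x, z) in points:
        d = f * b / z                       # predicted disparity at depth z
        z_min = f * b / (d + e)
        z_max = float('inf') if d - e <= 0 else f * b / (d - e)
        if z_max - z_min > e_max:
            untrusted.append((x, z))
    return untrusted

grid = [(x, z) for x in (-1.0, 0.0, 1.0) for z in (2.0, 4.0, 6.0)]
au = untrusted_area(grid, f=500.0, b=0.2, e=2.0, e_max=0.5)
```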

20 Experiments The robot and the camera used in experiments.

21 Depth Error Range eΔ

22 Untrusted Area Au

23 Experiments: Application in Depth-Error-Aware Motion Planning
(20cm, 0cm) (-6cm, 0cm) 2 3 1 (13cm, -50cm) (-20cm, -50cm)

24 Experiments: relative depth error as a function of image resolution (figure; numeric point labels omitted)

25 Experiments: results as a function of the depth of the obstacles (figure; numeric point labels omitted)

26 Conclusion and Future Work
Choose camera perspectives. Mixed-initiative planning. Visual landmark selection in visual odometry. Improve visual tracking performance for mobile robots.

