Motion estimation from image and inertial measurements. Dennis Strelow and Sanjiv Singh.


1 Motion estimation from image and inertial measurements Dennis Strelow and Sanjiv Singh

2 On the web Related materials: these slides, related papers, movies, and VRML models, at:

3 Introduction (1) micro air vehicle (MAV) navigation: AeroVironment Black Widow, AeroVironment Microbat

4 Introduction (2) Mars rover navigation: Mars Exploration Rovers (MER), Hyperion

5 Introduction (3) robotic search and rescue Rhex Center for Robot-Assisted Search and Rescue, U. of South Florida

6 Introduction (4) NASA ISS personal satellite assistant

7 Introduction (5) Each of these problems requires: 6 DOF motion; in unknown environments; without GPS or other absolute positioning; over the long term. …and some of the problems require: small, light, and cheap sensors

8 Introduction (6) Monocular, image-based motion estimation is a good candidate. In particular, simultaneous estimation of multiframe motion and sparse scene structure is the most promising approach

9 Outline Image-based motion estimation Improving estimation Improving feature tracking Reacquisition

10 Outline Image-based motion estimation refresher difficulties Improving estimation Improving feature tracking Reacquisition

11 Image-based motion estimation: refresher (1) A two-step process is typical… First, sparse feature tracking: Inputs: raw images. Outputs: projections

12 Image-based motion estimation: refresher (2)

13 Image-based motion estimation: refresher (3) Second, estimation: Input: projections from the tracker. Outputs: 6 DOF camera position at the time of each image; 3D position of each tracked point
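The estimation step minimizes the disagreement between the tracker's projections and the projections predicted by the current motion and structure estimates. A minimal sketch of that reprojection-error objective, assuming a simple pinhole model; function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def project(point3d, R, t, f=1.0):
    """Pinhole projection of a 3D world point through a camera with
    rotation R, translation t, and focal length f."""
    pc = R @ point3d + t            # point in camera coordinates
    return f * pc[:2] / pc[2]       # perspective division

def reprojection_error(points3d, poses, observations):
    """Sum of squared differences between tracked projections and the
    projections predicted by the current structure + motion estimate."""
    err = 0.0
    for (R, t), obs in zip(poses, observations):
        for X, uv in zip(points3d, obs):
            err += np.sum((project(X, R, t) - uv) ** 2)
    return err

# one camera at the origin, one point 4 units straight ahead
R, t = np.eye(3), np.zeros(3)
X = np.array([0.0, 0.0, 4.0])
uv = project(X, R, t)
print(reprojection_error([X], [(R, t)], [[uv]]))   # 0.0 for a perfect fit
```

Bundle adjustment, the Kalman-filter variants, and the other estimators on the following slides all minimize (or recursively approximate) an objective of this form over the poses and points.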

14 Image-based motion estimation: refresher (4)

15 Image-based motion estimation: refresher (5) Algorithms exist. For tracking: Lucas-Kanade (Lucas and Kanade, 1981)
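The core of Lucas-Kanade is a least-squares solve for the window's translation from the linearized brightness-constancy equation. A translation-only, single-step sketch on a synthetic pattern (real trackers iterate, usually over an image pyramid; names are illustrative):

```python
import numpy as np

def lucas_kanade_step(I, J, x, y, win=2):
    """One Gauss-Newton step of translation-only Lucas-Kanade: estimate
    the shift (u, v) of the window around (x, y) from image I to image J
    via the linearized brightness-constancy equation."""
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    Ix = np.gradient(I, axis=1)[ys, xs].ravel()   # spatial gradients
    Iy = np.gradient(I, axis=0)[ys, xs].ravel()
    It = (J - I)[ys, xs].ravel()                  # temporal difference
    A = np.stack([Ix, Iy], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)  # solve A [u v]^T = -It
    return uv

# synthetic check: a quadratic pattern shifted right by exactly 1 pixel
xx, yy = np.meshgrid(np.arange(16, dtype=float), np.arange(16, dtype=float))
I = (xx - 8) ** 2 + (yy - 8) ** 2
J = (xx - 9) ** 2 + (yy - 8) ** 2                 # same pattern, moved +1 in x
u, v = lucas_kanade_step(I, J, 8, 8)
print(round(u, 2), round(v, 2))                   # ≈ 1.0 0.0
```

One step recovers the shift exactly here because the pattern is quadratic; on real images the solve is repeated until the update is small.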

16 Image-based motion estimation: refresher (6) For estimation: SVD-based factorization (Tomasi and Kanade, 1992); bundle adjustment (various, 1950s); Kalman filtering (Broida and Chellappa, 1990); variable state dimension filter (McLauchlan, 1996)
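The SVD-based factorization exploits the fact that, under an affine (e.g. orthographic) camera, the row-centered matrix of all projections has rank 3, so one SVD separates it into motion and shape. A toy sketch with random affine cameras; the metric-upgrade step of the full Tomasi-Kanade method is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
P, F = 20, 5                             # tracked points, frames
S = rng.standard_normal((3, P))          # true 3D shape, points as columns
M = rng.standard_normal((2 * F, 3))      # stacked 2x3 affine camera rows

W = M @ S                                # 2F x P measurement (projection) matrix
W = W - W.mean(axis=1, keepdims=True)    # center rows to remove translation

U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * s[:3]                 # recovered motion (up to affine ambiguity)
S_hat = Vt[:3]                           # recovered shape
print(np.allclose(M_hat @ S_hat, W), s[3] < 1e-9)   # True True: W has rank 3
```

On real, noisy tracks the fourth and later singular values are small but nonzero, and truncating to rank 3 gives the least-squares motion and shape.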

17 Image-based motion estimation: difficulties (1) So, the problem is solved?

18 Image-based motion estimation: difficulties (2) If so, where are the automatic systems for estimating the motion of these platforms from images in unknown environments?

19 Image-based motion estimation: difficulties (3) …and for automatically modeling rooms, buildings, and cities from a handheld camera?

20 Image-based motion estimation: difficulties (4) The estimation step can be very sensitive to: incorrect or insufficient image feature tracking; camera modeling and calibration errors; outlier detection thresholds; sequences with degenerate camera motions

21 Image-based motion estimation: difficulties (5) …and for recursive methods in particular: poor prior assumptions on the motion; poor approximations in state error modeling

22 Image-based motion estimation: difficulties (6) 151 images, 23 points

23 Image-based motion estimation: difficulties (7)

24 Outline Image-based motion estimation Improving estimation overview image and inertial measurements Improving feature tracking Reacquisition

25 Improving estimation: overview

26 Improving estimation: overview

27 Improving estimation: image and inertial (1) Image and inertial measurements are highly complementary. Inertial measurements can: resolve the ambiguities in image-only estimates; establish the global scale

28 Improving estimation: image and inertial (2) Image measurements can: reduce the drift in integrating inertial measurements; distinguish between rotation, gravity, acceleration, bias, and noise in accelerometer readings

29 Improving estimation: image and inertial (3)

30 Improving estimation: image and inertial (4)

31 Improving estimation: image and inertial (5) Other examples: global scale typically within 5%; better convergence than image-only estimation

32 Improving estimation: image and inertial (6) Many more details in: Dennis Strelow and Sanjiv Singh. Motion estimation from image and inertial measurements. International Journal of Robotics Research, to appear.

33 Outline Image-based motion estimation Improving estimation Improving feature tracking Lucas-Kanade Lucas-Kanade and real sequences The “smalls” tracker Reacquisition

34 Improving feature tracking: Lucas-Kanade (1) Lucas-Kanade has been the go-to feature tracker for shape-from-motion: iteratively minimize the intensity matching error… …with respect to the feature’s position in the new image

35 Improving feature tracking: Lucas-Kanade (2) Additional heuristics used to apply Lucas-Kanade to shape-from-motion:
choose features to track: high image texture
detect mistracking or occlusion: convergence and matching error
handle large motions: image pyramid
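The "high image texture" heuristic is usually implemented as in Shi-Tomasi: a window is worth tracking when the smaller eigenvalue of its gradient covariance matrix is large, which rejects both flat regions and pure edges (the aperture problem). A minimal sketch with illustrative names:

```python
import numpy as np

def texturedness(I, x, y, win=2):
    """'High image texture' score: the smaller eigenvalue of the 2x2
    gradient covariance over the window around (x, y).  Corners score
    high; edges and flat regions score low."""
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    Ix = np.gradient(I, axis=1)[ys, xs].ravel()
    Iy = np.gradient(I, axis=0)[ys, xs].ravel()
    G = np.array([[Ix @ Ix, Ix @ Iy],
                  [Ix @ Iy, Iy @ Iy]])
    return np.linalg.eigvalsh(G)[0]     # smallest eigenvalue

I = np.zeros((16, 16))
I[8:, 8:] = 1.0                         # a step corner at (8, 8)
flat = texturedness(I, 4, 4)            # flat region: exactly 0
corner = texturedness(I, 8, 8)          # corner: both eigenvalues large
print(flat < corner)                    # True
```

Features are then chosen at local maxima of this score, subject to a minimum separation.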

36 Improving feature tracking: Lucas-Kanade (3) Lucas-Kanade advantages: fast; subpixel resolution; can handle some large motions well; uses general minimization, so easily extendible

37 Improving feature tracking: Lucas-Kanade (4) 0.1 average pixel reprojection error!

38 Improving feature tracking: Lucas-Kanade and real sequences (1) But Lucas-Kanade performs poorly on many real sequences…

39 Improving feature tracking: Lucas-Kanade and real sequences (2) …and image-based motion estimation can be sensitive to errors in feature tracking

40 Improving feature tracking: Lucas-Kanade and real sequences (3)

41 Improving feature tracking: Lucas-Kanade and real sequences (4)

42 Improving feature tracking: Lucas-Kanade and real sequences (5)

43 Improving feature tracking: Lucas-Kanade and real sequences (6) Why does Lucas-Kanade perform poorly on many real sequences? The heuristics are poor, and the features are tracked independently:
choose features to track: high image texture
detect mistracking or occlusion: convergence and matching error
handle large motions: image pyramid

44 Improving feature tracking: the “smalls” tracker (1) smalls is a new feature tracker for shape-from-motion and similar applications: eliminates the heuristics normally used with Lucas-Kanade; enforces the rigid scene constraint

45 Improving feature tracking: the “smalls” tracker (2) Leonard Smalls; tracker, manhunter

46 Improving feature tracking: the “smalls” tracker (3) Pipeline: epipolar geometry → 1-D matching along epipolar lines → geometric mistracking detection → feature death and birth → output (features) to 6 DOF estimation

47 Improving feature tracking: the “smalls” tracker (3) Pipeline, with SIFT features as input: SIFT features → epipolar geometry → 1-D matching along epipolar lines → geometric mistracking detection → feature death and birth → output (features) to 6 DOF estimation

48 Improving feature tracking: the “smalls” tracker (4) SIFT keypoints (Lowe, IJCV 2004): image interest points that can be extracted, to subpixel accuracy, despite large changes in viewpoint. A keypoint’s feature vectors in two images usually match

49 Improving feature tracking: the “smalls” tracker (5) Epipolar geometry between adjacent images is determined using… SIFT extraction and matching; two-frame bundle adjustment; RANSAC
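RANSAC here repeatedly fits the geometry to a minimal sample of SIFT matches and keeps the hypothesis with the most inliers. The skeleton below shows that loop on a deliberately simpler model, a 2D translation with a minimal sample of one match, since a full fundamental-matrix fit would obscure the structure; all names are illustrative:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=100, thresh=1.0, seed=0):
    """RANSAC skeleton: hypothesize a model from a minimal sample of
    matches, score it by inlier count, keep the best, refit on inliers."""
    rng = np.random.default_rng(seed)
    best_t = np.zeros(2)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))              # minimal sample: one match
        t_hyp = pts_b[i] - pts_a[i]
        resid = np.linalg.norm(pts_a + t_hyp - pts_b, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t_hyp, inliers
    # refit the model on all inliers of the best hypothesis
    refit = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return refit, best_inliers

rng = np.random.default_rng(1)
a = rng.uniform(0, 100, (30, 2))
b = a + np.array([5.0, -3.0])               # inlier matches: pure translation
b[:5] = rng.uniform(0, 100, (5, 2))         # corrupt 5 matches into outliers
t, inl = ransac_translation(a, b)
print(np.round(t, 1), inl.sum())            # translation recovered despite outliers
```

In the tracker the minimal sample is a set of matches, the model is the epipolar geometry refined by two-frame bundle adjustment, and the residual is distance to the epipolar line.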

50 Improving feature tracking: the “smalls” tracker (6) Search for new feature locations constrained to epipolar lines: 1. initial position from nearby SIFT matches 2. discrete SSD search (e.g., ±60 pixels) 3. 1-D Lucas-Kanade refines the match
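Step 2, the discrete SSD search, slides the feature's window along the constrained line and keeps the lowest-cost offset. A sketch assuming an axis-aligned search direction for simplicity (a real epipolar line is generally oblique and sampled with interpolation); names and the small search radius are illustrative:

```python
import numpy as np

def ssd_search_1d(I, J, x, y, direction, radius=10, win=2):
    """Discrete SSD search for the feature at (x, y) of image I, testing
    candidate positions in J only along a line through (x, y)."""
    patch = I[y - win:y + win + 1, x - win:x + win + 1]
    best_d, best_cost = 0, np.inf
    for d in range(-radius, radius + 1):
        cx, cy = x + d * direction[0], y + d * direction[1]
        cand = J[cy - win:cy + win + 1, cx - win:cx + win + 1]
        cost = np.sum((patch - cand) ** 2)        # sum of squared differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(2)
I = rng.uniform(0.0, 1.0, (32, 32))
J = np.roll(I, 3, axis=1)          # scene shifted 3 px along the search line
d = ssd_search_1d(I, J, 16, 16, (1, 0))
print(d)                            # 3
```

The winning discrete offset then seeds the 1-D Lucas-Kanade refinement of step 3, which recovers the subpixel position along the same line.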

51 Improving feature tracking: the “smalls” tracker (7) Mistracked or occluded features are detected using geometric consistency between triples of images: three-frame bundle adjustment and RANSAC

52 Improving feature tracking: the “smalls” tracker (8) After tracking in each image: features are pruned to maintain a minimum separation; new features are selected in those parts of the image not already covered
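The death-and-birth step can be sketched as a greedy minimum-separation filter: surviving tracked features are kept first, then candidates are admitted only where the image is not already covered. Names and the separation threshold are illustrative:

```python
import numpy as np

def prune_and_refill(tracked, candidates, min_sep=8.0):
    """Greedy minimum-separation filter: keep tracked features first,
    dropping any feature (old or new) that lands within min_sep pixels
    of one already kept, so new features fill only uncovered areas."""
    kept = []
    for f in list(tracked) + list(candidates):
        f = np.asarray(f, dtype=float)
        if all(np.linalg.norm(f - k) >= min_sep for k in kept):
            kept.append(f)
    return kept

tracked = [(10, 10), (12, 11), (40, 40)]   # first two are too close together
fresh = [(11, 12), (80, 80)]               # only the far candidate survives
survivors = prune_and_refill(tracked, fresh)
print(len(survivors))                       # 3
```

Keeping the feature set spread out in this way keeps the structure-from-motion problem well conditioned as features die and are replaced.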

53 Improving feature tracking: the “smalls” tracker (9)

54 Improving feature tracking: the “smalls” tracker (10)

55 Improving feature tracking: the “smalls” tracker (11)

56 Improving feature tracking: the “smalls” tracker (12)

57 Outline Image-based motion estimation Improving image-based motion estimation Improving feature tracking Reacquisition

58 Reacquisition (1) Image-based motion estimates from any system will drift: if the features we see are always changing; given sufficient time; if we don’t recognize when we’ve revisited a location

59 Reacquisition (2)

60 Reacquisition (3)

61 Thanks! Related materials: these slides, related papers, movies, and VRML models, at: