Passive Object Tracking from Stereo Vision Michael H. Rosenthal May 1, 2000

Outline
- Purpose
- Camera Calibration
- Data Collection
- Object Tracking
- Depth from Stereo
- Results
- Conclusions

Purpose
- 3D visualization: blood vessels, organs, teapots
- Stereo displays require head tracking
- Cumbersome trackers are undesirable

Purpose
- Passive tracking offers a tetherless option
- Speed and accuracy are concerns

How To Do It
1. Mount reflectors on the tracking target
2. Calibrate the pair of cameras
3. Identify the reflectors in each image
4. Calculate positions using stereo disparity
5. Update the tracking model using the new positions

Camera Calibration
- Select six points on a known calibration target
- Solve for six extrinsic and four intrinsic parameters (ignore distortion)
- Used a derivative of Tsai's calibration code
- Thanks to Herman Towles and Ruigang Yang of the Office of the Future for the adapted code
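For context, here is a minimal sketch of one standard way to recover a camera's projection matrix from six or more known 3D-to-2D correspondences: the Direct Linear Transform. This is a generic stand-in written for illustration, not the Tsai-derived code the talk actually used, and it stops at the projection matrix rather than splitting out the six extrinsic and four intrinsic parameters; all names below are my own.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 camera projection matrix from >= 6 point pairs.

    world_pts: list of (X, Y, Z) points on the calibration target.
    image_pts: list of matching (u, v) pixel coordinates.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear constraints on P.
        rows.append(Xh + [0.0] * 4 + [-u * x for x in Xh])
        rows.append([0.0] * 4 + Xh + [-v * x for x in Xh])
    A = np.asarray(rows)
    # P (up to scale) is the right singular vector associated with the
    # smallest singular value, reshaped to 3x4.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P[2, :3])  # fix the overall scale
```

An RQ decomposition of the left 3x3 block of P would then separate the intrinsics from the rotation, which is roughly the parameterization that Tsai-style calibration delivers directly.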

Data Collection
- Mounted the target on an optical rail
- Translated the target at 1 mm per frame over 20 cm
- Rotated the target through 120 degrees

Object Tracking
- We want to follow an object through a scene
- Challenge: what targets are best for tracking?
- Spheres yield spatially ambiguous results
- Complex shapes limit ambiguity but are hard to track
[Figure: model of glasses with tracking targets]

Object Tracking
- I chose a square and a rectangle: unambiguous, but surprisingly hard to track
- The square is easy: compute its centroid as the intensity-weighted average of pixel positions
- The rectangle is the problem: it needs both a center and an orientation, which makes the problem non-trivial
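A minimal sketch of that intensity-weighted centroid, assuming the reflector has already been cropped into a small grayscale patch; the function and variable names are mine, not the talk's.

```python
import numpy as np

def weighted_centroid(patch):
    """Centroid of a grayscale patch, with each pixel's (x, y) position
    weighted by its brightness."""
    patch = patch.astype(np.float64)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    if total == 0:
        raise ValueError("empty patch: no bright pixels to weight")
    return (xs * patch).sum() / total, (ys * patch).sum() / total
```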

Object Tracking
- Calculate the centroid of the rectangle
- Search over angles for the shortest axis through the centroid
- Find edges along the short and long axes using derivatives
- Use the endpoints of the long axis as tracking targets
- Use the prior frame's result to seed the search in future frames
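The talk describes an angle search plus derivative-based edge finding; the sketch below is not that code but a related moments-based alternative: PCA on the bright pixels gives the long-axis direction, and the extreme projections along that axis give the endpoints. The brightness threshold and all names are assumptions for illustration.

```python
import numpy as np

def long_axis_endpoints(patch, threshold=128):
    """Endpoints of a bright rectangle's long axis via image moments."""
    ys, xs = np.nonzero(patch > threshold)        # bright reflector pixels
    pts = np.column_stack([xs, ys]).astype(np.float64)
    centroid = pts.mean(axis=0)
    # The eigenvector of the pixel covariance with the largest eigenvalue
    # points along the rectangle's long axis.
    cov = np.cov((pts - centroid).T)
    vals, vecs = np.linalg.eigh(cov)
    axis = vecs[:, np.argmax(vals)]
    # Extreme projections of the pixels onto that axis give the endpoints.
    proj = (pts - centroid) @ axis
    return centroid + proj.min() * axis, centroid + proj.max() * axis
```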

Depth from Stereo
- Use point pairings to get depth
- Find the shortest segment between the two pixel rays
- Use its midpoint as the 3D position estimate
- Trucco, Section 7.4 describes the equations
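A minimal sketch of that midpoint method, assuming each camera's center and the back-projected ray direction of a matched point are already available from calibration (the parameter names are mine).

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """3D point from two pixel rays: midpoint of the shortest segment.

    c1, c2: camera centers; d1, d2: ray directions (need not be unit length).
    """
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 when the rays are nearly parallel
    s = (b * e - c * d) / denom      # parameter of closest point on ray 1
    t = (a * e - b * d) / denom      # parameter of closest point on ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

For a verged stereo pair, d1 and d2 would come from back-projecting each matched reflector centroid through the corresponding camera's calibration; the near-zero denominator case only arises for parallel rays.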

Results
Translation:
- Moved the stage by 210 mm
- Average measured translation: 178 mm
- 16% error
Rotation:
- Rotated the target by 60 degrees
- Average measured rotation: 70 degrees
- 16% error
Neither showed high noise, so systematic error is likely (calibration?)

Conclusions
- Camera calibration is somewhat challenging
- Good camera calibration is very challenging
- Robust object tracking will require significant development
- Simplified targets will reduce complexity