Synchronization and Calibration of Camera Networks from Silhouettes
Sudipta N. Sinha, Marc Pollefeys
University of North Carolina at Chapel Hill, USA

2 Goal
Recover the calibration and synchronization of a camera network from live or archived video sequences alone.

3 Motivation
- Easy deployment and calibration of cameras:
  – no offline calibration (patterns, LEDs, etc.)
  – no physical access to the environment
- Possibility of using unsynchronized video streams (camcorders, webcams, etc.)
- Applications in wide-area surveillance camera networks (3D tracking, etc.)
- Digitizing 3D events

4 Why use silhouettes?
In a visual-hull (shape-from-silhouette) system:
- many silhouettes are available from dynamic objects
- background segmentation provides them automatically
Why not feature-based?
- feature matching is hard for wide baselines
- there is little overlap between the backgrounds
- there are few features on the foreground

5 Prior Work: Calibration from Silhouettes
Epipolar geometry from silhouettes:
- Porrill and Pollard '91
- Astrom, Cipolla and Giblin '96
Structure-and-motion from silhouettes:
- Vijayakumar, Kriegman and Ponce '96 (orthographic)
- Furukawa and Ponce '04 (orthographic)
- Wong and Cipolla '01 (circular motion, at least to start)
- Yezzi and Soatto '03 (needs initialization)
Sequence-to-sequence alignment:
- Caspi and Irani '02 (feature based)

6 Our Approach
- Compute epipolar geometry from silhouettes in synchronized sequences (CVPR'04).
- Here, we extend this to unsynchronized sequences: synchronization and calibration of the camera network.

7 Multiple View Geometry of Silhouettes
- Frontier points and epipolar tangents.
- There are always at least 2 extreme frontier points per silhouette.
- Frontier points are only 2-view correspondences in general.
[Figure: frontier points x1, x2 on one silhouette and their matches x'1, x'2 on the other]

8 Camera Network Calibration from Silhouettes
- 7 or more corresponding frontier points are needed to compute the epipolar geometry.
- They are hard to find on a single silhouette and are possibly occluded.
- However, video sequences contain many silhouettes.
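For context, once enough correspondences are available, the epipolar geometry can be estimated by standard means; the sketch below is the classic (unnormalized) 8-point algorithm in numpy, given as generic background rather than the paper's silhouette-specific method:

```python
import numpy as np

def eight_point(x1, x2):
    """Classic (unnormalized) 8-point estimate of the fundamental matrix.
    x1, x2: (N, 2) arrays of matching pixel coordinates, N >= 8.
    Returns a rank-2 F with x2_h^T F x1_h ~= 0 for each match."""
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt
```

In practice the coordinates are normalized first (Hartley-style preconditioning) for numerical stability; that step is omitted here for brevity.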

9 Camera Network Calibration from Silhouettes
- If we know the epipoles, draw 3 outer epipolar tangents (this needs at least two silhouettes in each view).
- Compute an epipolar line homography H^-T.
- Epipolar geometry: F = [e]_x H.
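The assembly step F = [e]_x H is a one-liner once the epipole and the line homography are known; a numpy sketch (function names are illustrative):

```python
import numpy as np

def skew(e):
    """3x3 matrix [e]_x such that skew(e) @ x == np.cross(e, x)."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def fundamental_from_epipole(e, H):
    """Assemble F = [e]_x H from an epipole e and a 3x3 homography H
    compatible with the epipolar line correspondence."""
    return skew(e) @ H
```

Since e^T [e]_x = 0, the resulting F automatically satisfies e^T F = 0 and has rank 2 for any non-singular H.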

10 RANSAC-based Algorithm
Repeat {
  Generate a hypothesis for the epipolar geometry
  Verify the model
}
Refine the best hypothesis.
Note: RANSAC is used to explore the 4D space of epipoles, apart from dealing with noisy silhouettes.
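The hypothesize-and-verify loop above is plain RANSAC; a minimal generic skeleton (the function names and the toy problem are illustrative, not the paper's code):

```python
import random

def ransac(sample_hypothesis, count_inliers, n_iters, seed=0):
    """Generic RANSAC skeleton: repeatedly hypothesize a model from a
    minimal random sample and keep the hypothesis with the most inliers.
    (A sketch; the paper's version also draws a random epipole pair.)"""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        model = sample_hypothesis(rng)
        inliers = count_inliers(model)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers  # the best hypothesis is refined afterwards

# Toy usage: estimate the offset b in y = x + b from data with two outliers.
data = [(x, x + 5) for x in range(20)] + [(3, 40), (7, -9)]

def sample_offset(rng):
    x, y = data[rng.randrange(len(data))]
    return y - x

def count_offset_inliers(b):
    return sum(abs(y - x - b) < 1e-6 for x, y in data)

offset, score = ransac(sample_offset, count_offset_inliers, n_iters=50)
```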

11 Compact Representation for Silhouettes: Tangent Envelopes
- Store the convex hull of the silhouette and tangency points for a discrete set of angles.
- Approx. 500 bytes/frame, hence a whole video sequence easily fits in memory.
- Tangency computations are efficient.
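A tangent envelope of this kind can be sketched as the silhouette's convex hull plus, for each of a discrete set of directions, the support (tangency) point of the hull; an illustrative implementation, not the authors' exact data structure:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def tangent_envelope(points, n_angles=64):
    """Support point of the hull for each of n_angles discrete directions:
    the point where a line with that normal direction is tangent."""
    hull = convex_hull(points)
    env = []
    for k in range(n_angles):
        t = 2.0 * math.pi * k / n_angles
        d = (math.cos(t), math.sin(t))
        env.append(max(hull, key=lambda p: p[0]*d[0] + p[1]*d[1]))
    return env
```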

12 RANSAC-based Algorithm: Generating a Hypothesis for the Epipolar Geometry
- Pick 2 corresponding frames and random tangents for each of the silhouettes; compute the epipoles.
- Pick 1 more tangent from additional frames; compute the homography.
- Generate the fundamental matrix.

13 RANSAC-based Algorithm: Verifying the Model
For all tangents:
- compute the symmetric epipolar transfer error
- update the inlier count (abort early if the hypothesis doesn't look promising)
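The symmetric epipolar transfer error is the standard sum of squared point-to-epipolar-line distances in both images; a small numpy sketch:

```python
import numpy as np

def symmetric_epipolar_distance(F, x1, x2):
    """Sum of squared point-to-epipolar-line distances in both images.
    x1, x2: homogeneous 3-vectors with the last coordinate equal to 1."""
    l2 = F @ x1           # epipolar line of x1 in image 2
    l1 = F.T @ x2         # epipolar line of x2 in image 1
    d2 = (x2 @ l2) ** 2 / (l2[0] ** 2 + l2[1] ** 2)
    d1 = (x1 @ l1) ** 2 / (l1[0] ** 2 + l1[1] ** 2)
    return d1 + d2
```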

14 What if the videos are unsynchronized?
- For fixed-fps video, the same constraints are valid up to an extra unknown temporal offset.
- Add a random temporal offset to the RANSAC hypothesis.
- Use a multi-resolution approach:
  – keyframes with slow motion give rough synchronization
  – ones with fast motion provide fine synchronization

15 Computed Fundamental Matrices

16 Synchronization Experiment
- Total temporal offset search range [-500, +500] frames (i.e. ±15 secs).
- Unique peaks appear at the correct offsets.
- Possibility for sub-frame synchronization.
[Table/plot: # promising candidates vs. sequence offset (# frames) and # iterations (in millions)]

17 Camera Network Synchronization
- Consider a directed graph with the pairwise offsets as edge values.
- For consistency, the offsets around loops should add up to zero.
- MLE by minimizing the loop inconsistency; ground-truth offsets are given in frames (1 frame = 1/30 s).
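One simple way to make pairwise offsets loop-consistent is a linear least-squares solve for per-camera time shifts (a sketch with hypothetical names; the paper's exact MLE formulation is not reproduced here):

```python
import numpy as np

def globalize_offsets(n_cams, pairwise):
    """Least-squares per-camera time shifts t (with t[0] pinned to 0) from
    measured pairwise offsets d_ij ~= t_j - t_i. Any loop of the implied
    offsets t_j - t_i then sums to zero by construction."""
    rows, b = [], []
    for (i, j), d in pairwise.items():
        r = np.zeros(n_cams)
        r[i], r[j] = -1.0, 1.0     # one equation t_j - t_i = d_ij per edge
        rows.append(r)
        b.append(d)
    r0 = np.zeros(n_cams)
    r0[0] = 1.0                    # pin t[0] = 0 to fix the global time origin
    rows.append(r0)
    b.append(0.0)
    t, *_ = np.linalg.lstsq(np.vstack(rows), np.array(b), rcond=None)
    return t
```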

18 From Epipolar Geometry to Full Calibration
- Solve for camera triplets (Levi and Werman, CVPR'03; Sinha et al., CVPR'04).
- Assemble the complete camera network.

19 Metric Cameras and Visual-Hull Reconstruction from 4 Views
The final calibration quality is comparable to an explicit calibration procedure.

20 Validation experiment: Reprojection of silhouettes

21 Taking Sub-frame Synchronization into Account
- Reprojection error reduced from 10.5% to 3.4% of the pixels in the silhouette.
- Temporal interpolation of silhouettes: to appear (Sinha and Pollefeys, 3DPVT'04).

22 Conclusion and Future Work
- Camera network calibration and synchronization from dynamic silhouettes alone.
- Great for visual-hull systems; applications in surveillance systems.
- Future work: extend to active PTZ camera networks and asynchronous video streams.
Acknowledgments: NSF CAREER, DARPA; Peter Sand (MIT) for the visual-hull dataset.

23 Computed Fundamental Matrices
- F computed directly (black epipolar lines)
- F after consistent 3D reconstruction (color)