Multi video camera calibration and synchronization.

Motivation Multi-camera applications have become common, e.g. stereo and surveillance. With multiple cameras we can overcome problems such as hidden (occluded) objects; in general, more cameras provide more information about the scene.

How does it look? A multi-camera setup.

The scene The filmed scene 1/3

The scene The filmed scene 2/3

The scene The filmed scene 3/3

Perspective projection
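The projection formula that the following slides build on can be written out explicitly. A sketch, assuming the deck's notation with focal length d: a point (X, Y, Z) in camera coordinates projects to

```latex
x = d\,\frac{X}{Z}, \qquad y = d\,\frac{Y}{Z}
```

or, in homogeneous coordinates (matching the "projection matrix (so far)" on the next slide):

```latex
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \sim
\begin{pmatrix} d & 0 & 0 & 0 \\ 0 & d & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```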

The projection matrix Object point P, image point p. Using the model, the projection matrix (so far) contains only the perspective part. *Homogeneous coordinates

Internal matrix The internal matrix represents the inner camera settings: focal length (d), principal point location (usually (0,0)), scaling factor.

External matrix Includes all the orientation properties of the camera Rotation Translation

Projection matrix summary Internal parameters and external parameters combine; the result is p = MP.
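The composition p = MP with M = K[R | t] can be sketched in a few lines of numpy. All numeric values here (focal length, principal point, translation) are illustrative, not taken from the slides:

```python
import numpy as np

# Internal matrix K: focal length d, principal point (cx, cy). Illustrative values.
d, cx, cy = 800.0, 320.0, 240.0
K = np.array([[d, 0.0, cx],
              [0.0, d, cy],
              [0.0, 0.0, 1.0]])

# External parameters: identity rotation and a translation along the optical axis.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

# Projection matrix M = K [R | t]
M = K @ np.hstack([R, t])

# Project a 3D point P (homogeneous) to an image point p = MP.
P = np.array([1.0, 2.0, 10.0, 1.0])
p = M @ P
p = p[:2] / p[2]   # divide by the third homogeneous coordinate
print(p)           # pixel coordinates (x, y)
```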

Calibration Camera calibration is used to coordinate between cameras: given a 3D point in the real world, find its projected point in the camera. The goal is to find the projection matrix M. Using known 3D points and their corresponding image points, p = MP can be solved.
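Solving p = MP from known 3D–2D correspondences is a linear problem (the direct linear transform, DLT). A minimal sketch, assuming noise-free correspondences; the camera values in the demo are hypothetical:

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Estimate the 3x4 projection matrix M from known 3D points and their
    corresponding image points by solving p = MP linearly (DLT)."""
    A = []
    for (X, Y, Z), (x, y) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    # The solution (up to scale) is the right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Demo with a synthetic, noise-free camera (hypothetical values):
M_true = np.array([[800.0, 0.0, 320.0, 1600.0],
                   [0.0, 800.0, 240.0, 1200.0],
                   [0.0, 0.0, 1.0, 5.0]])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, size=(8, 3)) + np.array([0.0, 0.0, 3.0])
ph = np.hstack([pts3d, np.ones((8, 1))])
proj = (M_true @ ph.T).T
pts2d = proj[:, :2] / proj[:, 2:]
M_est = calibrate_dlt(pts3d, pts2d)
```

With exact correspondences the recovered matrix matches the true one up to scale; with real detections one would use more points and a least-squares or RANSAC variant.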

When a full calibration is not necessary Homography: a mapping between a point on a ground plane as seen from one camera and the same point on the ground plane as seen from a second camera.

When a homography can be used When the images are of the same plane. Camera 1, Camera 2, Result.

When a homography can be used When the images are taken with the same camera that is only rotated.

Homography computation Using the homography matrix H we can map points from one image to a second image, so we have p' = Hp, where p and p' are given in homogeneous coordinates.

Homography computation H is 3x3 and defined up to scale, so it has 8 degrees of freedom; to find H we need 4 corresponding points.
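The 8-DOF estimate from 4 (or more) correspondences is again a DLT. A sketch under the same noise-free assumption; the demo homography is a pure translation chosen for readability:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with p' = Hp from >= 4 point pairs
    (no 3 of the source points collinear)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the free scale

# Demo: 4 corners of a unit square shifted by (5, -3).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(5.0, -3.0), (6.0, -3.0), (5.0, -2.0), (6.0, -2.0)]
H = homography_dlt(src, dst)
```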

Finding corresponding points Manually, by detecting similar features by hand: not accurate, and while workable for 2 cameras, what about 9 or more?

Known solution Automatic detection of known features. A large working volume needs large objects, which are very hard to detect from a far distance.

Feature detection in wide baselines Noise, hidden parts; even assuming detection is possible, finding the correspondences is hard.

Example of feature detection problems

Goals of the calibration object 360-degree view. Robust to noise. Accurate regardless of distance (or zoom). Easy to find corresponding points. As automated as possible.

Solution Use easy-to-detect (active) features. Use the benefits of video's time dimension. This creates an easy-to-detect list of corresponding points. Find the homography using that list of points.

Calibration object Ultra bright LEDs. Very bright, easy to detect.

Use flickering as an identifier Features flicker at a constant rate, each feature at a different rate, and the cameras film at a constant rate, so each LED's flicker can be found. The result is a list of points, ordered by increasing flicker frequency, for each camera.

Detection method, first stage Filter unnecessary noise. Use the red channel only as a filter; an acceptable red-channel measure in RGB is R' = (R-B)+(R-G). Removing white pixels (all channels have high intensities) is not good when an LED causes high saturation (it appears white).
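The redness measure from the slide can be sketched directly in numpy. The threshold is an illustrative value, not from the slides; note that a fully saturated LED core appears white and scores a redness of 0, which is the caveat the slide raises about white-pixel removal:

```python
import numpy as np

def red_led_mask(frame_rgb, thresh=150):
    """Candidate LED pixels via the slide's redness measure
    R' = (R-B) + (R-G) on an HxWx3 uint8 RGB frame."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    redness = (r - b) + (r - g)
    return redness > thresh   # boolean mask of reddish pixels

# Tiny demo frame: pure red, white, gray, and reddish pixels.
img = np.array([[[255, 0, 0], [255, 255, 255]],
                [[100, 100, 100], [200, 40, 40]]], dtype=np.uint8)
mask = red_led_mask(img)
```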

Filter Example Red channel only ((R-B)+(R-G))

Detection method, second stage Take advantage of the video timeline: the LED switches between on and off states, so subtract consecutive frames (similar to background subtraction), detect candidate feature pixels using a threshold, and save the detection frame numbers to measure the flicker rate.
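The frame-differencing stage can be sketched as follows; the threshold is illustrative, and the frames are assumed to be pre-filtered grayscale (e.g. the red channel from the first stage):

```python
import numpy as np

def flicker_events(frames, thresh=60):
    """For each pixel, record the frame numbers at which its intensity
    jumps between consecutive frames (an LED switching on or off),
    similar to background subtraction."""
    events = {}
    for k in range(1, len(frames)):
        diff = np.abs(frames[k].astype(int) - frames[k - 1].astype(int))
        for rc in zip(*np.nonzero(diff > thresh)):
            events.setdefault(rc, []).append(k)
    return events   # pixel (row, col) -> list of event frame numbers

# Demo: one pixel blinking on/off across five frames.
off = np.zeros((3, 3), dtype=np.uint8)
on = off.copy(); on[1, 1] = 255
events = flicker_events([off, on, off, on, off])
```

The per-pixel event lists are exactly what the next stage needs to estimate each LED's flicker rate.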

Detection method, third stage So far we have candidate points and their frequencies, yet some of the candidates are noise. Use the frequency as a second filter: most noise has a very short and inconsistent frequency.
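The frequency filter can be sketched as a consistency test on the gaps between a candidate's event frames; the minimum event count and jitter tolerance are illustrative parameters:

```python
import numpy as np

def is_periodic(frame_numbers, min_events=4, max_jitter=2):
    """Keep a candidate only if its on/off events repeat at a steady rate.
    Noise tends to produce few events with inconsistent gaps, so it fails
    both the length and the jitter test."""
    if len(frame_numbers) < min_events:
        return False
    gaps = np.diff(sorted(frame_numbers))
    return bool(gaps.max() - gaps.min() <= max_jitter)
```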

Noise and feature frequencies Noise Feature

Frequency filter Before

Frequency filter After

Detection method, fourth stage Once the LED pixels are detected, we need a single pixel to represent each LED: the local maximum, the pixel with the highest intensity level. This handles different camera distances from the features and different zoom levels.
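Picking the local maximum of a detected region can be sketched as an argmax restricted to the region's mask; extending this to several separate LEDs would need a connected-components step, which is omitted here:

```python
import numpy as np

def region_peak(intensity, mask):
    """Represent a detected LED region by its brightest pixel (the local
    maximum), which stays stable across camera distances and zoom levels."""
    masked = np.where(mask, intensity.astype(int), -1)  # ignore non-region pixels
    return np.unravel_index(np.argmax(masked), masked.shape)

# Demo: a 3x3 patch whose brightest in-mask pixel is at (1, 1).
patch = np.array([[10, 20, 30],
                  [40, 250, 60],
                  [70, 80, 90]], dtype=np.uint8)
peak = region_peak(patch, patch > 30)
```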

Local maximum example Before

Local maximum example After

Full tool example

Synchronization Given frame number k in the first camera, find the corresponding frame in the second camera. Not all the cameras start filming at the same time. A known solution uses temporal features.

Temporal features Hard to find, not suitable for 9 cameras or more

Automatic synchronization Each feature has a different rate, and the signature is based on the gaps between the LED flashes. Given an index, we search for the first time after that index at which the lowest-frequency LED flashes, and so on. Given that the LEDs turned on at t0, t1, t2, t3, t4, t5, the resulting signature is (t1-t0, t2-t1, t3-t2, t4-t3, t5-t4).
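The signature and the resulting frame offset can be sketched in a few lines. This assumes both cameras observe the same flash sequence at the same frame rate, so their gap signatures match exactly:

```python
def signature(on_times):
    """Signature of a flash sequence: the gaps between consecutive
    turn-on times t0..tn, i.e. (t1-t0, t2-t1, ...) per the slide."""
    return [b - a for a, b in zip(on_times, on_times[1:])]

def frame_offset(times_cam1, times_cam2):
    """Two cameras observing the same flashes see the same signature,
    so the synchronization offset is the difference between any pair
    of matching event times."""
    assert signature(times_cam1) == signature(times_cam2)
    return times_cam2[0] - times_cam1[0]

# Demo: camera 2 started filming 7 frames before camera 1's clock zero.
t1 = [3, 7, 15, 31]
t2 = [10, 14, 22, 38]
offset = frame_offset(t1, t2)
```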

Synchronization graph 1/2

Synchronization graph 2/2

Tool synchronization example

The end Thank you!!!