Automatic Camera Calibration for Image Sequences of a Football Match Flávio Szenberg (PUC-Rio) Paulo Cezar P. Carvalho (IMPA) Marcelo Gattass (PUC-Rio)


Juiz Virtual: reference points and object points (figure).


Proposed algorithm
For the first image of the sequence:
1. Filtering to enhance lines
2. Detection of long straight-line segments
3. Line recognition
4. Computation of the initial planar projective transformation
Then, for each image (including the first):
5. Line readjustment
6. Computation of the planar projective transformation
7. Camera calibration
8. Next image in the sequence (return to step 5)

Filtering to enhance lines
The Laplacian of Gaussian (LoG) filter, which combines a Gaussian smoothing filter with a Laplacian filter, is applied to the image, and the result is thresholded.
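The filtering step can be sketched as follows; the kernel construction and the `sigma`/`threshold` values are illustrative assumptions, not values from the talk.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def log_kernel(sigma):
    """Sampled Laplacian-of-Gaussian kernel (made zero-mean, so flat
    image regions produce a zero response)."""
    size = int(6 * sigma) | 1              # odd width covering about +-3 sigma
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    s2 = sigma * sigma
    g = np.exp(-(x * x + y * y) / (2.0 * s2))
    k = (x * x + y * y - 2.0 * s2) / (s2 * s2) * g
    return k - k.mean()

def enhance_lines(gray, sigma=2.0, threshold=4.0):
    """LoG-filter a grayscale image and keep pixels whose response
    magnitude exceeds the threshold (bright field lines on darker
    grass give strong responses)."""
    k = log_kernel(sigma)
    pad = k.shape[0] // 2
    img = np.pad(np.asarray(gray, dtype=np.float64), pad, mode="edge")
    windows = sliding_window_view(img, k.shape)   # one kernel-sized window per pixel
    response = np.einsum("ijkl,kl->ij", windows, k)
    return np.abs(response) > threshold
```

In practice a separable implementation (or an FFT) would be used for speed; the direct convolution above just keeps the sketch self-contained.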

Detection of long straight-line segments
The image is segmented to locate long straight-line segments that are candidates to be field lines. This procedure is divided into two steps:
- Eliminating pixels that are not on a straight line.
- Determining straight-line segments.

Eliminating pixels that are not on a straight line
The image is divided by a regular grid into rectangular cells.

Eliminating pixels that are not on a straight line
For each of these cells, the eigenvalues λ1 ≥ λ2 of the covariance matrix of the coordinates of its pixels are computed.
If λ2 = 0 or λ1/λ2 > M (a given threshold), then the eigenvector associated with λ1 gives the cell's predominant direction; otherwise, the cell does not have a predominant direction.

Eliminating pixels that are not on a straight line
(Figure: cells whose pixels form straight-line segments.)
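The per-cell eigenvalue test can be sketched as follows; the ratio threshold M is written as `ratio_M`, and its value is an illustrative assumption.

```python
import numpy as np

def cell_direction(xs, ys, ratio_M=10.0):
    """Predominant direction of the filtered pixels in one grid cell.

    xs, ys: coordinates of the pixels that survived the LoG threshold.
    Returns a unit direction vector if the eigenvalues of the covariance
    matrix satisfy lam2 == 0 or lam1/lam2 > ratio_M, else None.
    """
    pts = np.column_stack([xs, ys]).astype(np.float64)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    # np.linalg.eigh returns eigenvalues in ascending order
    w, v = np.linalg.eigh(cov)
    lam1, lam2 = w[1], w[0]
    if lam2 == 0 or lam1 / lam2 > ratio_M:
        return v[:, 1]          # eigenvector of the dominant eigenvalue
    return None
```

A cell whose pixels are spread isotropically (e.g. texture or noise) gives lam1 ≈ lam2 and is rejected; a thin run of pixels gives lam2 close to zero and a well-defined direction.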

Determining straight-line segments
The cells are traversed so that columns are processed from left to right and the cells in each column are processed bottom-up. Each cell is given a label:
- If the cell has no predominant direction, discard it.
- Otherwise, check the three neighbors in the column to the left and the neighbor below the current cell. If any of them has a predominant direction similar to that of the current cell, the current cell receives that neighbor's label; otherwise, a new label is used for the current cell.

Determining straight-line segments
- Group the cells with the same label.
- Merge the groups that correspond to segments lying on the same line.
At the end of the process, each group provides a straight-line segment.
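The traversal and labeling rule can be sketched on a grid of per-cell directions; the angle tolerance that decides when two directions are "similar" is an illustrative assumption, and the final merging of label groups into lines is omitted.

```python
import numpy as np

def label_cells(directions, angle_tol=np.radians(10)):
    """Label grid cells whose predominant directions are similar.

    `directions` is a 2-D array of cell angles in radians, with np.nan
    for cells that have no predominant direction.  Columns are traversed
    left to right, and each column bottom-up (row indices grow downward,
    so bottom-up means from the last row to the first).
    """
    rows, cols = directions.shape
    labels = np.zeros((rows, cols), dtype=int)   # 0 = unlabeled
    next_label = 1

    def similar(a, b):
        d = abs(a - b) % np.pi                   # directions are mod 180 degrees
        return min(d, np.pi - d) < angle_tol

    for c in range(cols):
        for r in range(rows - 1, -1, -1):        # bottom-up within the column
            a = directions[r, c]
            if np.isnan(a):
                continue                          # cell discarded
            # three neighbors in the column to the left, plus the cell below
            for nr, nc in [(r - 1, c - 1), (r, c - 1), (r + 1, c - 1), (r + 1, c)]:
                if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] \
                        and similar(a, directions[nr, nc]):
                    labels[r, c] = labels[nr, nc]
                    break
            else:
                labels[r, c] = next_label
                next_label += 1
    return labels
```

Because the neighbors examined are always ones already visited by this traversal order, a single pass suffices to propagate labels along a line of cells.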

Field lines recognition
From the set of segments, the field lines are detected and the field is recognized, using a model-based recognition method [Grimson90]. An interpretation tree, pruned by a set of restrictions, matches the segments f1, ..., f7 found in the image against the lines F1, ..., F7 of the field model.
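A generic sketch of this kind of interpretation-tree search (not the authors' exact restrictions): each image segment is assigned a model line or left unmatched, inconsistent partial assignments are pruned, and among the feasible leaves the one with the greatest total matched segment length wins.

```python
def best_interpretation(segments, model_lines, consistent, length):
    """Depth-first search of the interpretation tree.

    `consistent` is a predicate on partial assignments (lists of
    (segment, model_line_or_None) pairs) encoding the restrictions;
    `length` returns the length of a segment.
    """
    best = (None, -1.0)

    def search(i, assignment, used):
        nonlocal best
        if i == len(segments):                    # leaf: score the interpretation
            score = sum(length(f) for f, F in assignment if F is not None)
            if score > best[1]:
                best = (list(assignment), score)
            return
        # try the wildcard (unmatched) plus every unused model line
        for F in [None] + [m for m in model_lines if m not in used]:
            assignment.append((segments[i], F))
            if F is None or consistent(assignment):   # prune violations early
                search(i + 1, assignment,
                       used | ({F} if F is not None else set()))
            assignment.pop()

    search(0, [], set())
    return best[0]
```

The early pruning is what keeps the tree tractable: a violated restriction close to the root discards the whole subtree below it.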

Field lines recognition: discarding nodes
The node {f1: F1, f2: F6, f3: F3} is discarded because it violates the restriction that the line representing F6 must lie between the lines representing F1 and F3.

Field lines recognition: choosing the best solution
In general, there are several feasible interpretations; we choose the one in which the sum of the lengths of the matched segments is maximum. For example, the interpretation
f1: ∅, f2: ∅, f3: F3, f4: F1, f5: F6, f6: F4, f7: F7 (WINNER)
beats
f1: ∅, f2: F3, f3: ∅, f4: F1, f5: F6, f6: F4, f7: F7.

Field lines recognition: final result
f1: ∅, f2: F3, f3: ∅, f4: F1, f5: F6, f6: F4, f7: F7
or
f1: ∅, f2: F3, f3: ∅, f4: F1, f5: F6, f6: F2, f7: F5

Computation of the initial planar projective transformation
A planar projective transformation corresponding to the recognized lines is found, using the points of intersection of the field lines and their vanishing points as reference points.
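Such a transformation can be estimated from four or more reference-point correspondences with the standard DLT algorithm. This generic sketch assumes plain Nx2 arrays of finite points; handling vanishing points as points at infinity, as in the talk, would need homogeneous coordinates throughout.

```python
import numpy as np

def homography(src, dst):
    """3x3 planar projective transformation mapping each point in `src`
    to the corresponding point in `dst` (>= 4 correspondences), via the
    direct linear transformation: stack two linear equations per
    correspondence and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    """Apply a homography to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With exactly four correspondences the system is determined up to scale; with more, the SVD gives the algebraic least-squares solution.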

Line readjustment
(Figure: tolerance of the line readjustment.)

Line readjustment
Image points are collected inside a tolerance band around each reconstructed line, and the new located line is obtained from them by a least-squares method.
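The refit can be sketched as a total-least-squares fit to the image points found inside the tolerance band (this assumes the band search has already produced the point list):

```python
import numpy as np

def refit_line(points):
    """Total-least-squares line fit: the line passes through the
    centroid of the points, along their principal direction (the
    dominant right singular vector of the centered point cloud).
    Returns (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]
```

Total least squares is the natural choice here because the fitted field lines can be nearly vertical in the image, where ordinary y-on-x regression degenerates.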

Computation of the final planar projective transformation After relocating the field lines, a better reconstruction of the field can be obtained.

Camera calibration
The camera is calibrated using Tsai's method, which allows the reconstruction of elements not on the plane of the field.

Working with a sequence of images
For the first image, we apply the camera-calibration process proposed above. To optimize the process from the second image on, we take advantage of the previous image's calibration: the final planar projective transformation of the previous image is used as the initial transformation for the current image. Each subsequent image then requires only:
1. Line readjustment
2. Computation of the planar projective transformation
3. Camera calibration

Results
The artificial sequence: first scene and last scene (figures).
The real sequence: first scene and last scene (figures).

Results
- Computer: Pentium III, 600 MHz
- The test sequence has 27 frames
- Processing time: 380 milliseconds (below the 900 milliseconds required for real time)

Results (accuracy)
Tab. 1 - Comparison between the correct and reconstructed coordinates for the first scene (columns: field point; correct coordinates x, y, z; reconstructed image coordinates u, v; error in u and v; average error).

Results (accuracy)
Tab. 2 - Comparison between the correct and reconstructed coordinates for the last scene (columns: field point; correct coordinates x, y, z; reconstructed image coordinates u, v; error in u and v; average error).

Conclusions
- The algorithm presented here has produced good results even when applied to noisy images extracted from TV.
- The algorithm can be used on widely available computers; no specialized hardware is necessary.
- Processing time is well below the time needed for real-time processing; the extra time can be used, for example, to draw ads and logos on the field.

Future work
- Investigate smoothing the sequence of cameras by applying Kalman filtering.
- Develop techniques to track other objects moving on the field, such as the ball and the players.
- Draw objects on the field behind the players, to give the impression that the players are walking on them.