Fisheye Camera Calibration. Arunkumar Byravan & Shai Revzen, University of Pennsylvania.

Camera calibration is mainly concerned with finding the parameters intrinsic to the camera that affect the imaging process, namely:
- The focal length of the lens
- The principal point
- Scaling & skew factors
- The lens projection/distortion function
These parameters are necessary for Structure from Motion, 3D reconstruction, and other applications where the relation between world points and camera pixels is needed.
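As a concrete sketch of how these intrinsic parameters fit together, the snippet below assembles focal length, principal point, skew, and aspect ratio into the usual 3x3 intrinsic matrix K and projects a camera-frame point; the numeric values are illustrative, not from the slides.

```python
import numpy as np

def intrinsic_matrix(f, cx, cy, s=0.0, aspect=1.0):
    """Build the 3x3 intrinsic matrix K from focal length f (pixels),
    principal point (cx, cy), skew s, and pixel aspect ratio."""
    return np.array([[f,            s, cx],
                     [0.0, f * aspect, cy],
                     [0.0,        0.0, 1.0]])

def project_pinhole(K, X):
    """Project a 3-D point X (camera frame) to pixel coordinates."""
    x = K @ X            # homogeneous image point
    return x[:2] / x[2]  # perspective divide

K = intrinsic_matrix(f=500.0, cx=320.0, cy=240.0)
u, v = project_pinhole(K, np.array([0.1, -0.2, 1.0]))
```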

We needed to calibrate a fisheye camera in order to solve an odometry problem. We initially tried a few existing toolboxes, namely the Camera Calibration Toolbox for Matlab and the Omnidirectional Camera Calibration Toolbox. The results were not good enough, which led us to work on our own calibration method.

Fisheye lenses are those with a very high field of view (>= 150 degrees). They achieve such extremely wide angles of view by forgoing a rectilinear image, opting instead for a special mapping (for example, equisolid angle), which gives images a characteristic convex appearance. Due to the high degree of distortion, such images are difficult to process using conventional techniques meant for perspective cameras. Our camera is fitted with a 190-degree-FOV lens from Omnitech Robotics.

An image from the camera.

Lots of work in Photogrammetry & more recently in the Computer Vision community. Roughly two categories: Photogrammetric calibration Uses a Calibration object. Self Calibration Uses the motion of camera in a static scene. Correspondences between 3 images are enough to recover the parameters. Other techniques exist, which use orthogonal vanishing points or pure rotation of cameras for calibration.

Slide courtesy of Prof. Ramani Duraiswami, Dept. of Computer Science, UMCP.


In all the above slides, the assumption was that there is no lens distortion. In the presence of distortion, the actual pixel location differs from [x_pix, y_pix]^T. There are two main distortion effects:
- Radial distortion: barrel and pincushion distortion.
- Tangential distortion: due to "decentering".
The widely used "plumb bob" distortion model was introduced by Brown in 1966 [2].
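A minimal sketch of the plumb-bob model applied to ideal normalized image coordinates; the coefficient names (k1, k2, k3 for radial, p1, p2 for tangential) follow common convention and the values below are illustrative.

```python
import numpy as np

def plumb_bob_distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown's 'plumb bob' distortion to ideal normalized image
    coordinates (x, y): radial terms k1..k3, tangential terms p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Pure radial distortion of a sample point (illustrative coefficients).
xd, yd = plumb_bob_distort(0.3, 0.4, k1=0.1, k2=0.0, k3=0.0, p1=0.0, p2=0.0)
```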

Here [u, v] are the final pixel values after applying the distortion and the intrinsic camera matrix K.

- Print a calibration pattern (checkerboard) and attach it to a planar surface.
- Take a few pictures of the pattern under different orientations by moving the camera or the pattern.
- Detect the corners of the checkerboard.
- Now we have a set of world points whose image correspondences are known.
- Get an initial estimate of the rotation & translation (and maybe intrinsic parameters) between the world frame & camera frame using the DLT algorithm (there are many other ways to do this).
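One common way to get such an initial estimate from a planar checkerboard is a Direct Linear Transform: stack two equations per correspondence and take the null space via SVD to recover the world-plane-to-image homography (R & t can then be factored out of it using the intrinsics). This is a sketch of the standard technique with hypothetical point lists, not necessarily the exact formulation used in the slides.

```python
import numpy as np

def dlt_homography(world_xy, pixels):
    """Estimate the 3x3 homography mapping planar world points (z = 0)
    to pixels via the DLT: two linear equations per correspondence,
    null space from the SVD."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)     # right singular vector of smallest value
    return H / H[2, 2]           # fix the arbitrary scale

# Hypothetical correspondences: four checkerboard corners and their pixels.
world = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
pix = [(10.0, 20.0), (110.0, 22.0), (12.0, 120.0), (115.0, 125.0)]
H = dlt_homography(world, pix)
```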

- Minimise the reprojection error using a non-linear minimisation technique to get better estimates of the parameters.
- There are many ways to actually compute the parameters, and a huge amount of literature using different techniques.
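The non-linear refinement can be sketched with a bare-bones Gauss-Newton loop; the slides do not name a specific optimizer, so this stands in for any least-squares routine. The toy problem below refines a single focal-length parameter in the model r = f·θ.

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, p0, iters=10):
    """Minimal Gauss-Newton: solve the normal equations
    J^T J dp = -J^T r each iteration and update the parameters."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual_fn(p)
        J = jac_fn(p)
        p = p + np.linalg.solve(J.T @ J, -J.T @ r)
    return p

# Toy problem: recover f from exact samples of r = f * theta (f_true = 500).
thetas = np.array([0.1, 0.3, 0.5, 0.9])
rs = 500.0 * thetas
residual = lambda p: p[0] * thetas - rs
jacobian = lambda p: thetas.reshape(-1, 1)
f_est = gauss_newton(residual, jacobian, [300.0])
```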

The fisheye lens can be modeled as a "projection function" that maps a point on the unit sphere to a point on the image plane. In spherical coordinates, this is a function of the elevation angle (θ) and possibly the angle in the plane (φ). It is of the form r = g(θ, φ), where r is the radial distance of a point from the image centre and g is the lens projection function.

Examples of general lens projection functions which conform to different models are:
- Equidistant projection model: r = f·θ
- Equisolid projection model: r = 2f·sin(θ/2)
- Stereographic projection model: r = 2f·tan(θ/2)
- Orthographic projection model: r = f·sin(θ)
where f is the focal length of the camera.
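These four standard models, written out directly (θ in radians; r comes out in the same units as f):

```python
import numpy as np

def equidistant(theta, f):
    return f * theta                   # r = f * theta

def equisolid(theta, f):
    return 2.0 * f * np.sin(theta / 2.0)

def stereographic(theta, f):
    return 2.0 * f * np.tan(theta / 2.0)

def orthographic(theta, f):
    return f * np.sin(theta)
```

For small θ all four agree with r ≈ f·θ; they diverge toward the edge of a wide field of view, which is why the choice of model matters for a 190-degree lens.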

In our current calibration, we have modeled this projection function as a polynomial in θ: r = g(θ) = A_0 + A_1·θ + A_2·θ² + …, where the A_i's are constants to be estimated and φ is the angle in the image plane. Initially, we tried to use the general fisheye models (shown before) as a basis for g, but the accuracy achieved was not good enough.
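The polynomial model is cheap to evaluate once the A_i's are known; the coefficient values below are made up purely for illustration.

```python
def poly_projection(theta, coeffs):
    """Evaluate the polynomial radial model r = sum_i A_i * theta**i,
    with coeffs = [A_0, A_1, A_2, ...]."""
    return sum(a * theta**i for i, a in enumerate(coeffs))

# Illustrative coefficients: roughly equidistant with a cubic correction.
r = poly_projection(0.5, [0.0, 500.0, 0.0, -20.0])
```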

- Initially, we have a set of checkerboard images.
- After corner extraction, we have a set of image points with corresponding world points.
- We initially assume that g(θ, φ) = f·θ.
- We also assume the principal point [u_0, v_0]^T to be at the centre of the image.

- m is a pixel and m_0 is the principal point.
- f_i is an initial estimate of the focal length.
- From these, the corresponding point in the camera reference frame is computed.
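A plausible sketch of this initialization step, assuming the equidistant starting model g(θ, φ) = f·θ from the previous slide; the slides' exact formulas are not shown, so the function and variable names here are hypothetical.

```python
import numpy as np

def backproject_equidistant(m, m0, f):
    """Lift pixel m to a unit-sphere point in the camera frame under the
    initial equidistant assumption r = f * theta (hypothetical sketch)."""
    d = np.asarray(m, float) - np.asarray(m0, float)
    r = np.hypot(d[0], d[1])           # radial distance from principal point
    theta = r / f                      # invert r = f * theta
    phi = np.arctan2(d[1], d[0])       # angle in the image plane
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```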

- M is a point in the world reference frame.
- The camera-frame point and M are related by a rotation & translation.
- This is found using the procedure in [1].

With the initial estimates of R & t, we now minimise the reprojection error to find the best-fitting parameters.

The reprojection error is E = Σ_i ||[x_i, y_i] - [u_i, v_i]||², where [x_i, y_i] is the detected image point and [u_i, v_i] is its reprojection through the model. By minimising this, we get the parameters that best suit the camera & lens.
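The quantity being minimised is just the summed squared pixel distance between detected and reprojected points, e.g.:

```python
import numpy as np

def reprojection_error(observed, reprojected):
    """Sum of squared pixel distances between detected corners [x, y]
    and model-predicted pixels [u, v]."""
    d = np.asarray(observed, float) - np.asarray(reprojected, float)
    return float(np.sum(d * d))

err = reprojection_error([[0.0, 0.0], [1.0, 1.0]],
                         [[0.0, 1.0], [1.0, 1.0]])
```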

- Initial testing with a set of 5 images shows good results.
- The reprojection error is in the range of 0.3–0.5 pixels.
- Minimization is done just a single time, and the dimension of the feature vector is very large.
- For this trial, the pixel aspect ratio is fixed at 1 and the skew s = 0 (no skew, and pixels are assumed to be square).

- Analyzing the results shows that most of the error is due to poor corner detection.
- Better corner detection may increase accuracy.
- Many optimizations can be done to improve speed & performance.

- Repeating the whole procedure more than once may yield better results.
- The method could be made more robust by using better geometric constraints.
- It can theoretically be extended to perspective cameras as well.

[1] R. J. Hanson and M. J. Norris, "Analysis of measurements based on the singular value decomposition," SIAM J. Sci. Stat. Comput., vol. 2, no. 3, September 1981.
[2] D. C. Brown, "Decentering distortion of lenses," Photogramm. Eng., vol. 32, pp. 444–462, 1966.
[3] R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Trans. Robot. Automat., vol. 3, pp. 323–344, 1987.
[4] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, pp. 1330–1334, 2000.
[5] Camera Calibration Toolbox for Matlab.
[6] Lecture notes from CMSC 828D: Fundamentals of Computer Vision.
[7] C. Hughes, P. Denny, E. Jones, and M. Glavin, "Accuracy of fish-eye lens models," Applied Optics, vol. 49, no. 17, 10 June 2010.