
Camera Parameters and Calibration

Camera parameters: From last time…

Homogeneous Coordinates (Again)
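The slide's equations are images in the original deck; as a reminder (standard definitions, not copied from the slide), a Euclidean point becomes a homogeneous point by appending a 1, homogeneous points are defined only up to scale, and dividing by the last coordinate goes back:

\[
(x, y, z) \;\mapsto\; \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad
\lambda \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \sim \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \;(\lambda \neq 0), \qquad
\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} \;\mapsto\; \left( \frac{X}{W},\; \frac{Y}{W},\; \frac{Z}{W} \right).
\]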

Extrinsic Parameters: Characterizing Camera Position. Chasles's theorem: any motion of a solid body can be composed of a translation and a rotation.

3D Rotation Matrices

3D Rotation Matrices cont'd. Euler's theorem: an arbitrary rotation can be described by 3 rotation parameters, for example as a composition of elementary rotations about the coordinate axes, $R = R_z(\gamma)\,R_y(\beta)\,R_x(\alpha)$.
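The elementary rotations in that decomposition have the standard forms (restated here; the matrices on the slide are images in the original):

\[
R_x(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix},\quad
R_y(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix},\quad
R_z(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\]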

Rodrigues' Formula. Take any rotation axis $\mathbf{a}$ (a unit vector) and angle $\theta$: what is the rotation matrix? With $A = [\mathbf{a}]_\times$ the skew-symmetric matrix of $\mathbf{a}$, $R = e^{A\theta} = I + \sin\theta\,A + (1-\cos\theta)\,A^{2}$.
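A minimal numerical sketch of this formula (my own illustration, assuming numpy is available; not code from the lecture):

import numpy as np

def skew(a):
    """Cross-product (skew-symmetric) matrix [a]_x of a 3-vector a."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

def rodrigues(axis, theta):
    """Rotation matrix from an axis and angle via R = I + sin(t) A + (1 - cos(t)) A^2."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)          # make sure the axis is a unit vector
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

# Example: a 90-degree rotation about the z-axis sends the x-axis to the y-axis.
R = rodrigues([0, 0, 1], np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # -> [0. 1. 0.]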

Rotations can be represented as points in space (with care). Encode a rotation as a single 3-vector: the vector's direction gives the rotation axis and its length gives the rotation angle. Useful for generating random rotations, understanding angular errors, characterizing angular position, etc. Problems: the representation is not unique, and it is not commutative.

Other Properties of Rotations. NOT commutative: $R_1 R_2 \neq R_2 R_1$ in general.

Rotations. To form a rotation matrix, you can plug in, as its columns, the coordinates that the basis vectors are sent to. For example, the unit x-vector goes to $\mathbf{x}'$, which becomes the first column of the matrix.
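In symbols (a standard restatement; the slide's drawing is an image): if the rotation sends the basis vectors to $\mathbf{x}'$, $\mathbf{y}'$, $\mathbf{z}'$, those images are the columns of $R$. For a rotation by $\theta$ about the $z$-axis,

\[
R = \begin{bmatrix} \mathbf{x}' & \mathbf{y}' & \mathbf{z}' \end{bmatrix}, \qquad
\mathbf{x}' = \begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix},\;
\mathbf{y}' = \begin{bmatrix} -\sin\theta \\ \cos\theta \\ 0 \end{bmatrix},\;
\mathbf{z}' = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]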

Other Properties of Rotations. Inverse: $R^{-1} = R^{T}$; the rows and columns are orthonormal, $\mathbf{r}_i^{T}\mathbf{r}_j = 0$ if $i \neq j$ and $\mathbf{r}_i^{T}\mathbf{r}_i = 1$. Determinant: $\det(R) = 1$. The effect of a coordinate rotation on a function: with $\mathbf{x}' = R\,\mathbf{x}$, the rotated function satisfies $F'(\mathbf{x}') = F(R^{-1}\mathbf{x}')$.
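A quick numerical check of these properties (an illustrative sketch assuming numpy; not from the slides):

import numpy as np

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(theta):
    """Rotation by theta about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

R1, R2 = rot_z(0.7), rot_x(0.3)

print(np.allclose(np.linalg.inv(R1), R1.T))   # inverse equals transpose -> True
print(np.isclose(np.linalg.det(R1), 1.0))     # determinant is +1 -> True
print(np.allclose(R1 @ R2, R2 @ R1))          # composition is NOT commutative -> False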

Extrinsic Parameters. $\mathbf{p}' = R\,\mathbf{p} + \mathbf{t}$, where $R$ is the rotation matrix and $\mathbf{t}$ is the translation vector. In homogeneous coordinates, $\mathbf{p}' = R\,\mathbf{p} + \mathbf{t}$ becomes a single matrix multiplication.
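Written out, the homogeneous form is the standard 4×4 rigid-body matrix (the matrix on the slide is an image in the original):

\[
\begin{bmatrix} \mathbf{p}' \\ 1 \end{bmatrix} =
\begin{bmatrix} R & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix}
\begin{bmatrix} \mathbf{p} \\ 1 \end{bmatrix}.
\]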

Intrinsic Parameters: differential scaling, camera origin offset.
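These effects are usually collected into the calibration matrix $K$; its standard form (a background restatement, not copied from the slide) is

\[
K = \begin{bmatrix} \alpha_x & s & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},
\]

where $\alpha_x$ and $\alpha_y$ are the differential scale factors (focal length in pixel units along each image axis), $(u_0, v_0)$ is the camera origin offset (principal point), and $s$ is an optional skew term that is often taken to be zero.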

2D Example: House Points. Each point of the house is written in homogeneous coordinates, $\mathbf{p} = [x,\; y,\; 1]^{T}$, and the slide applies a rotation by $-90°$, a scaling by 2, and a translation to all of the points.
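A small sketch of that 2D pipeline in homogeneous coordinates (illustrative only; the house coordinates and the translation vector below are made-up values, not the slide's):

import numpy as np

theta = -np.pi / 2                       # rotation by -90 degrees
c, s = np.cos(theta), np.sin(theta)

R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])          # 2D rotation in homogeneous form
S = np.diag([2.0, 2.0, 1.0])             # uniform scaling by 2
T = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])          # translation by (4, 3) -- hypothetical values

# A few "house" points as homogeneous columns (hypothetical coordinates).
house = np.array([[0.0, 1.0, 1.0, 0.5, 0.0],
                  [0.0, 0.0, 1.0, 1.5, 1.0],
                  [1.0, 1.0, 1.0, 1.0, 1.0]])

M = T @ S @ R                             # composite transform: rotate, then scale, then translate
print(M @ house)                          # transformed points, still in homogeneous form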

The Whole (Linear) Transformation. Final image coordinates: $u = U/W$, $v = V/W$.
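Putting the pieces together, the whole linear transformation from a world point to image coordinates is (standard form; the slide's matrices are images in the original)

\[
\begin{bmatrix} U \\ V \\ W \end{bmatrix} =
K \begin{bmatrix} R & \mathbf{t} \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad
u = \frac{U}{W}, \quad v = \frac{V}{W}.
\]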

Non-linear distortions (not handled by our treatment)
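For context (not from the slides): the most common non-linear effect is radial lens distortion, often modeled with a low-order polynomial in the distance from the image center,

\[
x_d = x\,(1 + k_1 r^2 + k_2 r^4), \qquad
y_d = y\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x^2 + y^2 .
\]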

Camera Calibration. You need to know something beyond what you get out of the camera:
– Known world points (a calibration object): traditional camera calibration; linear and non-linear parameter estimation
– Known camera motion: camera attached to a robot, executing several different motions
– Multiple cameras

Classical Calibration

Camera Calibration. Take a known set of points, typically on 3 orthogonal planes. Treat a point on the object as the world origin. The object points $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \ldots$ project to image points $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \ldots$

Calibration Patterns

Classical Calibration. Put the set of known object points $\mathbf{x}_i$ into a matrix, and the projected image points $\mathbf{u}_i$ into another matrix. A somewhat odd derivation of the least-squares solution lets you solve for the transformation matrix, and then extract the extrinsic and intrinsic parameters from it. Note: this is only for instructional purposes; it does not work as a real procedure.
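A sketch of what such a linear fit looks like: a standard DLT-style least-squares estimate of the 3×4 projection matrix (my own illustration, with the same "instructional purposes only" caveat as the slide; real procedures normalize the data and refine nonlinearly):

import numpy as np

def estimate_projection_matrix(X, u):
    """Least-squares fit of a 3x4 projection matrix P from 3D points X (Nx3)
    and their 2D image projections u (Nx2), via the homogeneous linear system."""
    rows = []
    for (x, y, z), (px, py) in zip(X, u):
        Xh = [x, y, z, 1.0]
        # Each correspondence gives two linear equations in the 12 entries of P.
        rows.append([*Xh, 0, 0, 0, 0, *(-px * np.array(Xh))])
        rows.append([0, 0, 0, 0, *Xh, *(-py * np.array(Xh))])
    A = np.array(rows, dtype=float)
    # Minimise ||A p|| subject to ||p|| = 1: take the right singular vector
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

# At least 6 non-degenerate 3D-2D correspondences are needed for the 11 DoF of P.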

Real Calibration. Real calibration procedures look quite different (a practical sketch follows the list):
– Weight points by correspondence quality
– Nonlinear optimization
– Linearizations
– Non-linear distortions
– Etc.
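In practice this is usually done with an off-the-shelf toolbox; a minimal sketch using OpenCV's chessboard-based calibration (my own illustration of such a procedure, not code from the lecture; the file names and board size are hypothetical):

import cv2
import glob
import numpy as np

board = (9, 6)                                    # inner corners of the (hypothetical) chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # known world points on the plane

obj_points, img_points = [], []
for fname in glob.glob("calib_*.jpg"):            # hypothetical image file names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Jointly estimates the intrinsic matrix, distortion coefficients, and per-view
# extrinsics by nonlinear optimization over all detected corners.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)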

Camera Motion

Calibration Example (Zhang, Microsoft). 8 point matches are picked manually; a motion algorithm is then used to calibrate the camera.

Applications. Image-based rendering: light fields -- Hanrahan (Stanford).

Virtualized Reality

Projector-based VR UNC Chapel Hill

Shader Lamps

Shape recovery without calibration (Fitzgibbon, Zisserman). Fully automatic procedure: converting the images to 3D models is a black-box filter: video in, VRML out.
* We don't require that the motion be regular: the angle between views can vary, and it doesn't have to be known. Recovery of the angle is automatic, and accuracy is about 40 millidegrees standard deviation. In golfing terms, that's an even chance of a hole in one.
* We don't use any calibration targets: features on the objects themselves are used to determine where the camera is relative to the turntable. Aside from being easier, this means that there is no problem with the setup changing between calibration and acquisition, and that anyone can use the software without special equipment. For example, this dinosaur sequence was supplied to us by the University of Hannover without any other information. (Actually, we do have the ground-truth angles so that we can make the accuracy claims above, but of course these are not used in the reconstruction.)