Calibration Overview Jason Rebello 10/07/2017


Calibration Overview
Jason Rebello, 10/07/2017
No moose was hurt in the making of this presentation.
Jason Rebello | Waterloo Autonomous Vehicles Lab

Calibration | Why do we need calibration?
Data from multiple complementary sensors leads to increased accuracy and robustness. Sensors need to be spatially and temporally calibrated. Calibration makes it possible to transform measurements from the different sensors into a common reference frame.
Source: Arun Das

Required Calibrations
Camera intrinsic and extrinsic
IMU to Camera
Lidar to GPS
Lidar to Camera
Source: Arun Das


Camera Intrinsics Calibration

Image Formation | Projection pipeline
Forward projection: (a) world to camera coordinates via the extrinsics R, t; (b), (c) camera to image to pixel coordinates via the intrinsics (focal length, ox, oy).
Source: Robert Collins, CSE486

Image Formation | Camera Parameters
(a) Extrinsic transformation (rotation + translation): transforms points from the world to the camera coordinate frame. Homogeneous coordinates allow the transformation to be written as a single matrix multiplication.
(b), (c) Perspective projection: from camera coordinates to pixel coordinates.
Source: Robert Collins, CSE486
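The pipeline on this slide can be sketched numerically. This is a minimal illustration, not code from the presentation; the function names and the sample focal length are illustrative.

```python
import numpy as np

def make_K(f, ox, oy):
    # Intrinsic matrix: focal length f (in pixels), principal point (ox, oy)
    return np.array([[f, 0.0, ox],
                     [0.0, f, oy],
                     [0.0, 0.0, 1.0]])

def project_point(K, R, t, Xw):
    # (a) world -> camera frame via the extrinsics R, t
    Xc = R @ np.asarray(Xw, dtype=float) + t
    # (b), (c) perspective projection, then homogeneous -> pixel coordinates
    u = K @ Xc
    return u[:2] / u[2]
```

With an identity pose, a point on the optical axis lands at the principal point, which is a quick sanity check on the convention.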

Image Formation | Distortions
Radial distortion (Source: Scaramuzza)
Tangential distortion (Source: MathWorks)
Nikon lens system (Source: Aaron Bobik)
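The radial and tangential terms can be sketched with the usual Brown-Conrady style model. This is a hedged sketch: the coefficient names k1, k2, p1, p2 follow the common convention and are not taken from the slide.

```python
def distort(x, y, k1, k2, p1, p2):
    # Applies distortion to normalized image coordinates (x, y):
    # radial terms scale with r^2; tangential terms model a lens that is
    # not perfectly parallel to the sensor. Coefficients are illustrative.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Note that the image centre is a fixed point of the model, and distortion grows towards the image corners where r^2 is largest.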

Intrinsic Parameter Calibration | Camera Calibration using a 2D Checkerboard
Known size (30 mm squares) and structure (6 x 9), so corners are easy to identify in the distorted and corrected images, giving the pixel coordinates of each corner.
Where do I set my world coordinate frame? If the world coordinate frame is set in a smart way, the 3D coordinates of all those corner points are known.

Why use a checkerboard? | Set the world coordinate system to one corner of the checkerboard. All points then lie in the X,Y plane with Z = 0.
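Since Z = 0, the projection equation collapses to a homography. A standard reconstruction of the relation behind this slide (the original equation image did not survive the transcript), with r1, r2, r3 the columns of the rotation matrix:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix}
  \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix}
= \underbrace{K \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}}_{H}
  \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
```

The r3 column multiplies Z = 0 and drops out, leaving a 3x3 homography H between checkerboard plane and image.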

Projection |

Homography |

Homography Solution |

Homography constraints |

Intrinsic Parameters | Find B, then recover K through the Cholesky decomposition: chol(B) = AAT with A = K-T.

Solve for b | Construct a system of linear equations Vb = 0.
V = [ vT12 (vT11 - vT22) ]T ... a 2x6 matrix for 1 image,
where vij = [ h1ih1j , h1ih2j+h2ih1j , h3ih1j+h1ih3j , h2ih2j , h3ih2j+h2ih3j , h3ih3j ]T.
For n images, V is a 2n x 6 matrix.
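Stacking these constraints can be sketched with NumPy. The vij ordering below follows the slide (so b packs the entries of B as B11, B12, B13, B22, B23, B33), and b is recovered as the right singular vector of V with the smallest singular value; this is an illustrative sketch of Zhang's method, not code from the presentation.

```python
import numpy as np

def v_ij(H, i, j):
    # v_ij built from columns i and j of the homography H, ordered to match
    # b = (B11, B12, B13, B22, B23, B33) as on the slide
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[1] * hj[1],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def build_V(homographies):
    # Two rows per image: h1^T B h2 = 0 and h1^T B h1 - h2^T B h2 = 0
    rows = []
    for H in homographies:
        rows.append(v_ij(H, 0, 1))
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    return np.vstack(rows)

def solve_b(V):
    # Null vector of V: right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1]
```

With exact synthetic homographies H = K [r1 r2 t], the recovered b is parallel (up to sign) to the entries of K-TK-1.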

Intrinsic Calibration |

Intrinsic Parameters |


How do I calibrate a camera? |
Print a target checkerboard (several are available online). Note the dimensions of the target and the size of a square.
Take multiple images, moving the target to different locations in the image.
Calibrate using OpenCV or Matlab. A reprojection error below 0.2 is good.
Example: Ximea camera, 5.3 µm / pixel; Edmund Optics lens, 3.5 mm. Focal length in pixels = 3.5 / (5.3x10-3) = 660.37.
Question: What is the problem with this sort of calibration?
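The focal-length arithmetic on this slide checks out (the slide's 660.37 truncates rather than rounds):

```python
# Focal length in pixels from the slide's numbers: a 3.5 mm lens on a
# sensor with 5.3 micrometre pixels (pixel pitch expressed in mm).
lens_focal_length_mm = 3.5
pixel_size_mm = 5.3e-3

focal_length_px = lens_focal_length_mm / pixel_size_mm
print(round(focal_length_px, 2))  # 660.38
```

This predicted value is a useful sanity check against the focal length reported by the calibration toolbox.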

Camera Extrinsics Calibration

Extrinsic Calibration | Stereo Vision
Rotation and translation between the two cameras.

Advantages of Stereo Vision |
Recover depth.
Limit the search space in the second image for point correspondence.
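Depth recovery from a rectified stereo pair follows the standard relation Z = f B / d. A minimal sketch (the relation is standard, but the numbers in the example are made up and not from the slides):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Z = f * B / d for a rectified pair: f is the focal length in pixels,
    # B the baseline in metres, d the disparity in pixels.
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(66.0, 660.0, 0.12))
```

Depth resolution degrades with distance because disparity falls off as 1/Z, which is one reason the calibration (f and the baseline) must be accurate.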

PnP Algorithm |
Given: known 3D points in the world frame (checkerboard points), their pixel locations on the image plane, and the intrinsic parameters of the camera.
Estimate: the transformation from the world to the camera.
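A minimal direct linear transform (DLT) sketch of the resectioning idea behind pose estimation from 3D-2D correspondences; this is an illustration, not the specific PnP solver referenced on the slide, and real pipelines would use a robust implementation (e.g. OpenCV's solvePnP).

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    # Each correspondence contributes two rows to a homogeneous system
    # A p = 0; the 3x4 projection matrix (defined up to scale) is the
    # right singular vector with the smallest singular value.
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    # Project a 3D point with a 3x4 projection matrix
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

With the intrinsics K known, the estimated projection can be decomposed as K [R | t] to recover the world-to-camera transformation the slide asks for.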

Extrinsic Calibration |
Given 3D points in the world frame Pw and corresponding 2D pixel locations zs and zm in both cameras:
1. Estimate the transformations from the world to both cameras using the PnP algorithm: Ts:w and Tm:w
2. Transform the 3D points from the world to camera Fs: Ps = Ts:w Pw
3. Transform the 3D points from camera Fs to camera Fm via the extrinsic transformation: Pm = Tm:s Ps
4. Project the points onto camera Fm: z'm = π(Pm)
Reprojection error: Σimages Σpoints d(z'm, zm)² + d(z's, zs)²
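The reprojection-error objective on this slide can be sketched for a single camera; the variable names and the 4x4 rigid-transform convention are illustrative assumptions, not taken from the presentation.

```python
import numpy as np

def project(K, T, points_w):
    # Rigid transform T (4x4) into the camera frame, then pinhole projection
    points_w = np.asarray(points_w, dtype=float)
    Ph = np.hstack([points_w, np.ones((len(points_w), 1))])
    Pc = (T @ Ph.T).T[:, :3]
    uv = (K @ Pc.T).T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_error(K, T, points_w, measured_px):
    # Sum over points of the squared pixel distance d(z', z)^2
    d = project(K, T, points_w) - np.asarray(measured_px, dtype=float)
    return float(np.sum(d ** 2))
```

Summing this over both cameras and all images gives the objective on the slide, which the extrinsic transformation Tm:s is optimized to minimize.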

How should I calibrate the extrinsics? |
Print a target checkerboard (several are available online). Note the dimensions of the target and the size of a square.
Take multiple images in both cameras at the same time, moving the target to different locations in the image.
Calibrate using OpenCV or Matlab. A reprojection error below 0.4 is good.

End of Calibration I |

Lidar to Camera Calibration | Target
How does this calibration differ from the extrinsic camera calibration? What sort of target should be used?
Applications: colouring the point cloud (the camera gives a narrow field of view, the lidar 360°), and detection of cars in images and lidar point clouds.

HDL-32 vs HDL-64 | 32-beam vs 64-beam lidar.

Lidar response problems |

Lidar Camera Calibration |
Given 3D points in the world frame Pw and corresponding 2D pixel locations zc in the camera:
1. Estimate the transformation from the world to the camera using the PnP algorithm: Tc:w
2. Input an initial (approximate) transformation from lidar to camera: Tc:l
3. Transform the 3D points from the lidar frame Fl to the world frame Fw: Pw = inv(Tc:w) Tc:l Pl
4. Determine which points Pl lie on the target within some threshold, and project those points onto the target plane.
5. Fit the 'generated' point cloud to the detected point cloud and get the end-points; determine the end points of the target in the image.
Reprojection error: Σimages Σpoints d(π(Tc:l Pl), zc)²

Calibration Results | Before calibration vs. after calibration.

Alternative methods |
a) Martin Velas et al., Calibration of RGB Camera With Velodyne LiDAR
b) Gaurav Pandey et al., Automatic Extrinsic Calibration of Vision and Lidar by Maximizing Mutual Information
c) Jesse Levinson et al., Automatic Online Calibration of Cameras and Lasers
The mutual information (MI) between two random variables X and Y is a measure of the statistical dependence between them. Here X is the laser reflectivity value of a 3D point and Y is the grayscale value of the image pixel to which that 3D point projects. The marginal and joint probabilities of X and Y can be obtained from a kernel density estimate (KDE) of the normalized marginal and joint histograms of Xi and Yi. Once we have an estimate of the probability distributions, the MI of the two random variables can be written as a function of the extrinsic calibration parameters (R, t), thereby formulating an objective function.
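The objective described in this paragraph can be written explicitly; the following is a standard form consistent with the description (the slide's own equation image did not survive the transcript):

```latex
\mathrm{MI}(X, Y) = H(X) + H(Y) - H(X, Y), \qquad
(\hat{R}, \hat{t}) = \arg\max_{R,\, t} \; \mathrm{MI}\bigl(X, Y;\, R, t\bigr)
```

where H(·) denotes the entropy computed from the KDE estimates: a correct extrinsic calibration maximizes the statistical dependence between laser reflectivity and image intensity.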

Questions |

Lidar to GPS |

Hand-Eye Calibration |
Setup: a motion capture system tracks a reflective marker; a static camera views a static fiducial target. The static transforms in the loop are the quantities to estimate.

Hand-Eye Calibration |
TVM : Marker to Vicon (motion tracking)
TMC : Camera to Marker
TVF : Fiducial Target to Vicon
TFC : Camera to Fiducial Target
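Chaining these four transforms gives the loop-closure constraint that hand-eye calibration exploits (a reconstruction consistent with the frame names above; the slide's diagram did not survive the transcript). Composing camera-to-Vicon two ways:

```latex
T_{VM}\, T_{MC} = T_{VF}\, T_{FC}
\qquad\Longrightarrow\qquad
T_{MC} = T_{VM}^{-1}\, T_{VF}\, T_{FC}
```

TVM and TFC are measured (tracking and target detection), so accumulating this constraint over many poses lets the static transforms TMC and TVF be estimated jointly.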

Hand-Eye Calibration on the Car |
Calibration with a motion capture system and fiducial target is similar to calibrating Lidar (SLAM) to GPS.
Planar motion leads to observability issues (z, roll, pitch).
Need to find an area that allows for sufficient excitation (car motion).

IMU to Camera Calibration |
Frames involved: World, IMU, Camera.

IMU to Camera Calibration |
Quantities estimated: gravity direction expressed in the world frame; the transformation between the camera and the IMU; the offset between camera time and IMU time; the pose of the IMU; accelerometer and gyroscope biases.
Assumptions made: the camera intrinsics are known; the IMU noise and bias models are known; there is a guess for gravity in the world frame; there is a guess for the calibration matrix; the geometry of the calibration pattern is known; the data association between image points and world points is known.

Questions |

IMU to Camera Calibration |
Time-varying states are represented as a weighted sum of a finite number of known basis functions; phi(t) is assumed to be known.
yj is a measurement that arrived with timestamp tj, h(.) is a measurement model that produces a predicted measurement from x(.), and d is the unknown time offset.
The analytical Jacobian of the error term, needed for nonlinear least squares, is derived by linearizing about a nominal value of d with respect to small changes in d.
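Written out (a reconstruction consistent with the description above; the slide's equations did not survive the transcript), the basis-function state, the time-offset error term, and the linearization in d are:

```latex
x(t) = \sum_i \phi_i(t)\, c_i, \qquad
e_j = y_j - h\bigl(x(t_j + d)\bigr), \qquad
\frac{\partial e_j}{\partial d}
  \approx -\,\frac{\partial h}{\partial x}\, \dot{x}(t_j + d)
```

Because x(t) is a smooth combination of known basis functions, its time derivative is available analytically, which is what makes the Jacobian with respect to d tractable.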

IMU Camera Calibration |
Measurements: accelerometer and gyroscope measurements (at time k); pixel locations (at time t + d), with h(.) the projection model and n the measurement noise; J is the number of images; IMU biases.
Error terms are associated with the measurements and constructed as the difference between the measurement and the predicted measurement for the current state estimate.