Slide 1: Surveillance and Security
Slide 2: Project
An industrial site monitored by two cameras.
Slide 3: Camera Calibration (Thomas Flamant)
Slide 4: Goal
Camera calibration → camera matrix → reconstruction → get_mapped.m (WP3)
Slide 5: Camera Calibration (camera resectioning)
Camera calibration is the process of finding the true parameters of the camera that produced a given photograph or video. Two key concepts: the pinhole camera model, and the camera matrix, which represents the camera parameters.
Slide 6: Pinhole camera model
Describes the mathematical relationship between the coordinates of a 3D point and its projection onto the image plane of an ideal pinhole camera.
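To make the pinhole relation concrete, here is a minimal MATLAB sketch of projecting a single camera-frame point by perspective division; the focal length and principal point are illustrative values, not the project's calibration results.

% Pinhole projection of one point given in camera coordinates (illustrative values)
f  = 800;                     % focal length in pixels (assumed)
cx = 320;  cy = 240;          % principal point (assumed)
P  = [0.5; 0.2; 4.0];         % 3D point in camera coordinates
u  = f * P(1) / P(3) + cx;    % projected x_pixel
v  = f * P(2) / P(3) + cy;    % projected y_pixel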
Slide 7: Camera matrix
Describes the mapping of a pinhole camera from 3D points in the world to 2D points in an image:
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \underbrace{\;???\;}_{\text{intrinsic parameters}} \cdot \underbrace{\;???\;}_{\text{extrinsic parameters}} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
Slide 8: Intrinsic parameters (intrinsic calibration)
Used to convert between image-plane coordinates and the actual pixel coordinates. They depend on the camera hardware and consist of the focal length ($f_x$, $f_y$), the camera centre ($c_x$, $c_y$), and the lens distortion (barrel, pincushion, mustache):
$$A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
(written here with the homogeneous third row so that the matrix products on the following slides are well-defined).
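As a small sketch, the intrinsic matrix can be assembled directly from these four numbers; the values below are placeholders, not the intrinsics estimated later with the Bouguet toolbox.

% Intrinsic matrix A built from focal lengths and camera centre (placeholder values)
fx = 850;  fy = 845;          % focal lengths in pixels (assumed)
cx = 322;  cy = 247;          % camera centre in pixels (assumed)
A = [fx  0  cx;
      0  fy cy;
      0   0  1];              % homogeneous third row makes A a 3x3 mapping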
Slide 9: Extrinsic parameters (extrinsic calibration)
Used to describe the relative position and orientation of the camera with respect to a world coordinate system. They do not depend on the camera hardware and consist of a 3x3 rotation matrix $R$ and a 3x1 translation vector $t$:
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \qquad t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$
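A sketch of assembling [R | t] for a camera rotated about the vertical axis and shifted from the world origin; the angle and offsets are assumptions for illustration only.

% Extrinsic parameters: rotation about the z-axis plus a translation (illustrative)
theta = deg2rad(30);                  % assumed camera yaw
R = [cos(theta) -sin(theta) 0;
     sin(theta)  cos(theta) 0;
     0           0          1];       % 3x3 rotation matrix
t  = [1.2; -0.4; 5.0];                % 3x1 translation vector (assumed)
Rt = [R t];                           % 3x4 extrinsic matrix [R | t]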
Slide 10: Camera matrix
The same mapping again, with the two unknown factors (intrinsic and extrinsic parameters) still to be filled in:
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \; ??? \cdot ??? \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
Slide 11: Camera matrix
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = A \cdot [\,R \mid t\,] \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
with $A$ the intrinsic parameters and $[R \mid t]$ the extrinsic parameters.
Slide 12: Camera matrix
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
(intrinsic parameters times extrinsic parameters; the equality holds up to the homogeneous scale factor).
Slide 13: Camera matrix
The same mapping, with each factor obtained from its own tool:
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{\text{intrinsic (Bouguet)}} \cdot \underbrace{\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}}_{\text{extrinsic (POSIT)}} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
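Putting the two factors together, a minimal MATLAB sketch of projecting one world point to pixel coordinates; A, R and t are placeholder values standing in for the Bouguet and POSIT results, and the division by the third component handles the homogeneous scale.

% Project a world point with the full camera matrix A*[R|t] (placeholder values)
A = [850 0 322; 0 845 247; 0 0 1];    % intrinsics (assumed)
R = eye(3);  t = [1.2; -0.4; 5.0];    % extrinsics (assumed)
Pw = [2.0; 1.0; 0.0; 1.0];            % homogeneous world point [x; y; z; 1]
p  = A * [R t] * Pw;                  % 3x1 homogeneous pixel vector
x_pixel = p(1) / p(3);                % perspective division recovers the
y_pixel = p(2) / p(3);                % actual pixel coordinates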
Slide 14: Camera Calibration Toolbox for Matlab (Bouguet)
Input: 20+ images of a planar checkerboard. Output: the intrinsic parameters. Corner extraction can be done automatically with the RADOCC Camera Calibration Toolbox (CornerFinder.m).
Slide 15: Camera Calibration Toolbox for Matlab (Bouguet)
Input: 30 images of each camera (checkerboard squares = 28 mm). [Calibration images for Camera 1 and Camera 2]
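The slides use the Bouguet toolbox GUI for this step; as a rough equivalent, the same intrinsic calibration can be sketched with MATLAB's Computer Vision Toolbox. The file names below are assumptions, and the property holding the intrinsic matrix varies between toolbox releases.

% Intrinsic calibration from checkerboard images (alternative route, not the Bouguet GUI)
files = arrayfun(@(k) sprintf('cam1_%02d.jpg', k), 1:30, 'UniformOutput', false);
[imagePoints, boardSize] = detectCheckerboardPoints(files);   % automatic corner extraction
worldPoints  = generateCheckerboardPoints(boardSize, 28);     % square size = 28 mm
cameraParams = estimateCameraParameters(imagePoints, worldPoints);
A = cameraParams.IntrinsicMatrix';    % transpose: MATLAB stores the row-vector convention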
Slide 16: Camera Calibration Toolbox for Matlab (Bouguet)
Input: 30 images of each camera (checkerboard squares = 28 mm). [More calibration images for Camera 1 and Camera 2]
Slide 17: Camera Calibration Toolbox for Matlab (Bouguet)
Output: the intrinsic parameters of each camera, $A_{cam1} = \dots$ (Camera 1) and $A_{cam2} = \dots$ (Camera 2); the numeric matrices are on the original slide.
Slide 18: POSIT
Description: POSIT is a fast iterative algorithm for finding the pose (rotation and translation) of an object or scene with respect to a camera when points of the object are given in some object coordinate system, and these points are visible and recognizable in the camera image, so that corresponding image points and object points can be listed in the same order.
[rotation, translation] = Posit(imagePoints, objectPoints, focalLength, center)
Slide 19: POSIT input
[rotation, translation] = Posit(imagePoints, objectPoints, focalLength, center)
nbPts: 4 or more noncoplanar feature points of the object
imagePoints: matrix of size nbPts x 2
objectPoints: matrix of size nbPts x 3
focalLength: focal length of the camera in pixels
center: row vector with the elements of the image center
[Camera 2 image]
Slide 20: POSIT input and output
[rotation, translation] = Posit(imagePoints, objectPoints, focalLength, center)
Input: as on the previous slide.
Output:
rotation: 3 x 3 rotation matrix of the scene with respect to the camera
translation: 3 x 1 translation vector from the projection center of the camera to the FIRST point in the list of object points
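A usage sketch of the call described above; the four object points and their image positions are made up for illustration, and Posit.m (with exactly this signature) is assumed to be on the MATLAB path.

% Pose estimation with POSIT (all point coordinates below are made up)
objectPoints = [  0    0    0;        % first point = reference for the translation
                500    0    0;
                  0  500    0;
                  0    0  500];       % 4 noncoplanar points in object coordinates
imagePoints  = [350 260;
                420 255;
                345 190;
                360 265];             % corresponding detections in pixels
focalLength  = 850;                   % from the intrinsic calibration (assumed)
center       = [322 247];             % image centre (assumed)
[rotation, translation] = Posit(imagePoints, objectPoints, focalLength, center);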
Slide 21: POSIT output: extrinsic parameters of each camera
Camera 1: $R_{cam1} = \dots$, $t_{cam1} = \dots$
Camera 2: $R_{cam2} = \dots$, $t_{cam2} = \dots$
(numeric rotation matrices and translation vectors are on the original slide).
Slide 22: Camera matrix (recap)
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{\text{intrinsic (Bouguet)}} \cdot \underbrace{\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}}_{\text{extrinsic (POSIT)}} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
Slide 23: Camera matrix
With the calibration results filled in, each camera has its own numeric mapping:
Camera 1: $\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = A_{cam1} \cdot [\,R_{cam1} \mid t_{cam1}\,] \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$
Camera 2: $\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = A_{cam2} \cdot [\,R_{cam2} \mid t_{cam2}\,] \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$
(numeric matrices on the original slide).
Slide 24: Goal (recap)
Camera calibration → camera matrix → reconstruction → get_mapped.m (WP3)
Slide 25: Reconstruction
3D reconstruction: triangulation. 2D reconstruction: calibration matrix.
Slide 26: 3D reconstruction (triangulation)
Epipolar geometry: when two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points.
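The constraint can be written compactly (standard epipolar geometry, stated here for completeness): corresponding image points $x$ and $x'$ in the two views satisfy
$$x'^{\top} F \, x = 0,$$
where $F$ is the fundamental matrix relating the two cameras.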
Slide 27: 3D reconstruction (triangulation)
Triangulation is the process of determining a point in 3D space given its projections onto two or more images. To solve this problem it is necessary to know the camera projection functions from 3D to 2D for the cameras involved, represented by their camera matrices.
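A minimal MATLAB sketch of linear (DLT) triangulation of one matched point from two camera matrices; the intrinsics, poses and pixel coordinates below are illustrative assumptions, not the project's values.

% Linear (DLT) triangulation of one point seen by both cameras
A  = [850 0 322; 0 845 247; 0 0 1];           % shared intrinsics (assumed)
P1 = A * [eye(3), [ 0; 0; 5]];                % camera 1 matrix (assumed pose)
P2 = A * [eye(3), [-1; 0; 5]];                % camera 2 matrix (assumed pose)
x1 = 700; y1 = 400;  x2 = 530; y2 = 400;      % matched pixel coordinates (made up)
D  = [x1 * P1(3,:) - P1(1,:);
      y1 * P1(3,:) - P1(2,:);
      x2 * P2(3,:) - P2(1,:);
      y2 * P2(3,:) - P2(2,:)];                % one linear constraint per row
[~, ~, V] = svd(D);
Xh = V(:, end);                               % homogeneous solution of D*X = 0
Xworld = Xh(1:3) / Xh(4);                     % Euclidean 3D coordinates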
Slide 28: 2D reconstruction (calibration matrix)
3 dimensions? A single camera gives two measurements ($x_{pixel}$, $y_{pixel}$) for three unknown world coordinates:
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ z_{world} \\ 1 \end{bmatrix}$$
[Camera 2 image]
Slide 29: 2D reconstruction (calibration matrix)
2 dimensions! For points in the world plane $z_{world} = 0$ (the ground plane), only two world coordinates remain unknown, and the equation above simplifies as shown on the next slide.
[Camera 2 image]
Slide 30: 2D reconstruction (calibration matrix)
2 dimensions! With $z_{world} = 0$ the third column of the extrinsic matrix drops out:
$$\begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{11} & r_{12} & t_1 \\ r_{21} & r_{22} & t_2 \\ r_{31} & r_{32} & t_3 \end{bmatrix} \cdot \begin{bmatrix} x_{world} \\ y_{world} \\ 1 \end{bmatrix}$$
[Camera 2 image]
Slide 31: 2D reconstruction (calibration matrix)
2 dimensions! The combined 3x3 matrix can be inverted to map pixels back to world coordinates (up to the homogeneous scale factor):
$$\begin{bmatrix} x_{world} \\ y_{world} \\ 1 \end{bmatrix} = \left( \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{11} & r_{12} & t_1 \\ r_{21} & r_{22} & t_2 \\ r_{31} & r_{32} & t_3 \end{bmatrix} \right)^{-1} \cdot \begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix}$$
[Camera 2 image]
Slide 32: 2D reconstruction (calibration matrix)
2 dimensions! This inverse mapping is what the WP3 function get_mapped.m implements:
function [x_world, y_world] = get_mapped(camera_nr, x_pixel, y_pixel)
$$\begin{bmatrix} x_{world} \\ y_{world} \\ 1 \end{bmatrix} = \left( \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{11} & r_{12} & t_1 \\ r_{21} & r_{22} & t_2 \\ r_{31} & r_{32} & t_3 \end{bmatrix} \right)^{-1} \cdot \begin{bmatrix} x_{pixel} \\ y_{pixel} \\ 1 \end{bmatrix}$$
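A minimal sketch of how get_mapped.m could implement the inverse mapping above; the per-camera A, R and t are placeholder values here and would be replaced by the Bouguet and POSIT results.

function [x_world, y_world] = get_mapped(camera_nr, x_pixel, y_pixel)
% Map a pixel of camera camera_nr to ground-plane world coordinates (z_world = 0)
switch camera_nr
    case 1
        A = [850 0 322; 0 845 247; 0 0 1];    % assumed intrinsics, camera 1
        R = eye(3);  t = [0; 0; 5];           % assumed extrinsics, camera 1
    case 2
        A = [860 0 318; 0 855 242; 0 0 1];    % assumed intrinsics, camera 2
        R = eye(3);  t = [1; 0; 5];           % assumed extrinsics, camera 2
end
H = A * [R(:,1) R(:,2) t];                    % 3x3 mapping for the z_world = 0 plane
w = H \ [x_pixel; y_pixel; 1];                % invert the mapping (up to scale)
x_world = w(1) / w(3);
y_world = w(2) / w(3);
end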
Slide 33: Motion detection (Hannes Van De Vreken)
Slide 34: Person detection (Jan Heuninck)
Slide 35: Demonstration