
1 What does calibration give?
An image line l defines a plane through the camera centre with normal n = K^T l, measured in the camera's Euclidean frame. In fact the back-projection of l is the plane P^T l ⇒ n = K^T l.

2 The image of the absolute conic
The mapping between π∞ and an image is given by the planar homography x = Hd, with H = KR.
The absolute conic, represented by I_3 within π∞ (w = 0), projects to its image, the IAC: ω = (KK^T)^-1 = K^-T K^-1.
The IAC depends only on the intrinsics, not on the camera pose.
The angle between two rays follows from ω: cos θ = (x_1^T ω x_2) / sqrt((x_1^T ω x_1)(x_2^T ω x_2)).
DIAC: ω* = KK^T; from ω one recovers K by Cholesky factorization.
The images of the circular points lie on ω (the image of the absolute conic).

3 A simple calibration device
compute H_i for each square (corners ⇒ (0,0), (1,0), (0,1), (1,1))
compute the imaged circular points H_i [1, ±i, 0]^T
fit a conic ω to the 6 imaged circular points
compute K from ω = K^-T K^-1 through Cholesky factorization (= Zhang's calibration method; see the sketch below)
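A minimal numpy sketch of this pipeline, assuming Hs holds the square homographies (three squares give the 6 imaged circular points); the sign fix on ω is an assumption to make the Cholesky step well posed:

import numpy as np

def K_from_square_homographies(Hs):
    # Each imaged circular point x = H [1, +-i, 0]^T lies on the IAC w, so the
    # real and imaginary parts of x^T w x = 0 give two linear equations per square.
    rows = []
    for H in Hs:
        u, v, w = H @ np.array([1.0, 1.0j, 0.0])
        eq = np.array([u*u, 2*u*v, v*v, 2*u*w, 2*v*w, w*w])
        rows.extend([eq.real, eq.imag])
    # Null vector of the stacked system = conic parameters (a, b, c, d, e, f)
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    a, b, c, d, e, f = Vt[-1]
    omega = np.array([[a, b, d], [b, c, e], [d, e, f]])
    if omega[0, 0] < 0:            # assumed sign fix: w must be positive definite
        omega = -omega
    # w = K^-T K^-1, so the Cholesky factor L (w = L L^T) equals K^-T
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L).T
    return K / K[2, 2]             # normalize so K[2,2] = 1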

4 Orthogonality = pole-polar w.r.t. IAC

5 The calibrating conic

6 Vanishing points

7 Vanishing lines

8

9 Orthogonality relation

10 Calibration from vanishing points and lines

11 Calibration from vanishing points and lines

12

13 Two-view geometry: epipolar geometry ⇒ F-matrix computation ⇒ structure computation ⇒ 3D reconstruction

14 Three questions:
(i) Correspondence geometry: given an image point x in the first view, how does this constrain the position of the corresponding point x' in the second image?
(ii) Camera geometry (motion): given a set of corresponding image points {x_i ↔ x'_i}, i = 1, …, n, what are the cameras P and P' for the two views?
(iii) Scene geometry (structure): given corresponding image points x_i ↔ x'_i and cameras P, P', what is the position of their pre-image X in space?

15 The epipolar geometry C,C’,x,x’ and X are coplanar

16 The epipolar geometry What if only C,C’,x are known?

17 The epipolar geometry All points on the plane π project onto l and l'

18 The epipolar geometry Family of planes π and lines l and l'
All epipolar lines intersect in the epipoles e and e'

19 The epipolar geometry epipoles e,e’
= intersection of the baseline with the image plane
= projection of the other camera centre in the image
= vanishing point of the camera motion direction
an epipolar plane = a plane containing the baseline (a 1-D family)
an epipolar line = intersection of an epipolar plane with the image plane (epipolar lines always come in corresponding pairs)

20 Example: converging cameras

21 Example: motion parallel with image plane

22 Example: forward motion

23 The fundamental matrix F
algebraic representation of epipolar geometry; we will see that the mapping x ↦ l' is a (singular) correlation, i.e. a projective mapping from points to lines, represented by the fundamental matrix F

24 The fundamental matrix F
geometric derivation: x' lies on the epipolar line l' = Fx; the map from the 2-D image plane to the 1-D family of epipolar lines has rank 2

25 The fundamental matrix F
algebraic derivation: F = [e']_x P' P^+, where P^+ is the pseudo-inverse of P and e' = P'C is the image of the first camera centre C taken by the second camera

26 The fundamental matrix F
correspondence condition: the fundamental matrix satisfies x'^T F x = 0 for any pair of corresponding points x ↔ x' in the two images

27 The fundamental matrix F
F is the unique 3x3 rank-2 matrix that satisfies x'^T F x = 0 for all x ↔ x'
Transpose: if F is the fundamental matrix for (P, P'), then F^T is the fundamental matrix for (P', P)
Epipolar lines: l' = Fx and l = F^T x'
Epipoles: e' lies on all epipolar lines l', thus e'^T F x = 0 for all x ⇒ e'^T F = 0; similarly F e = 0 ⇒ e' spans the left null space of F, e the right null space
F has 7 d.o.f.: 3x3 − 1 (homogeneous) − 1 (rank 2)
F is a correlation, a projective mapping from a point x to a line l' = Fx (not a proper correlation, i.e. not invertible)
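A small numpy sketch of the null-space properties above (the helper name is illustrative):

import numpy as np

def epipoles_from_F(F):
    # F e = 0 and e'^T F = 0: the epipoles span the right and left null spaces
    U, S, Vt = np.linalg.svd(F)
    e = Vt[-1]                  # right null vector -> epipole in image 1
    e_prime = U[:, -1]          # left null vector  -> epipole in image 2
    return e / e[2], e_prime / e_prime[2]

The epipolar line of a point x1 in the second image is then l2 = F @ x1, and a correspondence can be checked via abs(x2 @ F @ x1) being close to zero.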

28 The epipolar line geometry
l, l' epipolar lines, k a line not through e ⇒ l' = F [k]_x l and symmetrically l = F^T [k']_x l' (pick k = e, since e^T e ≠ 0)

29 Fundamental matrix for pure translation

30 Fundamental matrix for pure translation

31 Fundamental matrix for pure translation
example: the image of a point starts at x and moves towards the epipole e, faster for smaller depth Z
pure translation: F = [e]_x has only 2 d.o.f.; x'^T [e]_x x = 0 ⇒ auto-epipolar (corresponding epipolar lines coincide)

32 General motion

33 Geometric representation of F
F_s (symmetric part of F): Steiner conic, 5 d.o.f.
F_a = [x_a]_x (antisymmetric part): x_a is the pole of the line ee' w.r.t. F_s, 2 d.o.f.

34 Geometric representation of F

35 Pure planar motion: the Steiner conic F_s is degenerate (two lines)

36 Projective transformation and invariance
Derivation based purely on projective concepts ⇒ F is invariant to transformations of projective 3-space
(P, P') → F: unique; F → (P, P'): not unique ⇒ choose a canonical form

37 Projective ambiguity of cameras given F
previous slide: at least a projective ambiguity
this slide: no more than that! Show that if F is the same for (P, P') and (P~, P~'), there exists a projective transformation H so that P~ = HP and P~' = HP'
lemma: counting d.o.f., 22 − 15 = 7 (two cameras have 22 d.o.f., H removes 15, and F has 7, ok)

38 Canonical cameras given F
F corresponds to a camera pair (P, P') iff P'^T F P is skew-symmetric
possible choice: P = [I | 0], P' = [[e']_x F | e']
canonical representation: P = [I | 0], P' = [[e']_x F + e' v^T | λ e']
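A small numpy sketch of the possible choice above (skew and canonical_cameras are illustrative helper names):

import numpy as np

def skew(v):
    # [v]_x such that [v]_x u = v x u
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def canonical_cameras(F):
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]                                  # left epipole e' (e'^T F = 0)
    P = np.hstack([np.eye(3), np.zeros((3, 1))])   # P  = [I | 0]
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])  # P' = [[e']_x F | e']
    return P, P2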

39

40 PROJECTIVE RECONSTRUCTION
Computation of F_1i with RANSAC (8-point correspondence samples)
Computation of canonical cameras from F_1i
Triangulation of points in 3D: compute viewing rays from the cameras and intersect the rays associated to corresponding points
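The triangulation step can also be sketched as a linear DLT instead of explicit ray intersection; a minimal numpy version, assuming 3x4 cameras P1, P2 and inhomogeneous image points x1, x2:

import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Each view contributes two rows of A X = 0, derived from x ~ P X
    A = np.array([x1[0]*P1[2] - P1[0],
                  x1[1]*P1[2] - P1[1],
                  x2[0]*P2[2] - P2[0],
                  x2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]               # homogeneous 3D point, normalized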

41 The essential matrix: the fundamental matrix for calibrated cameras (K removed), E = K'^T F K
5 d.o.f. (3 for R; 2 for t, up to scale)
E is an essential matrix if and only if two of its singular values are equal (and the third is zero): SVD E = U diag(s, s, 0) V^T
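A minimal numpy check of that SVD characterization (the tolerance is an assumption):

import numpy as np

def is_essential(E, tol=1e-8):
    s = np.linalg.svd(E, compute_uv=False)
    # essential iff singular values are (s, s, 0) up to scale
    return abs(s[0] - s[1]) <= tol * s[0] and s[2] <= tol * s[0]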

42 Motion from E: given E = U diag(1, 1, 0) V^T, there are four solutions for the second camera, P' = [U W V^T | ±u_3] and P' = [U W^T V^T | ±u_3], with W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]] and u_3 the last column of U

43 Four possible reconstructions from E (only one solution has the reconstructed points in front of both cameras)
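A sketch enumerating the four (R, t) candidates from the SVD of E; selecting the solution with points in front of both cameras (cheirality test) is left out:

import numpy as np

def motion_from_E(E):
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    t = U[:, 2]                          # translation up to scale
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]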

44 Self-calibration

45 Motivation Avoid the explicit calibration procedure:
complex procedure
need for a calibration object
need to maintain calibration

46 Motivation Allow flexible acquisition:
no prior calibration necessary
possibility to vary intrinsics
use of archive footage

47 Constraints?
Scene constraints: parallelism, vanishing points, horizon, ...; distances, positions, angles, ... Unknown scene ⇒ no constraints
Camera extrinsics constraints: pose, orientation, ... Unknown camera motion ⇒ no constraints
Camera intrinsics constraints: focal length, principal point, aspect ratio & skew. The perspective camera model is too general ⇒ some constraints are needed

48 Constraints on intrinsic parameters
Constant, e.g. fixed camera: K_i = K_j for all views i, j
Known, e.g. rectangular pixels: s = 0; square pixels: s = 0, α_x = α_y; principal point known: (x_0, y_0) given

49 Self-calibration Upgrade from projective structure to metric structure using constraints on intrinsic camera parameters Constant intrinsics Some known intrinsics, others varying Constraints on intrincs and restricted motion (e.g. pure translation, pure rotation, planar motion) (Moons et al.´94, Hartley ´94, Armstrong ECCV´96) (Faugeras et al. ECCV´92, Hartley´93, Triggs´97, Pollefeys et al. PAMI´98, ...) (Heyden&Astrom CVPR´97, Pollefeys et al. ICCV´98,...)

50 A counting argument: to go from projective (15 d.o.f.) to metric (7 d.o.f.), at least 8 constraints are needed
The minimal sequence length n should satisfy n·#known + (n − 1)·#fixed ≥ 8
Independent of the algorithm
Assumes general motion (i.e. not critical)

51 Known internal parameters give linear constraints on ω = K^-T K^-1:
rectangular pixels (zero skew): ω_12 = 0
square pixels: ω_12 = 0 and ω_11 = ω_22

52 Same intrinsics ⇒ same image of the absolute conic
Same camera for all images ⇒ same intrinsics ⇒ same image of the absolute conic (e.g. a moving camera)
Given sufficient images there is in general only one conic that has the same image in all views, i.e. the absolute conic
This approach is called self-calibration, see later
Transfer of the IAC: ω' = (H∞)^-T ω (H∞)^-1, with H∞ the infinite homography

53 Direct metric reconstruction using ω
approach 1: calibrated reconstruction (use ω ⇒ K in each view)
approach 2: compute a projective reconstruction, back-project ω from both images; the intersection of the two cones defines Ω∞ and its support plane π∞ (in general two solutions)

54

55 Direct reconstruction using ground truth
use control points X_Ei with known coordinates to go from projective to metric (2 linear equations in H^-1 per view, 3 for two views)
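A hedged numpy sketch of such an upgrade via DLT, assuming at least five control points are available as homogeneous 4-vectors (X_proj from the projective reconstruction, X_metric their known Euclidean coordinates; names illustrative):

import numpy as np
from itertools import combinations

def upgrade_from_control_points(X_proj, X_metric):
    # X_metric ~ H X_proj: proportionality gives equations linear in the 16
    # entries of H: Xm[i]*(H Xp)[j] - Xm[j]*(H Xp)[i] = 0 for every pair (i, j)
    rows = []
    for Xp, Xm in zip(X_proj, X_metric):
        for i, j in combinations(range(4), 2):
            r = np.zeros((4, 4))
            r[j] += Xm[i] * Xp
            r[i] -= Xm[j] * Xp
            rows.append(r.ravel())
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(4, 4)    # metric point ~ H @ projective point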

56 e.g.:
compute F from x_i ↔ x'_i
compute P, P' from F
triangulate X_i from x_i ↔ x'_i
⇒ obtain the projective reconstruction (P, P', {X_i})

57 Choose the first projective camera as P_1 = [I | 0] (so R_1 = I, t_1 = 0); the metric reconstruction will be P_1M = K_1 [I | 0]. Let the unknown plane at infinity in the starting projective reconstruction be π∞ = (p^T, 1)^T. From these, the rectifying homography is H = [[K_1, 0], [-p^T K_1, 1]].

58 Therefore Ω*∞ = H diag(1, 1, 1, 0) H^T, and thus the absolute dual quadric is determined by K_1 and p.

59 Now ω*_i ~ P_i Ω*∞ P_i^T. Therefore, since ω*_i = K_i K_i^T, we obtain the self-calibration equation K_i K_i^T ~ P_i Ω*∞ P_i^T, with 8 unknowns: K_1 (5) and p (3).

60 Equations come from constraints on intrinsics:
known parameters (e.g. s = 0, or square pixels)
fixed parameters (e.g. s, or the aspect ratio)
needed views: n, with n·#known + (n − 1)·#fixed ≥ 8
solution: nonlinear algorithms (Cipolla)

61

62 Image information provided | View relations and projective objects | 3-space objects | Reconstruction ambiguity
point correspondences | F | — | projective
point correspondences including vanishing points | F, H∞ | π∞ | affine
point correspondences and internal camera calibration | ω, ω' | Ω∞ | metric

63 Uncalibrated visual odometry for ground plane motion
(joint work with Simone Gasparini)

64 Outline
Purpose of the work
Motivation
Problem formulation
Estimation of robot displacement
Ground plane to image plane transformation
Experimental results
Conclusion

65 Problem formulation
Given:
an uncalibrated camera mounted on a robot; the camera is fixed and aims at the floor
the robot moves on a planar floor (ground plane)
Determine:
an estimate of the robot motion from features observed on the floor

66 Technique involved Estimate the ground plane transformation (homography) between images taken before and after robot displacement

67 Motivations
Dead-reckoning techniques are not reliable and diverge after a few steps [Borenstein96]
Visual odometry techniques exploit cameras to recover motion; we use a single uncalibrated camera
3D reconstruction with an uncalibrated camera usually requires auto-calibration:
non-planar motion is required [Triggs98]
planar motion with different camera attitudes [Knight03]: special devices are required (e.g. PTZ cameras)
Stereo-camera approaches [Nister04, Takaoka04, Agrawal06, Cheng06]
Catadioptric-camera approaches [Bunschoten03, Corke04]: assume a single view point (difficult setup)
Our method is similar in spirit to [Wang05] and [Benhimane06], but we do not assume camera calibration

68 Problem formulation Fixed uncalibrated camera mounted on a robot
The pose of the camera w.r.t. the robot is unknown
The projective transformation between the ground plane and the image plane is a homography T (3x3)
T does not change with robot motion
T is unknown

69

70 Problem formulation Robot and camera undergo a planar motion
rotation of an unknown angle θ about an unknown vertical axis
2D rotation matrix R (3x3 in homogeneous coordinates)

71 Problem formulation Given 2 images taken before and after a robot displacement, determine:
the rotation centre C
the rotation angle θ
the unknown transformation T between the ground plane and the image plane

72 Ground reference frame
We define a ground reference frame:
O is the origin of the projected reference frame, e.g. O is the back-projection of the image point O'(0,0)
the vector connecting the origin to a point A is the unit vector along the x-axis, e.g. A is the back-projection of the image point A'(100,0)

73

74 Estimation of robot displacement
The transformation relating the two images is still a homography: H = T R T^-1
Eigenvectors of H: the images of the fixed points of the planar rotation, i.e. the imaged circular points I', J' and the image of the rotation centre

75 Since eig(H) = eig(R) up to a common scale, the angle θ can be calculated from the ratio of the imaginary and real parts of a complex eigenvalue of H; e.g. the eigenvalue associated with I' is μe^{+iθ}
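A numpy sketch of that computation (the threshold and the assumption of a genuine rotation are ours):

import numpy as np

def rotation_angle_from_H(H):
    # assumes the motion contains a genuine rotation (see the degenerate case below):
    # eig(H) = eig(R) up to a common scale, i.e. {mu, mu e^{+i th}, mu e^{-i th}}
    w = np.linalg.eigvals(H)
    real = w[np.abs(w.imag) < 1e-9][0].real    # eigenvalue of the rotation-centre image
    cplx = w[np.abs(w.imag) >= 1e-9][0]        # eigenvalue of an imaged circular point
    return abs(np.angle(cplx / real))          # theta = atan2(Im, Re) after removing mu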

76 Estimation of robot displacement: degenerate motion
Pure translation: a frequent motion on a planar floor
Under pure translation the whole line at infinity is invariant

the ground plane can only be rectified modulo an affine transformation:
e.g., the orientation of the translation w.r.t. the ground reference cannot be determined
In practice:
use a motion including rotations to estimate the ground plane to image plane transformation T
use the knowledge of T while translating

78 Estimation of the transformation T
Estimating T makes it possible to determine the shape of the observed features
Four pairs of corresponding points are needed (see the sketch below):
the 2 circular points I and J
the 2 points O and A used for the ground reference
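A DLT sketch for T from the four pairs; it uses complex homogeneous coordinates so the circular points (1, ±i, 0) can be passed directly (function name and normalization are illustrative):

import numpy as np

def T_from_4_pairs(src, dst):
    # dst ~ T src: two rows per pair from the cross product dst x (T src) = 0
    rows = []
    for s, d in zip(src, dst):
        rows.append(np.concatenate([np.zeros(3), -d[2]*s, d[1]*s]))
        rows.append(np.concatenate([d[2]*s, np.zeros(3), -d[0]*s]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    T = Vt[-1].reshape(3, 3)
    # with conjugate point pairs the solution is real up to a complex scale:
    T = T / T.ravel()[np.argmax(np.abs(T))]
    return T.real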

79

80 Experimental set-up
Fixed perspective camera placed on a turntable (⇒ ground truth)
Optical distortion is negligible
Camera pointing towards the ground floor
Camera viewpoint in a generic position w.r.t. the rotation axis

81 Basic algorithm (see the sketch below):
feature (corner) extraction from the floor texture [Harris88]
feature tracking and matching to determine correspondences
corresponding features used to fit the homography H
eigendecomposition of H ⇒ rotation angle, image of the circular points, image of the rotation centre ⇒ ground plane to image plane transformation
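A rough OpenCV sketch of the first three steps, assuming two grayscale frames img1 and img2 (all parameter values are assumptions):

import cv2
import numpy as np

def fit_floor_homography(img1, img2):
    # Harris corners on the floor texture
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True)
    # track them into the second image (pyramidal Lucas-Kanade)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    ok = status.ravel() == 1
    # robustly fit the inter-image homography H
    H, inliers = cv2.findHomography(pts1[ok], pts2[ok], cv2.RANSAC, 3.0)
    return H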

82 Experiments with synthetic video
Synthetic video of a camera pointing at a plane with a "Granite" texture, rendered with POV-Ray
The camera rotates by about 15° around a known, visible reference axis
60 frames with equal rotational displacements (0.25°)
Features tracked using the Lucas-Kanade tracker provided by OpenCV [Bouguet00]
Mean error on the angle estimate: ~0.0015°
[Figure: synthetic scene showing the rotation axis and the ground plane]

83 Experiments with synthetic video

84 Experiments on sequences of images
Sequence of large displacements: image displacements of the order of 10°
Features extracted and matched [Torr93]

85 Use best matching pairs to fit a homography using RANSAC [Fischler81]
Good overall accuracy (error < 1°); larger displacements affect the matching

86

87 Experiments on sequences of images
Sequence of small displacements: image displacements of about 5°
Small rotations may lead to numerical instability ⇒ track features over three images; H_13 is fitted with the correspondences between frames 1 and 3
Good overall accuracy (error < 1°)

88 [Figure: frames 1, 2, 3; H_13 is estimated between frames 1 and 3]

89 Experiments on a mobile platform
Feature extraction and matching between images
Determination of the rotation angle and of the corresponding centre of rotation

90 Conclusions and ongoing activities
A method to estimate the odometry of a mobile robot with a single uncalibrated, fixed camera
Salient points extracted from the floor texture are used to estimate the homography H between the images taken before and after the robot motion
Once H is known, the transformation T between the ground plane and the image plane can be estimated
Ongoing work:
further experiments on real robots
reliability improvement using affine-invariant matching, e.g. SIFT [Lowe04]
real-time version running with OpenCV
Future work:
employ catadioptric cameras to exploit their large field of view (the transformation is no longer a homography)

91 References
[Borenstein96] Borenstein, J., Feng, L.: Measurement and correction of systematic odometry errors in mobile robots. IEEE Transactions on Robotics and Automation 12 (1996) 869-880
[Triggs98] Triggs, B.: Autocalibration from planar scenes. In: Proceedings of the European Conference on Computer Vision (ECCV '98), London, UK, Springer-Verlag (1998) 89-105
[Knight03] Knight, J., Zisserman, A., Reid, I.: Linear auto-calibration for ground plane motion. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR '03). Volume 1, Los Alamitos, CA, USA, IEEE Computer Society (2003) 503-510
[Nister04] Nister, D., Naroditsky, O., Bergen, J.: Visual odometry. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR '04). Volume 1, Los Alamitos, CA, USA, IEEE Computer Society (2004) 652-659
[Takaoka04] Takaoka, Y., Kida, Y., Kagami, S., Mizoguchi, H., Kanade, T.: 3D map building for a humanoid robot by using visual odometry. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. Volume 5, Los Alamitos, CA, USA, IEEE Computer Society (2004) 4444-4449
[Agrawal06] Agrawal, M., Konolige, K.: Real-time localization in outdoor environments using stereo vision and inexpensive GPS. In: Proceedings of the International Conference on Pattern Recognition (ICPR '06). Volume 3, Los Alamitos, CA, USA, IEEE Computer Society (2006) 1063-1068
[Cheng06] Cheng, Y., Maimone, M., Matthies, L.: Visual odometry on the Mars Exploration Rovers - a tool to ensure accurate driving and science imaging. IEEE Robotics and Automation Magazine 13 (2006) 54-62
[Davison03] Davison, A.: Real-time simultaneous localization and mapping with a single camera. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV '03), Los Alamitos, CA, USA, IEEE Computer Society (2003) 1403-1410
[Bunschoten03] Bunschoten, R., Krose, B.: Visual odometry from an omnidirectional vision system. In: Proceedings of the IEEE International Conference on Robotics and Automation. Volume 1, Los Alamitos, CA, USA, IEEE Computer Society (2003) 577-583
[Corke04] Corke, P., Strelow, D., Singh, S.: Omnidirectional visual odometry for a planetary rover. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Volume 4, Los Alamitos, CA, USA, IEEE Computer Society (2004) 4007-4012
[Wang05] Wang, H., Yuan, K., Zou, W., Zhou, Q.: Visual odometry based on locally planar ground assumption. In: Proceedings of the IEEE International Conference on Information Acquisition (2005) 6 pp.
[Benhimane06] Benhimane, S., Malis, E.: Homography-based 2D visual servoing. In: Proceedings of the IEEE International Conference on Robotics and Automation, Los Alamitos, CA, USA, IEEE Computer Society (2006) 2397-2402
[Bouguet00] Bouguet, J.-Y.: Pyramidal implementation of the Lucas Kanade feature tracker.
[Harris88] Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the Fourth Alvey Vision Conference (1988) 147-152
[Torr93] Torr, P.H.S., Murray, D.W.: Outlier detection and motion segmentation. In Schenker, P.S., ed.: Sensor Fusion VI, SPIE volume 2059, Boston (1993) 432-443
[Fischler81] Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (1981) 381-395
[Kovesi04] Kovesi, P.D.: MATLAB and Octave functions for computer vision and image processing. School of Computer Science & Software Engineering, The University of Western Australia (2004)

