Overview
- Pin-hole model
- From 3D to 2D
- Camera projection
- Homogeneous coordinates
- Camera calibration
- Vanishing points & lines
- Perspective cues
- Distortions

Perspective cues (example images)

Comparing heights (figure: vanishing points)

Measuring heights (figure: heights read off against the horizon line using vanishing points; camera height marked as 27 on a scale of 10-50)

The cross ratio. What if we have no ruler in the scene to do the measuring? Vanishing points alone are not enough for this task.

The cross ratio. The cross ratio of 4 collinear points is the fundamental invariant of projective geometry. The ordering of the points can be permuted, giving 4! = 24 different orderings (but only 6 distinct values!).
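For reference, one common convention for the cross ratio of four collinear points P1, P2, P3, P4 is sketched below (the lecture's own notation and ordering were lost in the transcript, so this is a standard textbook form, not necessarily the slides' exact formula):

```latex
% Cross ratio of four collinear points (one common convention):
\mathrm{Cr}(P_1, P_2, P_3, P_4)
  \;=\; \frac{\lVert P_3 - P_1 \rVert \,\lVert P_4 - P_2 \rVert}
             {\lVert P_3 - P_2 \rVert \,\lVert P_4 - P_1 \rVert}
```

Central projection preserves this value, which is exactly what the following slides exploit.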

The cross ratio (definition; equations omitted in the transcript)

The cross ratio: the scene cross ratio equals the image cross ratio (equations omitted in the transcript)

The cross ratio (figure: measuring heights using a vanishing point and the horizon line; equations omitted in the transcript)

Overview
- Pin-hole model
- From 3D to 2D
- Camera projection
- Homogeneous coordinates
- Camera calibration
- Vanishing points & lines
- Perspective cues
- Distortions

Fronto-parallel planes. What happens to the projection of a pattern on a plane parallel to the image plane? All points on that plane are at a fixed depth z, so the pattern gets scaled by a factor of f/z, while angles and ratios of lengths and areas are preserved (see the sketch below).
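A one-line check of the f/z claim, assuming the standard pin-hole projection equations from earlier in the lecture (u = fX/Z, v = fY/Z):

```latex
% On a fronto-parallel plane every point has the same depth Z = z_0, so
(u, v) \;=\; \left(\frac{f}{z_0}\,X,\; \frac{f}{z_0}\,Y\right)
% i.e. a uniform scaling by f/z_0, which preserves angles and ratios of lengths/areas.
```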

Fronto-parallel planes (images from 1X.com)

Perspective distortions

Architectural photography

Architectural photography. When we want to photograph a tall building, it can be difficult to capture the whole building in the desired perspective. (a) Keeping the camera level, with an ordinary lens, captures only the bottom portion of the building.

Architectural photography. (b) Tilting the camera upwards captures more of the building but results in converging verticals.

Architectural photography. (c) The solution: “shifting” the lens upwards results in a picture of the entire subject.

Architectural photography. So how can we actually “shift” the lens of the camera to solve this problem?

Architectural photography. The solutions: View camera – used by photographers to control focus and the convergence of parallel lines. Image control is done by moving the front and/or rear standards.

Architectural photography. The solutions: Tilt-shift lens – a modern-camera solution to the same problem. Disadvantage: tilting around the vertical axis leaves only a very small region in which objects appear sharp.

Tilt-Shift photography (example images)

Architectural photography. The solutions: Homographic projections …

Points on the plane

Points on the plane. In many cases, such as the perspective-distortion problem, we can treat some of the points in the image as lying on a single plane in the 3D world. By choosing an appropriate coordinate system for the scene, all of these points on the plane satisfy Z = 0 (a short derivation of the resulting 3 × 3 mapping is sketched below).

Points on the plane (derivation slides; equations omitted in the transcript)
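A sketch of why setting Z = 0 collapses the projection to a 3 × 3 matrix. The 3 × 4 camera matrix is written here by its columns, P = [p1 p2 p3 p4]; this column notation is my own shorthand, not taken from the slides:

```latex
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  \;\sim\; \begin{bmatrix} p_1 & p_2 & p_3 & p_4 \end{bmatrix}
           \begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix}
  \;=\; \underbrace{\begin{bmatrix} p_1 & p_2 & p_4 \end{bmatrix}}_{H}
        \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
% The column p_3 multiplies Z = 0 and drops out, leaving a 3x3 matrix H.
```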

Planar homography. We get a 3 × 3 matrix, appropriate for the case of scene points that lie on the same plane. We call this matrix the homography matrix. Once again the overall scale factor is arbitrary and can be ignored, so only 8 unique numbers remain to be determined in the homography matrix. It can be estimated from (at least) 4 world points and their corresponding image points.

Planar homography (solving for H from the point correspondences via the pseudo-inverse; equations omitted in the transcript, but see the sketch below)
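A minimal sketch of that estimation step in NumPy, assuming the usual normalization H[2,2] = 1 so that each of the n ≥ 4 correspondences contributes two linear equations (the function and variable names are my own, not from the slides):

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Estimate a 3x3 homography (H[2,2] fixed to 1) from n >= 4
    plane-to-image point correspondences via the pseudo-inverse."""
    A, b = [], []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # u = (h1*X + h2*Y + h3) / (h7*X + h8*Y + 1), and similarly for v,
        # rearranged to be linear in the 8 unknowns h1..h8.
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y])
        b.extend([u, v])
    h = np.linalg.pinv(np.array(A, float)) @ np.array(b, float)
    return np.append(h, 1.0).reshape(3, 3)

# Example: 4 corners of a planar pattern and where they appear in the image.
world = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(12.0, 8.0), (310.5, 14.2), (298.7, 305.9), (9.3, 290.1)]
H = estimate_homography(world, image)
```

With exactly 4 correspondences the system is square and the pseudo-inverse is just the ordinary inverse; with more points it returns the least-squares fit.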

Planar homography (example images)

Demo time!

The effect of a virtual camera. Using the homography matrix to deal with perspective distortions and other problematic camera positions can be seen as re-projecting the image taken by the real handheld camera into a virtual camera: the camera from which the homography-warped result appears to have been taken. This virtual camera is positioned to give the perspective in which we actually want to see the objects in the image.

The effect of a virtual camera (figure: handheld camera vs. virtual camera)
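As an illustration of the virtual-camera idea, a sketch using OpenCV (the image file name and corner coordinates below are hypothetical, not from the lecture):

```python
import cv2
import numpy as np

# Four corners of a building facade as seen in the handheld-camera photo...
src = np.float32([[120, 80], [520, 60], [560, 700], [100, 680]])
# ...and where a fronto-parallel virtual camera should see them.
dst = np.float32([[0, 0], [450, 0], [450, 650], [0, 650]])

img = cv2.imread("facade.jpg")                 # hypothetical input image
H = cv2.getPerspectiveTransform(src, dst)      # exact homography from 4 points
virtual_view = cv2.warpPerspective(img, H, (450, 650))
cv2.imwrite("facade_virtual_view.jpg", virtual_view)
```

Converging verticals on the facade become parallel in the warped result, the same effect a shift lens achieves optically.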

Planar homography - some problems… In most interesting images, not all of the points lie on the same plane. When warping the image according to a homography matrix estimated for one specific plane, distortions may appear for those off-plane points, especially along edges in the image that connect two planes in the world.

Planar homography - more uses (example images)

More perspective distortions. A problem first pointed out by Da Vinci: when we project identical flat vertical objects standing on a line parallel to the image plane, all of them appear with the same length in the projected 2D image. But if we use identical vertical cylindrical objects instead of the flat ones, under the same conditions, we notice that the outer objects appear bigger. It is important to note that this distortion has nothing to do with lens flaws.

More perspective distortions

Lens distortions. Sometimes distortions in the image stem from an imperfect camera lens. So far all of the imaging models have assumed that the camera obeys a linear projection model, in which straight lines in the world result in straight lines in the image. Unfortunately, many wide-angle lenses have noticeable radial distortion. Such deviations from the anticipated image are most noticeable for rays that pass near the edge of the lens.

Lens distortions

Lens distortions. The coordinates in the observed image are displaced away from (barrel) or towards (pincushion) the image center by an amount proportional to their radial distance. (Figure: barrel distortion vs. pincushion distortion.)

Lens distortions

Lens distortions. Let (x, y) be the pixel coordinates obtained after perspective division (from the homogeneous coordinates, but before scaling by the focal length f and shifting by the optical centre (c_x, c_y)).

Lens distortions. Apply radial distortion according to the quartic model: x_d = x·(1 + κ1·r² + κ2·r⁴) and y_d = y·(1 + κ1·r² + κ2·r⁴), where r² = x² + y². κ1 and κ2 are called the radial distortion parameters.

Lens distortions. Compute the final pixel coordinates by applying the focal length and translating by the image center. A variety of techniques can be used to estimate the radial distortion parameters for a given lens. The simplest and most useful is to take an image of a scene with many straight lines, aligned with and near the edges of the image; the parameters can then be adjusted until all the lines in the image become straight. Another approach is to use several overlapping images and to combine the estimation of the parameters with the image alignment process, which then includes a quadratic radial distortion correction term.
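A compact sketch of the last three slides, assuming the quartic model and the pin-hole intrinsics named above (the function and variable names are my own):

```python
def distort_and_project(x, y, k1, k2, f, cx, cy):
    """Apply the quartic radial-distortion model to the coordinates (x, y)
    obtained after perspective division, then map to final pixel coordinates
    using the focal length f and the optical centre (cx, cy)."""
    r2 = x * x + y * y                     # squared radial distance
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # quartic radial factor
    x_d, y_d = x * scale, y * scale        # radially distorted coordinates
    return f * x_d + cx, f * y_d + cy      # final pixel coordinates

# Example: a point off-centre, with mild barrel distortion (k1 < 0).
u, v = distort_and_project(0.25, -0.10, k1=-0.2, k2=0.02, f=800.0, cx=320.0, cy=240.0)
```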

Lens distortions - some exceptions. Fish-eye lenses produce strong visual distortion, intended to create a wide panoramic or hemispherical image.

Lens distortions - some exceptions. Anamorphic lenses – widely used in feature film production so that a wider range of aspect ratios can fit within a standard image sensor. These lenses do not follow the radial distortion model above; instead, they can be thought of as inducing different vertical and horizontal scaling, i.e. non-square pixels. (Images from http://www.red.com/learn/red-101/anamorphic-lenses)

References
Szeliski: Ch. 2.1.3-2.1.6, 6.2, 6.3
cs.cornell.edu:
http://www.cs.cornell.edu/courses/cs4670/2013fa/lectures/lec13_projection.pdf
http://www.cs.cornell.edu/courses/cs4670/2013fa/lectures/lec14_projection.pdf
http://www.cs.cornell.edu/courses/cs4670/2013fa/lectures/lec16_svm.pdf
http://www.cs.cornell.edu/courses/cs4670/2013fa/lectures/lec17_svm2.pdf
csail.mit.edu:
http://people.csail.mit.edu/torralba/courses/6.869/6.869.computervision.htm
cs.haifa.ac.il/hagit:
http://cs.haifa.ac.il/hagit/courses/CP/Lectures/CP11_SingleViewX4.pdf
https://www.youtube.com/watch?v=4-thTdR7Blg
https://www.youtube.com/watch?v=GVmbfyBlods&spfreload=10
https://www.youtube.com/watch?v=fVJeJMWZcq8&spfreload=10