C280, Computer Vision Prof. Trevor Darrell Lecture 2: Image Formation.


C280, Computer Vision Prof. Trevor Darrell Lecture 2: Image Formation

Administrivia We’re now in 405 Soda… New office hours: Thurs. 5-6pm, 413 Soda. I’ll make waitlist decisions tomorrow. Any Matlab issues yet? Roster…

Physical parameters of image formation Geometric – Type of projection – Camera pose Optical – Sensor’s lens type – focal length, field of view, aperture Photometric – Type, direction, intensity of light reaching sensor – Surfaces’ reflectance properties Sensor – sampling, etc.


Perspective and art Use of correct perspective projection is indicated in 1st-century B.C. frescoes. The skill resurfaces in the Renaissance, when artists develop systematic methods to determine perspective projection. Dürer, 1525; Raphael. K. Grauman

Perspective projection equations A 3D world point is mapped to a 2D projection in the image plane. Placing the camera frame at the pinhole, with the optical axis along Z and the image plane at focal length f, a scene point (X, Y, Z) has image coordinates (x′, y′) = (f·X/Z, f·Y/Z). [Figure: camera frame, image plane, optical axis, focal length, scene point; Forsyth and Ponce]
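A minimal sketch of these projection equations (the function name and numeric values are illustrative, not from the lecture):

```python
# Pinhole perspective projection: a camera-frame point (X, Y, Z) maps to
# image-plane coordinates (f*X/Z, f*Y/Z) by similar triangles.

def project_pinhole(X, Y, Z, f):
    """Project a camera-frame point onto the image plane at focal length f."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    return (f * X / Z, f * Y / Z)

# A point twice as far away projects to half the image coordinates:
print(project_pinhole(1.0, 2.0, 4.0, f=1.0))   # (0.25, 0.5)
print(project_pinhole(1.0, 2.0, 8.0, f=1.0))   # (0.125, 0.25)
```

The division by Z is exactly what the homogeneous-coordinate trick turns into a matrix multiplication.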

Homogeneous coordinates Is perspective projection a linear transformation? No—division by Z is nonlinear. Trick: add one more coordinate, giving homogeneous image coordinates (x, y, 1) and homogeneous scene coordinates (X, Y, Z, 1). Converting back from homogeneous coordinates means dividing by the last coordinate. Slide by Steve Seitz
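A short sketch of the conversion in both directions (helper names are illustrative):

```python
import numpy as np

# Homogeneous coordinates make perspective projection linear: append a 1
# to the point, multiply by a matrix, then divide by the last coordinate.

def to_homogeneous(p):
    return np.append(p, 1.0)

def from_homogeneous(ph):
    return ph[:-1] / ph[-1]

P = np.array([2.0, 4.0, 4.0])               # scene point in camera coordinates
Ph = to_homogeneous(P)                      # [2, 4, 4, 1]
assert np.allclose(from_homogeneous(Ph), P) # round trip recovers the point
```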

Perspective Projection Matrix Projection is a matrix multiplication using homogeneous coordinates: a 3×4 matrix maps (X, Y, Z, 1) to (f·X, f·Y, Z); divide by the third coordinate to convert back to non-homogeneous coordinates. Is this the complete mapping from world points to image pixel positions? Slide by Steve Seitz
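The 3×4 matrix form can be sketched as follows, assuming the simple form [[f,0,0,0],[0,f,0,0],[0,0,1,0]] consistent with the projection equations above:

```python
import numpy as np

f = 1.0
# 3x4 perspective projection matrix: (X, Y, Z, 1) -> (f*X, f*Y, Z)
Pi = np.array([[f, 0, 0, 0],
               [0, f, 0, 0],
               [0, 0, 1, 0]])

Xh = np.array([2.0, 4.0, 4.0, 1.0])   # homogeneous scene point
xh = Pi @ Xh                          # [2, 4, 4]
x = xh[:2] / xh[2]                    # divide by the third coordinate
print(x)                              # [0.5, 1.0] -- matches (f*X/Z, f*Y/Z)
```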

Perspective projection & calibration The perspective equations so far are in terms of the camera’s reference frame… The camera’s intrinsic and extrinsic parameters are needed to calibrate the geometry. K. Grauman

Perspective projection & calibration Intrinsic: image coordinates relative to camera → pixel coordinates. Extrinsic: camera frame → world frame. 2D point (3x1) = camera-to-pixel coord. trans. matrix (3x3) × perspective projection matrix (3x4) × world-to-camera coord. trans. matrix (4x4) × 3D point (4x1). K. Grauman

Intrinsic parameters: from idealized world coordinates to pixel values Forsyth&Ponce Perspective projection W. Freeman

Intrinsic parameters But “pixels” are in some arbitrary spatial units W. Freeman

Intrinsic parameters Maybe pixels are not square W. Freeman

Intrinsic parameters We don’t know the origin of our camera pixel coordinates W. Freeman

Intrinsic parameters May be skew between camera pixel axes W. Freeman

Intrinsic parameters, homogeneous coordinates Using homogeneous coordinates, we can write the map from camera-based coords to pixels as a single matrix multiplication. W. Freeman
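As an illustrative sketch (all numeric values are made up, not from the lecture), the intrinsic matrix collects the pixel scales, skew, and principal point from the preceding slides:

```python
import numpy as np

# Intrinsic matrix K: pixel scales alpha, beta (different if pixels are
# not square), skew s between the pixel axes, and principal point (u0, v0).
alpha, beta, s, u0, v0 = 800.0, 820.0, 0.0, 320.0, 240.0
K = np.array([[alpha,    s, u0],
              [  0.0, beta, v0],
              [  0.0,  0.0, 1.0]])

x_cam = np.array([0.1, -0.05, 1.0])   # homogeneous camera-based coordinates
u, v, w = K @ x_cam
print(u / w, v / w)                   # approximately (400.0, 199.0)
```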

Extrinsic parameters: translation and rotation of camera frame Non-homogeneous coordinates Homogeneous coordinates W. Freeman

Combining extrinsic and intrinsic calibration parameters, in homogeneous coordinates Forsyth&Ponce Intrinsic Extrinsic World coordinates Camera coordinates pixels W. Freeman
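A sketch of the combined mapping, with illustrative numbers (identity rotation and a simple translation; nothing here is from the lecture):

```python
import numpy as np

# Full camera matrix M = K [R | t]: world point -> camera frame (R, t)
# -> image plane -> pixels (K).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world axes
t = np.array([0.0, 0.0, 5.0])          # world origin 5 units in front
Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 extrinsic matrix
M = K @ Rt                             # 3x4 camera matrix

Pw = np.array([1.0, 0.0, 0.0, 1.0])    # homogeneous world point
u, v, w = M @ Pw
print(u / w, v / w)                    # (480.0, 240.0)
```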

Other ways to write the same equation Conversion back from homogeneous coordinates leads to: u = (m1·P)/(m3·P) and v = (m2·P)/(m3·P), where u, v are pixel coordinates, P is the world point, and mi is the i-th row of the projection matrix. W. Freeman

Calibration target Find the position, u_i and v_i, in pixels, of each calibration object feature point.

Camera calibration From before, we had these equations relating image positions u, v to points at 3D positions P (in homogeneous coordinates). So for each feature point i we have: u_i (m3·P_i) − m1·P_i = 0 and v_i (m3·P_i) − m2·P_i = 0. W. Freeman

Camera calibration Stack all these measurements of i=1…n points into a big matrix: W. Freeman

Camera calibration Showing all the elements, and writing the system in vector form: Q m = 0, where m stacks the entries of the projection matrix. W. Freeman

Camera calibration We want to solve for the unit vector m (the stacked one) that minimizes ||Q m||². The eigenvector of Q^T Q with the minimum eigenvalue gives us that (see Forsyth&Ponce, 3.1), because it is the unit vector x that minimizes x^T Q^T Q x. W. Freeman
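The eigenvector recipe can be checked numerically on a small synthetic system with a known null vector (the construction of Q here is purely illustrative):

```python
import numpy as np

# Homogeneous least squares: find the unit vector m minimizing ||Q m||.
# The minimizer is the eigenvector of Q^T Q with the smallest eigenvalue.
rng = np.random.default_rng(0)
m_true = np.array([1.0, -2.0, 2.0]) / 3.0   # unit vector to recover
A = rng.standard_normal((10, 3))
Q = A - np.outer(A @ m_true, m_true)        # built so that Q @ m_true = 0

w, V = np.linalg.eigh(Q.T @ Q)              # eigenvalues in ascending order
m_hat = V[:, 0]                             # smallest-eigenvalue eigenvector
# Recovered up to sign:
assert np.allclose(np.abs(m_hat @ m_true), 1.0)
```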

Camera calibration Once you have the M matrix, you can recover the intrinsic and extrinsic parameters as in Forsyth&Ponce, sect. W. Freeman

Recall, perspective effects… Far away objects appear smaller Forsyth and Ponce

Perspective effects

Parallel lines in the scene intersect in the image: they converge in the image on the horizon line. [Figure: scene, pinhole, virtual image plane]

Projection properties Many-to-one: any points along the same ray map to the same point in the image. Points project to points; lines project to lines (collinearity is preserved). Distances and angles are not preserved. Degenerate cases: a line through the focal point projects to a point; a plane through the focal point projects to a line; a plane perpendicular to the image plane projects to part of the image.

Weak perspective Approximation: treat magnification as constant. Assumes scene depth << average distance to camera. [Figure: world points and image plane]
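A quick numeric check of the approximation (all numbers are illustrative):

```python
# Weak perspective: when scene depth variation is small relative to the
# average distance Z0, replace per-point division by Z with a single
# constant magnification m = f / Z0.
f, Z0 = 1.0, 100.0
m = f / Z0

def weak_perspective(X, Y):
    return (m * X, m * Y)

def full_perspective(X, Y, Z):
    return (f * X / Z, f * Y / Z)

# At depth Z = 101 (1% away from Z0) the two projections differ by ~1%:
xw, yw = weak_perspective(10.0, 10.0)
xp, yp = full_perspective(10.0, 10.0, 101.0)
print(abs(xw - xp) / abs(xp))   # ~0.01
```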

Orthographic projection Given a camera at constant distance from the scene, world points are projected along rays parallel to the optical axis.


Other types of projection Lots of intriguing variants… (I’ll just mention a few fun ones) S. Seitz

360 degree field of view… Basic approach – Take a photo of a parabolic mirror with an orthographic lens (Nayar) – Or buy a lens from a variety of omnicam manufacturers… S. Seitz

Tilt-shift Tilt-shift images from Olivo Barbieri and Photoshop imitations. S. Seitz

tilt, shift

Tilt-shift perspective correction

normal lens / tilt-shift lens

Rotating sensor (or object) Also known as “cyclographs” or “peripheral images”. Rollout Photographs © Justin Kerr. S. Seitz

Photofinish S. Seitz

Physical parameters of image formation Geometric – Type of projection – Camera pose Optical – Sensor’s lens type – focal length, field of view, aperture Photometric – Type, direction, intensity of light reaching sensor – Surfaces’ reflectance properties Sensor – sampling, etc.

Pinhole size / aperture How does the size of the aperture (smaller vs. larger) affect the image we’d get? K. Grauman

Adding a lens A lens focuses light onto the film – Rays passing through the center are not deviated – All parallel rays converge to one point (the focal point) on a plane located at the focal length f. Slide by Steve Seitz

Pinhole vs. lens K. Grauman

Cameras with lenses A lens focuses parallel rays onto a single focal point F; the optical center is the Center Of Projection. Lenses gather more light while keeping focus, making pinhole perspective projection practical. K. Grauman

Human eye Rough analogy with the human visual system: Pupil/Iris – control the amount of light passing through the lens. Retina – contains sensor cells, where the image is formed. Fovea – highest concentration of cones. Fig from Shapiro and Stockman

Thin lens Rays entering parallel on one side go through the focus on the other, and vice versa. In the ideal case, all rays from P are imaged at P’. [Figure: left focus, right focus, focal length f, lens diameter d] K. Grauman

Thin lens equation 1/d_o + 1/d_i = 1/f, for object distance d_o, image distance d_i, and focal length f. Any object point satisfying this equation is in focus. K. Grauman
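A sketch of the equation in code, using the common form 1/d_o + 1/d_i = 1/f (the function name is illustrative):

```python
# Thin lens equation: an object at distance d_o from the lens comes into
# focus at image distance d_i behind it, with 1/d_o + 1/d_i = 1/f.
def image_distance(d_o, f):
    if d_o <= f:
        raise ValueError("object inside the focal length forms no real image")
    return 1.0 / (1.0 / f - 1.0 / d_o)

# An object at twice the focal length focuses at twice the focal length:
print(image_distance(d_o=100.0, f=50.0))   # ~100.0
# As the object recedes to infinity, the image plane approaches f:
print(image_distance(d_o=1e9, f=50.0))     # ~50.0
```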

Focus and depth of field Image credit: cambridgeincolour.com

Focus and depth of field Thin lens: scene points at distinct depths come into focus at different image planes (“circles of confusion”). Depth of field: distance between image planes where blur is tolerable. (Real camera lens systems have greater depth of field.) Shapiro and Stockman

Focus and depth of field How does the aperture affect the depth of field? A smaller aperture increases the range in which the object is approximately in focus Flower images from Wikipedia Slide from S. Seitz

Depth from focus Images from the same point of view, with different camera parameters, yield 3D shape / depth estimates. [figs from H. Jin and P. Favaro, 2002]

Field of view Angular measure of the portion of 3D space seen by the camera. Images from K. Grauman

Field of view depends on focal length As f gets smaller, the image becomes more wide-angle: more world points project onto the finite image plane. As f gets larger, the image becomes more telescopic: a smaller part of the world projects onto the finite image plane. from R. Duraiswami
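This relationship can be sketched numerically: for a sensor of width d, the horizontal angle of view is 2·atan(d/(2f)) (the function name and sensor size are illustrative):

```python
import math

# Field of view from focal length: shrinking f widens the angle of view.
def fov_degrees(sensor_width_mm, f_mm):
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * f_mm)))

# A 36 mm-wide sensor at a few focal lengths (wide-angle to telephoto):
for focal in (24.0, 50.0, 200.0):
    print(focal, round(fov_degrees(36.0, focal), 1))
```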

Field of view depends on focal length Smaller FOV = larger Focal Length Slide by A. Efros

Vignetting

Vignetting “Natural”: gradual falloff of light toward the image periphery. “Mechanical”: physical intrusion on the optical path.

Chromatic aberration

Physical parameters of image formation Geometric – Type of projection – Camera pose Optical – Sensor’s lens type – focal length, field of view, aperture Photometric – Type, direction, intensity of light reaching sensor – Surfaces’ reflectance properties Sensor – sampling, etc.

Environment map

BRDF (bidirectional reflectance distribution function)

Diffuse / Lambertian

Foreshortening

Specular reflection

Phong Diffuse + specular + ambient: I = k_a i_a + k_d (L·N) i_d + k_s (R·V)^n i_s
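A minimal sketch of the Phong combination (coefficients, shininess, and the function name are illustrative):

```python
import numpy as np

# Phong model: intensity = ambient + diffuse + specular. L, N, V are unit
# vectors (to light, surface normal, to viewer); R is L mirrored about N.
def phong(N, L, V, ka=0.1, kd=0.6, ks=0.3, n=10):
    N, L, V = (x / np.linalg.norm(x) for x in (N, L, V))
    diff = max(N @ L, 0.0)              # Lambertian (diffuse) term
    R = 2.0 * (N @ L) * N - L           # mirror reflection of L about N
    spec = max(R @ V, 0.0) ** n if diff > 0 else 0.0
    return ka + kd * diff + ks * spec

# Light, normal, and viewer all aligned: maximum response (~1.0 here):
I = phong(N=np.array([0, 0, 1.0]), L=np.array([0, 0, 1.0]), V=np.array([0, 0, 1.0]))
print(I)
```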

Physical parameters of image formation Geometric – Type of projection – Camera pose Optical – Sensor’s lens type – focal length, field of view, aperture Photometric – Type, direction, intensity of light reaching sensor – Surfaces’ reflectance properties Sensor – sampling, etc.

Digital cameras Film → sensor array, often an array of charge-coupled devices (CCDs). Each CCD is a light-sensitive diode that converts photons (light energy) to electrons. Pipeline: camera optics → CCD array → frame grabber → computer. K. Grauman

Historical context Pinhole model: Mozi (470–390 BCE), Aristotle (384–322 BCE) Principles of optics (including lenses): Alhacen (965–1039 CE) Camera obscura: Leonardo da Vinci (1452–1519), Johann Zahn (1631–1707) First photo: Joseph Nicephore Niepce (1822) Daguerréotypes (1839) Photographic film (Eastman, 1889) Cinema (Lumière Brothers, 1895) Color photography (Lumière Brothers, 1908) Television (Baird, Farnsworth, Zworykin, 1920s) First consumer camera with CCD: Sony Mavica (1981) First fully digital camera: Kodak DCS100 (1990) [Figures: Niepce, “La Table Servie,” 1822; CCD chip; Alhacen’s notes] Slide credit: L. Lazebnik K. Grauman

Digital Sensors

Resolution Sensor resolution: size of the real-world scene element that images to a single pixel. Image resolution: number of pixels. Resolution influences what analysis is feasible and affects the best representation choice. [fig from Mori et al]

Digital images Think of images as matrices taken from CCD array. K. Grauman

Digital images Intensity values lie in [0,255]; pixels are indexed by row i (height, e.g. 500) and column j (width, e.g. 520). For example, im[176][201] has value 164 and im[194][203] has value 37. K. Grauman

Color sensing in digital cameras Bayer grid: each sensor pixel measures a single color; the missing components are estimated from neighboring values (demosaicing). Source: Steve Seitz
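A toy sketch of the interpolation step (the array values and function are illustrative; real demosaicing handles all three channels and image borders):

```python
import numpy as np

# Bayer demosaicing sketch: each sensor pixel records one color; missing
# components are interpolated from neighbors. Here we estimate green at a
# red site of an RGGB pattern by averaging its four green neighbors.
bayer = np.array([
    # R   G    R   G
    [100, 50, 100, 50],
    [ 60, 20,  60, 20],   # G B G B
    [100, 50, 100, 50],
    [ 60, 20,  60, 20],
])

def green_at(i, j):
    """Bilinear estimate of green at a non-green site (interior only)."""
    return (bayer[i-1, j] + bayer[i+1, j] + bayer[i, j-1] + bayer[i, j+1]) / 4.0

print(green_at(2, 2))   # (60 + 60 + 50 + 50) / 4 = 55.0
```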

Color images, RGB color space: R, G, B channels. Much more on color in the next lecture… K. Grauman

Physical parameters of image formation Geometric – Type of projection – Camera pose Optical – Sensor’s lens type – focal length, field of view, aperture Photometric – Type, direction, intensity of light reaching sensor – Surfaces’ reflectance properties Sensor – sampling, etc.

Summary Image formation is affected by geometry, photometry, and optics. Projection equations express how world points are mapped to the 2D image. Homogeneous coordinates allow a linear system for the projection equations. Lenses make the pinhole model practical. Photometry models: Lambertian, BRDF. Digital imagers, Bayer demosaicing. Parameters (focal length, aperture, lens diameter, sensor sampling…) strongly affect the image obtained. K. Grauman

Slide Credits Bill Freeman Steve Seitz Kristen Grauman Forsyth and Ponce Rick Szeliski and others, as marked…

Next time Color Readings: – Forsyth and Ponce, Chapter 6 – Szeliski, 2.3.2