Image formation: How are objects in the world captured in an image?

Presentation transcript:

Image formation. Matlab tutorial Tuesday, Sept 2. Kristen Grauman, UT-Austin

Image formation: How are objects in the world captured in an image?

Physical parameters of image formation
– Geometric: type of projection, camera pose
– Optical: sensor's lens type, focal length, field of view, aperture
– Photometric: type, direction, and intensity of light reaching the sensor; surfaces' reflectance properties

Image formation. Let's design a camera. Idea 1: put a piece of film in front of an object. Do we get a reasonable image? Slide by Steve Seitz

Pinhole camera. Add a barrier to block off most of the rays; this reduces blurring. The opening is known as the aperture. How does this transform the image? Slide by Steve Seitz

Pinhole camera. The pinhole camera is a simple model that approximates the imaging process, perspective projection. If we treat the pinhole as a point, only one ray from any given scene point can enter the camera. (Figure shows the pinhole, image plane, and virtual image.) Fig from Forsyth and Ponce

Camera obscura In Latin, means ‘dark room’ "Reinerus Gemma-Frisius, observed an eclipse of the sun at Louvain on January 24, 1544, and later he used this illustration of the event in his book De Radio Astronomica et Geometrica, 1545. It is thought to be the first published illustration of a camera obscura..." Hammond, John H., The Camera Obscura, A Chronicle http://www.acmi.net.au/AIC/CAMERA_OBSCURA.html

Camera obscura: an attraction in the late 19th century. Figures: jetty at Margate, England, 1898; another from around the 1870s. http://brightbytes.com/cosite/collection2.html Adapted from R. Duraiswami

Camera obscura at home http://blog.makezine.com/archive/2006/02/how_to_room_sized_camera_obscu.html Sketch from http://www.funsci.com/fun3_en/sky/sky.htm

Perspective effects

Perspective effects Far away objects appear smaller Forsyth and Ponce

Perspective effects

Perspective effects. Parallel lines in the scene intersect in the image: they converge in the image on the horizon line. (Figure shows the scene, pinhole, and virtual image plane.)

Projection properties
– Many-to-one: any points along the same ray map to the same point in the image
– Points → points
– Lines → lines (collinearity preserved)
– Distances and angles are not preserved
– Degenerate cases: a line through the focal point projects to a point; a plane through the focal point projects to a line; a plane perpendicular to the image plane projects to part of the image.

Perspective and art. Use of correct perspective projection is indicated in 1st-century B.C. frescoes. The skill resurfaces in the Renaissance: artists develop systematic methods to determine perspective projection (around 1480-1515). Figures: Raphael; Dürer, 1525

Perspective projection equations. The 3D world is mapped to a 2D projection in the image plane. With the camera frame centered at the pinhole and the optical axis along Z, a scene point (X, Y, Z) maps to image coordinates (x, y) = (f·X/Z, f·Y/Z), where f is the focal length (the distance from the pinhole to the image plane). Forsyth and Ponce
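To make the projection concrete, here is a minimal Python sketch of the pinhole equations above; the point and focal length values are illustrative, not from the slides.

```python
# A minimal sketch of pinhole perspective projection:
# a scene point (X, Y, Z) in the camera frame maps to
# (x, y) = (f * X / Z, f * Y / Z), with the optical axis along Z.

def project_pinhole(point_3d, focal_length):
    """Project a 3D point in the camera frame onto the image plane."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("Point must lie in front of the camera (Z > 0).")
    return (focal_length * X / Z, focal_length * Y / Z)

# Example: a point 2 m in front of the camera, 0.5 m to the right.
print(project_pinhole((0.5, 0.0, 2.0), focal_length=0.05))  # ~(0.0125, 0.0)
```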

Homogeneous coordinates. Is this a linear transformation? No: division by Z is nonlinear. Trick: add one more coordinate. Homogeneous image coordinates: (x, y) becomes (x, y, 1). Homogeneous scene coordinates: (X, Y, Z) becomes (X, Y, Z, 1). Converting from homogeneous coordinates: (x, y, w) represents the image point (x/w, y/w). Slide by Steve Seitz

Perspective projection matrix. Projection is a matrix multiplication using homogeneous coordinates:
[ f 0 0 0 ; 0 f 0 0 ; 0 0 1 0 ] · (X, Y, Z, 1)ᵀ = (fX, fY, Z)ᵀ.
Divide by the third coordinate to convert back to non-homogeneous coordinates: (fX/Z, fY/Z). What is the complete mapping from world points to image pixel positions? Slide by Steve Seitz
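A minimal NumPy sketch of the same projection written as a matrix multiplication in homogeneous coordinates, using the 3×4 matrix form above; the numerical values are illustrative.

```python
# Projection as a matrix multiplication in homogeneous coordinates.
import numpy as np

f = 0.05  # focal length (assumed example value)
P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)  # 3x4 perspective projection matrix

X_h = np.array([0.5, 0.0, 2.0, 1.0])  # scene point in homogeneous coordinates
x_h = P @ X_h                         # homogeneous image point (f*X, f*Y, Z)
x, y = x_h[:2] / x_h[2]               # divide by the third coordinate
print(x, y)  # same result as dividing by Z directly
```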

Perspective projection & calibration. The perspective equations so far are in terms of the camera's reference frame. The camera's intrinsic and extrinsic parameters are needed to calibrate the geometry.

Perspective projection & calibration. Mapping from the world frame to pixel coordinates:
2D pixel point (3x1) = camera-to-pixel coordinate transform (3x3) × perspective projection matrix (3x4) × world-to-camera coordinate transform (4x4) × 3D world point (4x1).
Extrinsic parameters: world frame → camera frame. Intrinsic parameters: image coordinates relative to the camera → pixel coordinates.
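The full chain can be sketched in a few lines of NumPy; the intrinsic and extrinsic values below are made-up placeholders, and the 3×4 matrix [R | t] folds the canonical projection together with the world-to-camera transform.

```python
# World point -> camera frame -> image -> pixel, composed as K @ [R | t].
import numpy as np

K = np.array([[800, 0, 320],   # intrinsics: focal length in pixels,
              [0, 800, 240],   # principal point near image center
              [0,   0,   1]], dtype=float)

R = np.eye(3)                  # extrinsics: camera aligned with world axes
t = np.array([0.0, 0.0, 4.0])  # world origin 4 units in front of the camera
Rt = np.hstack([R, t.reshape(3, 1)])  # 3x4 world-to-camera transform

X_world = np.array([0.5, 0.2, 0.0, 1.0])  # homogeneous 3D world point
x_h = K @ Rt @ X_world                    # homogeneous pixel coordinates
u, v = x_h[:2] / x_h[2]
print(u, v)  # pixel position of the projected point
```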

Weak perspective. Approximation: treat magnification as constant. This assumes scene depth << average distance to the camera, so a world point (X, Y, Z) maps to (mX, mY) with constant magnification m = f/Z0, where Z0 is the average scene depth.
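A minimal sketch of weak perspective under the constant-magnification assumption described above; the variable names and values are illustrative.

```python
# Weak perspective: x ~ m*X, y ~ m*Y with a single magnification m = f / Z0.

def project_weak_perspective(point_3d, focal_length, z0):
    X, Y, _ = point_3d        # the point's own depth is ignored
    m = focal_length / z0     # constant magnification from the average depth
    return (m * X, m * Y)

# For shallow objects far from the camera this is close to full perspective.
print(project_weak_perspective((0.5, 0.0, 2.05), focal_length=0.05, z0=2.0))
```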

Orthographic projection. Given a camera at a constant distance from the scene, world points are projected along rays parallel to the optical axis.

Pinhole size / aperture. How does the size of the aperture affect the image we'd get? (Figures compare a larger and a smaller aperture.)

Adding a lens. A lens focuses light onto the film: rays passing through the center are not deviated, and all parallel rays converge to one point (the focal point) on a plane located at the focal length f. Slide by Steve Seitz

Pinhole vs. lens

Cameras with lenses. A lens focuses parallel rays onto a single focal point F; the optical center acts as the center of projection. Lenses gather more light while keeping focus, making pinhole perspective projection practical.

Human eye. Rough analogy with the human visual system:
– Pupil/iris: controls the amount of light passing through the lens
– Retina: contains the sensor cells, where the image is formed
– Fovea: highest concentration of cones
Fig from Shapiro and Stockman

Thin lens. Rays entering parallel on one side go through the focus on the other side, and vice versa. In the ideal case, all rays from a point P are imaged at a single point P'. (Figure shows the left and right focus, lens diameter d, and focal length f.)

Thin lens equation: 1/z + 1/z' = 1/f, where z is the distance from the object to the lens, z' is the distance from the lens to the image plane, and f is the focal length. Any object point satisfying this equation is in focus.
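A small sketch using the thin lens equation to solve for the in-focus image distance; the 50 mm lens and 2 m object distance are just example numbers.

```python
# Solve 1/z + 1/z' = 1/f for the image distance z' (all distances positive).

def image_distance(focal_length, object_distance):
    if object_distance <= focal_length:
        raise ValueError("Object inside the focal length forms no real image.")
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Example: a 50 mm lens focused on an object 2 m away.
print(image_distance(0.05, 2.0))  # ~0.0513 m behind the lens
```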

Focus and depth of field Image credit: cambridgeincolour.com

Focus and depth of field. Depth of field: the distance between image planes where blur is tolerable. Thin lens: scene points at distinct depths come into focus at different image planes ("circles of confusion"). Real camera lens systems have greater depth of field. Shapiro and Stockman

Focus and depth of field How does the aperture affect the depth of field? A smaller aperture increases the range in which the object is approximately in focus Flower images from Wikipedia http://en.wikipedia.org/wiki/Depth_of_field Slide from S. Seitz
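As a hedged illustration of why a smaller aperture extends the depth of field, the sketch below uses standard thin-lens geometry (not taken from the slides) to compute the blur-circle diameter of an out-of-focus point; all of the numbers are assumed examples.

```python
# Blur-circle ("circle of confusion") diameter on the sensor for a point
# at point_dist when the lens is focused at focus_dist. By similar
# triangles, the blur scales linearly with the aperture diameter.

def blur_circle(focal_length, aperture_diameter, focus_dist, point_dist):
    v_focus = 1.0 / (1.0 / focal_length - 1.0 / focus_dist)  # sensor position
    v_point = 1.0 / (1.0 / focal_length - 1.0 / point_dist)  # where point focuses
    return aperture_diameter * abs(v_focus - v_point) / v_point

# Halving the aperture halves the blur of an out-of-focus point.
print(blur_circle(0.05, 0.025,  focus_dist=2.0, point_dist=3.0))
print(blur_circle(0.05, 0.0125, focus_dist=2.0, point_dist=3.0))
```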

Depth from focus. Images from the same point of view with different camera parameters → 3D shape / depth estimates. [figs from H. Jin and P. Favaro, 2002]

Field of view: the angular measure of the portion of 3D space seen by the camera. Images from http://en.wikipedia.org/wiki/Angle_of_view

Field of view depends on focal length. As f gets smaller, the image becomes more wide-angle: more world points project onto the finite image plane. As f gets larger, the image becomes more telescopic: a smaller part of the world projects onto the finite image plane. From R. Duraiswami

Field of view depends on focal length. Smaller FOV = larger focal length. Slide by A. Efros
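A minimal sketch of the underlying relation, fov = 2·atan(d / 2f), where d is the sensor width; the 36 mm sensor width is an assumed example, not from the slides.

```python
# Field of view (in degrees) as a function of focal length and sensor width.
import math

def field_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (24, 50, 200):
    print(f, "mm ->", round(field_of_view_deg(f), 1), "deg")  # wide to telephoto
```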

Resolution. Sensor resolution: the size of the real-world scene element that images to a single pixel. Image resolution: the number of pixels. Resolution influences what analysis is feasible and affects the best representation choice. [fig from Mori et al]

Digital cameras. Film → sensor array, often an array of charge-coupled devices (CCDs). Each CCD cell is a light-sensitive diode that converts photons (light energy) to electrons. (Figure shows camera optics, CCD array, frame grabber, and computer.)

Digital images. Think of images as matrices taken from the CCD array.

Digital images. Intensity values lie in [0, 255]. The image is a matrix of height 500 (rows indexed by i) and width 520 (columns indexed by j); for example, im[176][201] has value 164 and im[194][203] has value 37.
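A minimal NumPy sketch of treating an image as a matrix of intensities; the array is zero everywhere except for the two example pixels quoted on the slide.

```python
# An image as a height x width matrix of 8-bit intensities.
import numpy as np

im = np.zeros((500, 520), dtype=np.uint8)  # height 500, width 520
im[176, 201] = 164
im[194, 203] = 37
print(im.shape, im[176, 201], im[194, 203])
```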

Color sensing in digital cameras: the Bayer grid. Estimate the missing color components from neighboring values (demosaicing). Source: Steve Seitz
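A toy sketch of demosaicing (not a production algorithm): interpolating the missing green values on an assumed RGGB Bayer mosaic by averaging the four green neighbors.

```python
# Naive demosaicing of the green channel on an RGGB Bayer mosaic.
import numpy as np

def demosaic_green(bayer):
    """Fill in green at red/blue sites by averaging the 4 green neighbors."""
    h, w = bayer.shape
    green = bayer.astype(float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if i % 2 == j % 2:
                # Red (even, even) or blue (odd, odd) site in an RGGB pattern:
                # green is missing here, so interpolate it from its neighbors.
                green[i, j] = (bayer[i - 1, j] + bayer[i + 1, j] +
                               bayer[i, j - 1] + bayer[i, j + 1]) / 4.0
    return green

mosaic = np.random.randint(0, 256, (8, 8)).astype(float)
print(demosaic_green(mosaic)[1:-1, 1:-1].shape)  # interior pixels filled in
```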

Color images, RGB color space

Historical context
– Pinhole model: Mozi (470-390 BCE), Aristotle (384-322 BCE)
– Principles of optics (including lenses): Alhacen (965-1039 CE)
– Camera obscura: Leonardo da Vinci (1452-1519), Johann Zahn (1631-1707)
– First photo: Joseph Nicephore Niepce (1822)
– Daguerréotypes (1839)
– Photographic film (Eastman, 1889)
– Cinema (Lumière Brothers, 1895)
– Color photography (Lumière Brothers, 1908)
– Television (Baird, Farnsworth, Zworykin, 1920s)
– First consumer camera with CCD: Sony Mavica (1981)
– First fully digital camera: Kodak DCS100 (1990)
(Figures: Alhacen's notes; Niepce, "La Table Servie," 1822; CCD chip.) Slide credit: L. Lazebnik

Summary. Image formation is affected by geometry, photometry, and optics. Projection equations express how world points are mapped to the 2D image. Homogeneous coordinates allow a linear system for the projection equations. Lenses make the pinhole model practical. Parameters (focal length, aperture, lens diameter, …) affect the image obtained.

Next. Problem set 0 due Thursday: turnin --submit harshd pset0 <filename>. Thursday: Color; read F&P Chapter 6.