Computer Vision Cameras, lenses and sensors Marc Pollefeys COMP 256.

Computer Vision Cameras, lenses and sensors Marc Pollefeys COMP 256

Computer Vision Cameras, lenses and sensors. Outline: Camera Models (Pinhole Perspective Projection, Affine Projection); Cameras with Lenses; Sensing; The Human Eye. Reading: Chapter 1.

Computer Vision Images are two-dimensional patterns of brightness values. They are formed by the projection of 3D objects. Figure from US Navy Manual of Basic Optics and Optical Instruments, prepared by Bureau of Naval Personnel. Reprinted by Dover Publications, Inc., 1969.

Computer Vision Animal eye: a looonnng time ago. Pinhole perspective projection: Brunelleschi, XVth century. Camera obscura: XVIth century. Photographic camera: Niepce, 1816.

Computer Vision Distant objects appear smaller

Computer Vision Parallel lines meet at a vanishing point

Computer Vision Vanishing points: different directions correspond to different vanishing points. (Figure labels: VPL, VPR, horizon H; VP1, VP2, VP3.)

Computer Vision Geometric properties of projection Points go to points Lines go to lines Planes go to whole image or half-plane Polygons go to polygons Degenerate cases: –line through focal point yields point –plane through focal point yields line
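As a quick sanity check of the parallel-lines property (not from the slides; the focal length, line direction and base points below are arbitrary example values), a short numerical sketch:

# Project points on two parallel 3D lines with an ideal pinhole model and watch
# their images approach a common vanishing point. f, d, p1, p2 are example values.
import numpy as np

f = 1.0                                   # assumed focal length (image plane at z = f)
d = np.array([1.0, 0.5, 2.0])             # common 3D direction of both lines
p1 = np.array([0.0, 0.0, 4.0])            # a point on line 1
p2 = np.array([1.0, -1.0, 4.0])           # a point on line 2

def project(P):
    """Pinhole projection: (X, Y, Z) -> (f X / Z, f Y / Z)."""
    return f * P[:2] / P[2]

for t in [0.0, 10.0, 100.0, 1e6]:         # move farther and farther along each line
    q1, q2 = project(p1 + t * d), project(p2 + t * d)
    print(t, q1, q2)

# Both image tracks converge to the same vanishing point f * (dx/dz, dy/dz):
print("vanishing point:", f * d[:2] / d[2])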

Computer Vision Pinhole Perspective Equation
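The equation itself was an image on the slide and did not survive the transcript; in the usual pinhole notation, with (X, Y, Z) a scene point in the camera frame, f' the distance from the pinhole to the image plane, and (x', y') its projection, it reads:

\[
x' = f'\,\frac{X}{Z}, \qquad y' = f'\,\frac{Y}{Z}
\]
% (X, Y, Z): scene point in the camera frame; f': pinhole-to-image-plane distance
% (notation assumed, following Forsyth & Ponce).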

Computer Vision Affine projection models: weak perspective projection. The factor m is the magnification. When the scene relief is small compared to its distance from the camera, m can be taken to be constant: weak perspective projection.
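The defining formula for m was likewise an image; assuming the usual weak-perspective setup in which all scene points lie near a reference depth Z_0 (a symbol introduced here, not taken from the slide), the projection and magnification are:

\[
x' = m\,X, \qquad y' = m\,Y, \qquad m = \frac{f'}{Z_0}
\]
% Z_0: assumed reference depth of the scene; the sign of m depends on whether the
% real or the virtual image plane is used.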

Computer Vision Affine projection models: Orthographic projection When the camera is at a (roughly constant) distance from the scene, take m=1.

Computer Vision Planar pinhole perspective; orthographic projection; spherical pinhole perspective

Computer Vision Limits for pinhole cameras

Computer Vision Camera obscura + lens 

Computer Vision Lenses. Snell's law (Descartes' law): n1 sin θ1 = n2 sin θ2

Computer Vision Paraxial (or first-order) optics. Snell's law: n1 sin θ1 = n2 sin θ2. Small angles: n1 θ1 ≈ n2 θ2
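A small numerical sketch (not from the slides; the air-to-glass indices n1 = 1.0, n2 = 1.5 are example values) shows how closely the paraxial approximation tracks the exact refraction angle for rays near the axis:

# Compare exact Snell refraction with the paraxial approximation n1*theta1 = n2*theta2.
import numpy as np

n1, n2 = 1.0, 1.5                                   # example: air to glass
for deg in [1, 5, 10, 20, 40]:
    t1 = np.radians(deg)
    t2_exact = np.arcsin(n1 * np.sin(t1) / n2)      # Snell's law
    t2_paraxial = n1 * t1 / n2                      # small-angle version
    print(f"{deg:>3} deg: exact {np.degrees(t2_exact):6.3f} deg, "
          f"paraxial {np.degrees(t2_paraxial):6.3f} deg")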

Computer Vision Thin Lenses: spherical lens surfaces; incoming light approximately parallel to the axis; thickness << radii; same refractive index on both sides

Computer Vision Thin Lenses
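The thin-lens relation shown here was an image; under the assumptions listed on the previous slide, and for a symmetric lens whose two surfaces share the radius R (sign conventions assumed, not taken from the slide), the standard form is:

\[
\frac{1}{z'} - \frac{1}{z} = \frac{1}{f},
\qquad
f = \frac{R}{2\,(n-1)}
\]
% z: (negative) object depth, z': image depth, f: focal length; R: common radius of
% the two spherical surfaces, n: refractive index of the lens material (assumed).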

Computer Vision Thick Lens

Computer Vision The depth-of-field 

Computer Vision The depth-of-field: working through the blur-circle geometry yields the nearest scene depth that is still in acceptable focus, and a similar formula gives the farthest such depth.

Computer Vision The depth-of-field decreases with the aperture diameter d and increases with the focusing distance Z0: strike a balance between incoming light and a sharp depth range.
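A back-of-the-envelope sketch of this trade-off, using the standard photographic depth-of-field formulas rather than the slide's own derivation; the 50 mm focal length, 3 m focusing distance and 30 micron blur-circle diameter are example values:

# Depth of field of a thin lens: the range of depths whose blur circle stays
# below a chosen diameter c. Standard photographic formulas; example numbers.

def depth_of_field(f, d, Z0, c):
    """f: focal length, d: aperture diameter, Z0: focused distance,
    c: acceptable blur-circle (circle of confusion) diameter. All in metres."""
    near = Z0 * f * d / (f * d + c * (Z0 - f))
    denom = f * d - c * (Z0 - f)
    far = Z0 * f * d / denom if denom > 0 else float("inf")
    return near, far

f, c = 0.050, 30e-6                 # 50 mm lens, 30 micron blur circle (assumed)
for d in [0.025, 0.0125, 0.00625]:  # aperture diameters, roughly f/2, f/4, f/8
    near, far = depth_of_field(f, d, Z0=3.0, c=c)
    print(f"aperture {d*1000:5.2f} mm: in focus from {near:.2f} m to {far:.2f} m")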

Computer Vision Deviations from the lens model. 3 assumptions: 1. all rays from a point are focused onto one image point; 2. all image points lie in a single plane; 3. magnification is constant. Deviations from this ideal are aberrations.

Computer Vision Aberrations: 2 types: 1. geometrical, 2. chromatic. Geometrical: small for paraxial rays; studied through 3rd-order optics. Chromatic: the refractive index is a function of wavelength.

Computer Vision Geometrical aberrations: spherical aberration, astigmatism, distortion, coma. Aberrations are reduced by combining lenses.

Computer Vision Spherical aberration: rays parallel to the axis do not converge to a single point; the outer portions of the lens yield smaller focal lengths.

Computer Vision Astigmatism Different focal length for inclined rays

Computer Vision Distortion: magnification/focal length differs for different angles of inclination. Can be corrected (if the parameters are known). Pincushion (tele-photo), barrel (wide-angle).
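One common way to do the correction is the polynomial radial-distortion model; this is an assumption here (the slide does not name a model), and the coefficients k1, k2 below are illustrative:

# Pincushion/barrel distortion modelled as a radial displacement of the ideal image
# point, and removed by inverting the mapping. Model and coefficients are assumptions.
def distort(x, y, k1, k2):
    """Map ideal (undistorted) normalized coordinates to distorted ones."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # k > 0: pincushion-like, k < 0: barrel-like
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iters=10):
    """Invert the model by fixed-point iteration (good enough for mild distortion)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

xd, yd = distort(0.4, 0.3, k1=-0.2, k2=0.05)     # barrel-distort a point
print(undistort(xd, yd, k1=-0.2, k2=0.05))       # approximately (0.4, 0.3) recovered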

Computer Vision Coma: a point off the axis is depicted as a comet-shaped blob.

Computer Vision Chromatic aberration: rays of different wavelengths are focused in different planes; it cannot be removed completely. Sometimes achromatization is achieved for more than 2 wavelengths.

Computer Vision Lens materials. Reference wavelengths: F = 486.1 nm, d = 587.6 nm, C = 656.3 nm. Lens characteristics: 1. refractive index n_d, 2. Abbe number V_d = (n_d - 1) / (n_F - n_C). Typically both should be high: this allows small components with sufficient refraction. Notation: e.g. glass BK7(517642) has n_d = 1.517 and V_d = 64.2.
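A two-line illustration of the Abbe-number formula; the index values are rough, BK7-like numbers chosen for the example, not catalogue data:

# Abbe number from the slide's definition, V_d = (n_d - 1) / (n_F - n_C).
# The indices below are illustrative, roughly BK7-like values (assumed).
n_F, n_d, n_C = 1.5224, 1.5168, 1.5143
V_d = (n_d - 1.0) / (n_F - n_C)
print(f"V_d = {V_d:.1f}")      # around 64: low dispersion, as expected for a crown glass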

Computer Vision Lens materials: additional considerations: humidity and temperature resistance, weight, price, ... (Figure: wavelength (nm) ranges covered by Crown Glass, Fused Quartz & Fused Silica, Plastic (PMMA), Calcium Fluoride, Sapphire, Zinc Selenide, Germanium.)

Computer Vision Vignetting

Computer Vision Photographs (Niepce, "La Table Servie," 1822). Milestones: Daguerreotypes (1839), Photographic Film (Eastman, 1889), Cinema (Lumière Brothers, 1895), Color Photography (Lumière Brothers, 1908), Television (Baird, Farnsworth, Zworykin, 1920s), CCD Devices (1970), more recently CMOS. Collection Harlingue-Viollet.

Computer Vision Cameras: we consider 2 types: 1. CCD, 2. CMOS.

Computer Vision CCD: a separate photo sensor at regular positions, no scanning; charge-coupled devices (CCDs); area CCDs and linear CCDs; 2 area architectures: interline transfer and frame transfer (photosensitive and storage areas).

Computer Vision The CCD camera

Computer Vision CMOS: same sensor elements as CCD; each photo sensor has its own amplifier; more noise (reduced by subtracting a 'black' image); lower sensitivity (lower fill rate); uses standard CMOS technology, which allows other components to be put on the chip: 'smart' pixels. Foveon 4k x 4k sensor, 0.18 µm process, 70M transistors.

Computer Vision CCD vs. CMOS. CCD: mature technology; specific technology; high production cost; high power consumption; higher fill rate; blooming; sequential readout. CMOS: recent technology; standard IC technology; cheap; low power; less sensitive; per-pixel amplification; random pixel access; smart pixels; on-chip integration with other components.

Computer Vision Color cameras. We consider 3 concepts: 1. Prism (with 3 sensors), 2. Filter mosaic, 3. Filter wheel, … and X3.

Computer Vision Prism color camera Separate light in 3 beams using dichroic prism Requires 3 sensors & precise alignment Good color separation

Computer Vision Prism color camera

Computer Vision Filter mosaic: coat filters directly on the sensor; demosaicing (obtain a full-colour, full-resolution image).
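A minimal bilinear demosaicing sketch, assuming an RGGB Bayer layout (the slide does not fix the mosaic pattern); it is an illustration of the idea, not a production algorithm:

# Each channel is sampled on a subgrid of the sensor; missing values are filled by
# averaging the available neighbours (normalized convolution with bilinear weights).
import numpy as np

def demosaic_bilinear(raw):
    """raw: 2-D array with an RGGB mosaic (assumed). Returns an (H, W, 3) RGB image."""
    H, W = raw.shape
    masks = np.zeros((H, W, 3), dtype=float)
    masks[0::2, 0::2, 0] = 1                    # R at even rows, even cols
    masks[0::2, 1::2, 1] = 1                    # G at even rows, odd cols
    masks[1::2, 0::2, 1] = 1                    # G at odd rows, even cols
    masks[1::2, 1::2, 2] = 1                    # B at odd rows, odd cols

    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]]) / 4.0     # bilinear interpolation weights

    def conv2(img):
        p = np.pad(img, 1, mode="reflect")
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * p[dy:dy + H, dx:dx + W]
        return out

    rgb = np.zeros((H, W, 3))
    for c in range(3):
        vals = conv2(raw * masks[..., c])       # sum of known samples in the window
        weight = conv2(masks[..., c])           # sum of their weights
        rgb[..., c] = vals / weight             # normalized convolution
    return rgb

# Example: a flat grey scene should come back flat after demosaicing.
raw = np.full((8, 8), 0.5)
print(demosaic_bilinear(raw).std())             # essentially 0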

Computer Vision Filter wheel Rotate multiple filters in front of lens Allows more than 3 colour bands Only suitable for static scenes

Computer Vision Prism vs. mosaic vs. wheel.
Prism: 3 sensors, high separation, high cost, high framerate, low artefacts, 3 bands; high-end cameras.
Mosaic: 1 sensor, average separation, low cost, high framerate, aliasing artefacts, 3 bands; low-end cameras.
Wheel: 1 sensor, good separation, average cost, low framerate, motion artefacts, 3 or more bands; scientific applications.

Computer Vision Foveon's X3: a new color CMOS sensor; better image quality, smarter pixels.

Computer Vision The Human Eye. Helmholtz's Schematic Eye. Reproduced by permission, the American Society of Photogrammetry and Remote Sensing. A.L. Nowicki, "Stereoscopy." Manual of Photogrammetry, Thompson, Radlinski, and Speert (eds.), third edition, 1966.

Computer Vision The distribution of rods and cones across the retina. Cones in the fovea; rods and cones in the periphery. Reprinted from Foundations of Vision, by B. Wandell, Sinauer Associates, Inc., (1995). © 1995 Sinauer Associates, Inc.

Computer Vision Geometric camera model (Man Drawing a Lute, woodcut, 1525, Albrecht Dürer) perspective projection

Computer Vision Models for camera projection: the pinhole model revisited. The center of the lens is the center of projection; notice the virtual image plane. This is called perspective projection.

Computer Vision Perspective projection: the origin lies at the center of projection; the Zc axis coincides with the optical axis; the Xc-axis is parallel to the image rows and the Yc-axis to the columns. (Figure: camera axes Xc, Yc, Zc and image coordinates u, v.)

Computer Vision Pseudo-orthographic projection: if Z is constant, x = kX and y = kY, where k = f/Z, i.e. orthographic projection plus a scaling. A good approximation if f/Z is roughly constant, i.e. if objects are small compared to their distance from the camera.
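A quick numerical check of this claim (all values are made up for the example): a small object far from the camera projects almost identically under perspective and under the constant-scale model:

# Compare perspective projection x = f*X/Z with the pseudo-orthographic
# approximation x = k*X, k = f/Z0, for a small object at average distance Z0.
import numpy as np

f = 1.0
Z0 = 20.0                                   # average object distance (assumed)
pts = np.array([[x, y, Z0 + dz]             # a small object: extent 1, depth relief 0.5
                for x in (-0.5, 0.5) for y in (-0.5, 0.5) for dz in (-0.25, 0.25)])

persp = f * pts[:, :2] / pts[:, 2:3]        # perspective projection
ortho = (f / Z0) * pts[:, :2]               # pseudo-orthographic (constant scale)
print("max image-plane error:", np.abs(persp - ortho).max())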

Computer Vision Pictorial comparison: pseudo-orthographic vs. perspective.

Computer Vision Projection matrices. The perspective projection model is incomplete: what if 1. the 3D coordinates are specified in a world coordinate frame, and 2. image coordinates are expressed as row and column numbers? We will not consider additional refinements, such as radial distortions, ...

Computer Vision Projection matrices. (Figure: a world point (X, Y, Z), the camera centre C, the rows r1, r2, r3 of the rotation matrix, and the image coordinates (u, v).)

Computer Vision  (x 0, y 0 ) the pixel coordinates of the principal point  f x the number of pixels per unit length horizontally  f y the number of pixels per unit length vertically  s indicates the skew ; typically s = 0 NB7 : fully calibrated means internally and externally calibrated Projection matrices Image coordinates are to be expressed as pixel coordinates with : NB1: often only integer pixel coordinates matter NB2 : k y /k x is called the aspect ratio NB3 : k x,k y,s,x 0 and y 0 are called internal camera parameters NB4 : when they are known, the camera is internally calibrated NB5 : vector C and matrix R  SO (3) are the external camera parameters NB6 : when these are known, the camera is externally calibrated x y m n 

Computer Vision Projection matrices. Exploiting homogeneous coordinates, the pixel-coordinate mapping and the change to the camera frame both become matrix products, and we define the calibration matrix K.
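The matrices themselves were images on the slide; in the notation of the previous slide (principal point (x_0, y_0), scale factors f_x, f_y, skew s), the calibration matrix has the standard upper-triangular form:

\[
K =
\begin{pmatrix}
f_x & s   & x_0 \\
0   & f_y & y_0 \\
0   & 0   & 1
\end{pmatrix}
\]
% Standard form assumed; the slide may equivalently use k_x, k_y for the pixel scale factors.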

Computer Vision Projection matrices. We define the 3x4 projection matrix M, yielding, for some non-zero scale factor, the pixel coordinates of a world point as a single matrix product; M has rank 3.
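A small numeric sketch of the resulting pipeline, in the slide's notation of a calibration matrix K, a rotation R in SO(3) and a camera centre C; all values are made up for illustration:

# Build M = K [R | -R C] and project a homogeneous world point. Example values only.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # [f_x, s, x_0]
              [  0.0, 800.0, 240.0],     # [0, f_y, y_0]
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera looking down the world Z axis
C = np.array([0.0, 0.0, -5.0])           # camera centre in world coordinates

M = K @ np.hstack([R, -R @ C.reshape(3, 1)])  # 3x4 projection matrix

X = np.array([1.0, 0.5, 10.0, 1.0])           # homogeneous world point
x = M @ X
u, v = x[:2] / x[2]                           # divide out the non-zero scale factor
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
print("rank of M:", np.linalg.matrix_rank(M))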

Computer Vision Next class Radiometry: lights and surfaces