Presentation transcript:

Announcements
Project 1: due Wednesday at 11:59pm.
Project 2: sign up by end of day today at https://norfolk.cs.washington.edu/htbin-php/gtng/gtng.php

Projection
Readings: Nalwa 2.1
Anamorphic illusions (http://www.julianbeever.net/pave.htm): drawn with special distortions so that, when seen from one particular viewpoint, they create the impression of 3D.


Müller-Lyer Illusion by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html

Image formation
Let's design a camera. Idea 1: put a piece of film in front of an object. Do we get a reasonable image? No: every point on the film receives light from many points on the object, so the image is washed out.

Pinhole camera
Add a barrier to block off most of the rays. This reduces blurring. The opening is known as the aperture.
How does this transform the image? 1) The image gets inverted. 2) The size of the aperture matters: in general, we would like a very small aperture.

Pinhole cameras everywhere
During a solar eclipse, the tiny gaps between tree leaves act as pinholes, projecting crescent-shaped images of the eclipsed sun onto the ground below.
Tree shadow during a solar eclipse. Photo credit: Nils van der Burg, http://www.physicstogo.org/index.cfm

Camera Obscura
Basic principle known to Mozi (470-390 BC) and Aristotle (384-322 BC).
Drawing aid for artists: described by Leonardo da Vinci (1452-1519).
Illustration: Gemma Frisius, 1558.

Camera Obscura The first camera How does the aperture size affect the image?

Shrinking the aperture
Why not make the aperture as small as possible? Less light gets through, and diffraction effects appear.

Shrinking the aperture
Diffraction: the bending of light rays around the edges of opaque objects. The smaller the aperture width relative to the wavelength of the incident light, the more pronounced the diffraction (image blurring, decrease in image intensity).
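A rough back-of-the-envelope sketch of this trade-off in Python, assuming a simple model where the total blur is the geometric width of the pinhole plus the Airy-disk diameter (the numbers are illustrative, not from the slides):

    import math

    def pinhole_blur(D, wavelength, film_dist):
        """Rough blur-spot diameter for a pinhole of diameter D (all lengths in meters)."""
        geometric = D                                     # the pinhole's own shadow
        diffraction = 2.44 * wavelength * film_dist / D   # approximate Airy-disk diameter on the film
        return geometric + diffraction

    # Green light (550 nm), film 100 mm behind the pinhole:
    lam, L = 550e-9, 0.1
    for D in (0.1e-3, 0.37e-3, 2e-3):
        print("D = %.2f mm -> blur ~ %.2f mm" % (D * 1e3, pinhole_blur(D, lam, L) * 1e3))

In this toy model the two terms balance near D = sqrt(2.44 * lam * L), about 0.37 mm here: smaller pinholes blur from diffraction, larger ones from geometry.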

Adding a lens
A lens focuses light onto the film, letting us keep pinhole geometry without having to work with tiny apertures.
There is a specific distance at which objects are "in focus"; points at other distances project to a "circle of confusion" in the image. Changing the shape of the lens changes this distance.

Lenses
A lens focuses parallel rays onto a single focal point, at a distance f beyond the plane of the lens; f is a function of the shape and index of refraction of the lens. The optical center is the center of projection.
An aperture of diameter D restricts the range of rays; the aperture may be on either side of the lens. Lenses are typically spherical (easier to produce).

Thin lenses
Thin lens equation: 1/d_o + 1/d_i = 1/f, where d_o is the object distance, d_i the image distance, and f the focal length.
Any object point satisfying this equation is in focus. What is the shape of the focus region? How can we change the focus region?
Thin lens applet: http://www.phy.ntnu.edu.tw/java/Lens/lens_e.html (by Fu-Kwun Hwang)
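A minimal sketch of using the thin lens equation to find where a point comes into focus (units in meters; the 50 mm lens and 2 m object distance are just an illustration):

    def image_distance(f, d_o):
        """Solve the thin lens equation 1/d_o + 1/d_i = 1/f for the image distance d_i."""
        return 1.0 / (1.0 / f - 1.0 / d_o)

    # A 50 mm lens focused on an object 2 m away:
    print(image_distance(0.050, 2.0))   # ~0.0513: the film must sit about 51.3 mm behind the lens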

Depth of field
Changing the aperture size affects depth of field: a smaller aperture increases the range of distances over which objects are approximately in focus.
Trade-off: a smaller aperture gives a larger depth of field, but it reduces image intensity, which may require a longer exposure time and can introduce motion blur. Cameras have an aperture adjustment mechanism (e.g., f/32).
Flower images from Wikipedia: http://en.wikipedia.org/wiki/Depth_of_field
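A small sketch of why a smaller aperture increases depth of field, combining the thin lens equation with similar triangles (not from the slides; the 50 mm lens, f-numbers, and distances are illustrative):

    def blur_circle(f, aperture, focus_dist, obj_dist):
        """Diameter of the circle of confusion on the film for a thin lens of focal
        length f and aperture diameter `aperture`, focused at focus_dist, imaging a
        point at obj_dist (all lengths in meters)."""
        v_focus = 1.0 / (1.0 / f - 1.0 / focus_dist)   # film position for the in-focus plane
        v_obj = 1.0 / (1.0 / f - 1.0 / obj_dist)       # where the object's rays actually converge
        return aperture * abs(v_obj - v_focus) / v_obj

    # 50 mm lens focused at 2 m, object at 3 m, wide (f/2) vs. narrow (f/16) aperture:
    for N in (2, 16):
        print("f/%d: blur ~ %.0f um" % (N, blur_circle(0.050, 0.050 / N, 2.0, 3.0) * 1e6))

Stopping down from f/2 to f/16 shrinks the blur circle by the same factor of 8 that the aperture shrinks, which is exactly the larger depth of field.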

The eye
The human eye is a camera: it forms an inverted image of the scene on its retina (note that the retina is curved); the retina encodes the image and transfers it to the brain.
Iris: colored annulus with radial muscles. Pupil: the hole (aperture) whose size is controlled by the iris.
What's the "film"? The photoreceptor cells (rods and cones) in the retina.

Digital camera
A digital camera replaces film with a sensor array. Each cell in the array is a charge-coupled device (CCD): a light-sensitive diode that converts photons to electrons. Other variants exist; CMOS sensors are becoming more popular.
http://electronics.howstuffworks.com/digital-camera.htm

Issues with digital cameras
Noise: big difference between consumer and SLR-style cameras; low light is where you most notice noise.
Compression: creates artifacts except in uncompressed formats (TIFF, RAW).
Color: color fringing artifacts from Bayer patterns.
Blooming: charge overflowing into neighboring pixels.
In-camera processing: oversharpening can produce halos.
Interlaced vs. progressive scan video: even/odd rows from different exposures.
Are more megapixels better? Requires a higher quality lens; noise issues.
Stabilization: compensate for camera shake (mechanical vs. electronic).
More info online, e.g., http://electronics.howstuffworks.com/digital-camera.htm and http://www.dpreview.com/

Modeling projection
The coordinate system: we will use the pinhole model as an approximation. Put the optical center (center of projection, COP) at the origin and the image plane (projection plane, PP) in front of the COP. Why? This way the image is right-side-up.
The camera looks down the negative z axis; we need this if we want right-handed coordinates.

Modeling projection
Projection equations: compute the intersection with the PP of the ray from (x, y, z) to the COP. Derived using similar triangles (on board): with the image plane at distance d, (x, y, z) maps to (-d x/z, -d y/z, -d).
We get the projection by throwing out the last coordinate: (x, y, z) → (-d x/z, -d y/z).
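A minimal numpy sketch of these projection equations, assuming the convention above of an image plane at distance d in front of the COP (the function name is mine):

    import numpy as np

    def project_pinhole(points, d):
        """Project (N, 3) scene points through the COP at the origin onto the plane z = -d."""
        points = np.asarray(points, dtype=float)
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        return np.stack([-d * x / z, -d * y / z], axis=1)

    # Two points on the same viewing ray land on the same image point:
    print(project_pinhole([[1, 2, -4], [2, 4, -8]], d=1.0))   # both map to (0.25, 0.5)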

Homogeneous coordinates
Is this a linear transformation? No: division by z is nonlinear.
Trick: add one more coordinate. An image point (x, y) becomes (x, y, 1) in homogeneous image coordinates; a scene point (x, y, z) becomes (x, y, z, 1) in homogeneous scene coordinates.
Converting from homogeneous coordinates: (x, y, w) → (x/w, y/w).
Intuition: a single point in 3D is represented by a line in 4D (all scalar multiples of its homogeneous coordinates).

Perspective Projection
Projection is a matrix multiply using homogeneous coordinates:
  [ 1  0    0   0 ] [x]   [  x   ]
  [ 0  1    0   0 ] [y] = [  y   ]  →  divide by the third coordinate  →  (-d x/z, -d y/z)
  [ 0  0  -1/d  0 ] [z]   [ -z/d ]
                    [1]
This is known as perspective projection; the matrix is the projection matrix. It can also be formulated as a 4x4 matrix (today's reading does this), followed by a divide by the fourth coordinate.
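The same projection written as a 3x4 matrix multiply in homogeneous coordinates, a small sketch matching the matrix above:

    import numpy as np

    def perspective_project(points, d):
        """Apply the 3x4 perspective projection matrix, then divide by the third coordinate."""
        P = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, -1.0 / d, 0.0]])
        pts_h = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
        proj = pts_h @ P.T                 # homogeneous image coordinates
        return proj[:, :2] / proj[:, 2:3]  # divide by the third coordinate

    print(perspective_project([[1.0, 2.0, -4.0]], d=1.0))   # (0.25, 0.5), same as before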

Perspective Projection
How does scaling the projection matrix change the transformation? (It doesn't: homogeneous coordinates are only defined up to scale.)

Orthographic projection
A special case of perspective projection: the distance from the COP to the PP is infinite. A good approximation for telephoto optics, and when object dimensions are small compared to the distance of the object from the center of projection.
Also called "parallel projection": (x, y, z) → (x, y). What's the projection matrix?
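The answer, as a small sketch: orthographic projection drops z entirely, so the homogeneous projection matrix just keeps x, y, and the homogeneous coordinate:

    import numpy as np

    # Orthographic projection: keep x and y, discard z, keep the homogeneous coordinate.
    ORTHO = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 0, 1]])

    p = np.array([3.0, 5.0, -10.0, 1.0])   # homogeneous scene point
    x, y, w = ORTHO @ p
    print(x / w, y / w)                     # (3.0, 5.0): z is simply discarded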

Orthographic (“telecentric”) lenses Navitar telecentric zoom lens Large focal length (small aperture), small fields of view. http://www.lhup.edu/~dsimanek/3d/telecent.htm

Orthographic projection

Perspective projection

Camera parameters
How many numbers do we need to describe a camera? We need to describe its pose in the world and its internal parameters.

A Tale of Two Coordinate Systems
Two important coordinate systems: 1. the world coordinate system, 2. the camera coordinate system.
(Figure: camera axes u, v, w with origin at the COP; world axes x, y, z with origin o.)

Camera parameters
To project a point (x, y, z) in world coordinates into a camera:
First transform (x, y, z) into camera coordinates. Need to know the camera position (in world coordinates) and the camera orientation (in world coordinates).
Then project into the image plane. Need to know the camera intrinsics.
These can all be described with matrices.

Camera parameters
A camera is described by several parameters: the translation T of the optical center from the origin of world coordinates, the rotation R of the image plane, and the focal length f, principal point (x'c, y'c), and pixel size (sx, sy). The first two (T, R) are called "extrinsics"; the rest are "intrinsics."
Projection equation: the projection matrix models the cumulative effect of all of these parameters. It is useful to decompose it into a series of operations: intrinsics × projection × rotation × translation.
The definitions of these parameters are not completely standardized, especially the intrinsics, and vary from one book to another.

Extrinsics
How do we get the camera to "canonical form"? (Center of projection at the origin, x-axis pointing right, y-axis pointing up, z-axis pointing backwards.)
Step 1: translate by -c (the camera center). How do we represent translation as a matrix multiplication? In homogeneous coordinates, as a 4x4 matrix with -c in the last column.
Step 2: rotate by R, a 3x3 rotation matrix.
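A minimal sketch of these two steps as matrices, assuming R maps world axes to camera axes (the function name is mine):

    import numpy as np

    def extrinsic_matrix(R, c):
        """World-to-camera transform: Step 1, translate by -c; Step 2, rotate by R."""
        T = np.eye(4)
        T[:3, 3] = -np.asarray(c, dtype=float)   # Step 1: translate by -c
        Rh = np.eye(4)
        Rh[:3, :3] = R                            # Step 2: rotate by R
        return Rh @ T

    # Sanity check: the camera center itself lands at the origin of camera coordinates.
    print(extrinsic_matrix(np.eye(3), [1.0, 2.0, 3.0]) @ np.array([1.0, 2.0, 3.0, 1.0]))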

Perspective projection
The intrinsics matrix K converts from 3D rays in the camera coordinate system to pixel coordinates. In general it is an upper triangular matrix:
  K = [ f   s    c_x ]
      [ 0   a·f  c_y ]
      [ 0   0    1   ]
a: aspect ratio (1 unless pixels are not square)
s: skew (0 unless pixels are shaped like rhombi/parallelograms)
(c_x, c_y): principal point ((0, 0) unless the optical axis doesn't intersect the projection plane at the origin)
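A small sketch of building K and applying it to a ray in camera coordinates (the focal length and principal point values are illustrative assumptions):

    import numpy as np

    def intrinsics(f, aspect=1.0, skew=0.0, cx=0.0, cy=0.0):
        """Upper-triangular intrinsics matrix K, in pixel units."""
        return np.array([[f,   skew,       cx],
                         [0.0, aspect * f, cy],
                         [0.0, 0.0,        1.0]])

    K = intrinsics(f=500, cx=320, cy=240)   # e.g. a 640x480 image with f = 500 pixels
    ray = np.array([0.1, -0.2, 1.0])        # a ray in camera coordinates
    u, v, w = K @ ray
    print(u / w, v / w)                     # pixel coordinates (370.0, 140.0)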

Focal length
Can think of focal length as "zoom"; it is related to field of view: a small focal length means a larger field of view (wide-angle lenses).
(Example images at 24mm, 50mm, 200mm, and 800mm focal lengths.)
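The focal-length/field-of-view relationship follows directly from the pinhole geometry; a small sketch assuming a sensor 36 mm wide (the sensor width is my assumption, not from the slides):

    import math

    def field_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
        """Horizontal field of view of a pinhole camera, in degrees."""
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    for f in (24, 50, 200, 800):
        print("%d mm -> %.1f degrees" % (f, field_of_view_deg(f)))   # FOV shrinks as f grows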

Projection matrix
The projection matrix factors as intrinsics × projection × rotation × translation (the translation is t in the book's notation).

Projection matrix
Putting it together: P = K [R | t], and a world point X projects to x = P X (in homogeneous image coordinates).
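A minimal end-to-end sketch of P = K [R | t] with toy values (note this toy convention has the camera looking down the +z axis, unlike the image-plane-at-negative-z convention used earlier):

    import numpy as np

    def projection_matrix(K, R, t):
        """Full 3x4 camera matrix P = K [R | t]."""
        return K @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])

    def project(P, X_world):
        """Project a 3D world point to pixel coordinates."""
        x = P @ np.append(X_world, 1.0)   # homogeneous image coordinates
        return x[:2] / x[2]               # divide by the last coordinate

    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    print(project(projection_matrix(K, np.eye(3), [0.0, 0.0, 5.0]),
                  np.array([1.0, 1.0, 0.0])))   # -> (420, 340)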

Distortion
Radial distortion of the image, caused by imperfect lenses. (Figure: no distortion, pincushion, barrel.)
Many wide-angle lenses have radial distortion: visible curvature in the projection of straight lines. Barrel distortion is usually a product of wide-angle lenses; pincushion usually appears in telephoto lenses. Deviations are most noticeable for rays that pass through the edge of the lens.
To create good reconstructions, panoramas, etc., this distortion should be taken into account.

Correcting radial distortion from Helmut Dersch

http://blog.photoshopcreative.co.uk/general/fix-barrel-distortion/

Distortion

Modeling distortion
To model lens distortion:
1. Project to "normalized" image coordinates.
2. Apply radial distortion (k1 and k2 are the radial distortion parameters for a given lens).
3. Apply the focal length and translate by the image center.
Use this projection operation instead of the standard projection matrix multiplication.
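A minimal sketch of the three steps with a two-parameter radial model (a common formulation; the coefficients and intrinsics here are illustrative assumptions):

    import numpy as np

    def project_with_distortion(X_cam, f, cx, cy, k1, k2):
        """Project a point given in camera coordinates, applying radial distortion."""
        xn, yn = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]   # 1. normalized image coordinates
        r2 = xn ** 2 + yn ** 2
        scale = 1 + k1 * r2 + k2 * r2 ** 2                  # 2. radial distortion
        xd, yd = scale * xn, scale * yn
        return f * xd + cx, f * yd + cy                     # 3. focal length + image center

    print(project_with_distortion(np.array([0.2, 0.1, 1.0]),
                                  f=500, cx=320, cy=240, k1=-0.1, k2=0.01))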

Many other types of projection exist...

360 degree field of view…
Basic approach: take a photo of a parabolic mirror with an orthographic lens (Nayar), or buy a lens from a variety of omnicam manufacturers…
See http://www.cis.upenn.edu/~kostas/omni.html

Tilt-shift images from Olivo Barbieri, and Photoshop imitations
Simulates a miniature scene.
http://www.northlight-images.co.uk/article_pages/tilt_and_shift_ts-e.html

Rotating sensor (or object) Rollout Photographs © Justin Kerr http://research.famsi.org/kerrmaya.html Also known as “cyclographs”, “peripheral images”

Photofinish