3D Imaging Midterm Review.


Pinhole cameras. Abstract camera model: a box with a small hole in it. Pinhole cameras work in practice. The point to make here is that each point on the image plane sees light from only one direction, the one that passes through the pinhole.

The equation of projection. In Cartesian coordinates, we have by similar triangles that (x, y, z) -> (f x/z, f y/z, -f). Ignoring the third coordinate, we get the image point (x', y') = (f x/z, f y/z). Prove at home.
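A minimal numpy sketch of this perspective projection, assuming a focal length f and points in front of the camera; the function name is illustrative, not from the slides.

```python
import numpy as np

def project_pinhole(points_3d, f=1.0):
    """Perspective projection of Nx3 scene points (x, y, z) onto the
    image plane: (x, y, z) -> (f*x/z, f*y/z). Assumes z > 0."""
    points_3d = np.asarray(points_3d, dtype=float)
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.stack([f * x / z, f * y / z], axis=1)

# Example: points farther away project closer to the image center.
print(project_pinhole([[1.0, 1.0, 2.0], [1.0, 1.0, 4.0]], f=1.0))
# [[0.5  0.5 ]
#  [0.25 0.25]]
```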

The camera matrix. Homogeneous coordinates for a 3D point are (X, Y, Z, T); homogeneous coordinates for a point in the image are (U, V, W).
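A sketch of how the camera matrix relates these coordinates, written to match the pinhole model above (focal length f); this is the standard form of the relationship, not necessarily the exact matrix shown on the slide.

```latex
% Projection with the camera matrix in homogeneous coordinates:
% image point (U, V, W) from scene point (X, Y, Z, T).
\begin{pmatrix} U \\ V \\ W \end{pmatrix}
=
\begin{pmatrix}
f & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ T \end{pmatrix},
\qquad
(u, v) = \left(\frac{U}{W}, \frac{V}{W}\right)
```

With T = 1 this gives u = fX/Z and v = fY/Z, the same projection as before.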

Properties of a "thin" lens (i.e., an ideal lens). Light rays passing through the center are not deviated. Light rays passing through a point far away from the center are deviated more. (Figure: focal point, focal length f.)

Thin lens equation (cont'd). Combining the equations (do at home): 1/u + 1/v = 1/f. The set of all such points forms a plane parallel to the image plane (the plane of focus).
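A small numeric check of the thin lens equation, solving for the image distance v given an object distance u and focal length f (the numbers are illustrative):

```python
def image_distance(u, f):
    """Thin lens equation 1/u + 1/v = 1/f, solved for the image distance v."""
    return 1.0 / (1.0 / f - 1.0 / u)

# An object 2 m from a 50 mm lens focuses ~51.3 mm behind the lens;
# as u -> infinity, v -> f.
print(image_distance(u=2000.0, f=50.0))   # ~51.28 (mm)
print(image_distance(u=1e9, f=50.0))      # ~50.0  (mm)
```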

Thin lens equation (cont'd). The thin lens equation implies that only points at distance u from the lens are "in focus" (i.e., their focal point lies on the image plane). Other points project to a "blur circle" or "circle of confusion" in the image (i.e., blurring occurs).

Depth of field (e.g., f/5.6 vs. f/32): changing the aperture size or focal length affects the depth of field.

Basic models of reflection. Specular: light bounces off at the incident angle (e.g., a mirror). Diffuse: light scatters in all directions (e.g., brick, cloth, rough wood). (Figure: incoming light at angle Θ; specular vs. diffuse reflection.)

Bidirectional Reflectance Distribution Function (BRDF): a model of local reflection that tells how bright a surface appears when viewed from one direction when light falls on it from another. (Figure: directions measured relative to the surface normal.)

The Retina

Two types of light-sensitive receptors. Cones: cone-shaped, less sensitive, operate in high light, color vision. Rods: rod-shaped, highly sensitive, operate at night, gray-scale vision. © Stephen E. Palmer, 2002

Color Vision

The raster image (pixel matrix): an image is stored as a grid of intensity values (e.g., values between 0 and 1 at each pixel).

Color images, multi-chip: light is split by wavelength onto separate sensors; bulky and expensive.

Color images: Bayer grid. Estimate RGB at 'G' cells from neighboring values. Slide by Steve Seitz.

HSV Color Space

Moravec corner detector. Change of intensity for the shift [u, v]: E(u, v) = sum over (x, y) of w(x, y) [I(x+u, y+v) - I(x, y)]^2, where I is the intensity, w is the window function, and I(x+u, y+v) is the shifted intensity. Four shifts: (u, v) = (1,0), (1,1), (0,1), (-1,1). Look for local maxima in min{E}.

Harris Corner Detector
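A compact numpy/scipy sketch of the Harris corner response, assuming the usual choices (Sobel gradients, Gaussian window, k = 0.04); it follows the standard Harris formulation rather than reproducing any particular slide.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    second-moment matrix of image gradients, smoothed by a Gaussian window."""
    img = img.astype(float)
    ix = sobel(img, axis=1)          # horizontal gradient
    iy = sobel(img, axis=0)          # vertical gradient
    # Elements of the second-moment matrix M, weighted by a Gaussian window.
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det_m = ixx * iyy - ixy ** 2
    trace_m = ixx + iyy
    return det_m - k * trace_m ** 2

# Corners are local maxima of the response above a threshold.
```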

Image filtering: a box filter, a kernel of ones divided by 9 (a 3x3 local average), applied to an example image of 0s and 90s.

Practice with linear filters: a sharpening filter (e.g., twice the impulse minus a box filter) accentuates differences with the local average. Source: D. Lowe

Other filters: the Sobel kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] responds to vertical edges (take the absolute value of the output).

Important filter: Gaussian, a spatially-weighted average (e.g., a 5x5 kernel with sigma = 1, whose weights fall from about 0.159 at the center to about 0.003 at the corners).
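A small sketch of building such a kernel, assuming a square window sampled from the 2D Gaussian and normalized to sum to 1; the 5x5, sigma = 1 case roughly reproduces the weights mentioned above.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Square Gaussian kernel, normalized so the weights sum to 1."""
    half = size // 2
    xs = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(xs, xs)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

kernel = gaussian_kernel(5, 1.0)
print(np.round(kernel, 3))   # center ~0.162, corners ~0.003

# Smoothing = convolution with the Gaussian kernel, e.g.:
# smoothed = convolve(image.astype(float), kernel, mode='reflect')
```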

Smoothing with Gaussian filter

Smoothing with box filter

A sum of sines. Our building block: A sin(omega x + phi). Add enough of them together and you can get any signal f(x) you want!

Fourier analysis in images: an intensity image and the magnitude of its Fourier transform (the "Fourier image").
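A short sketch of how such a Fourier image is typically computed and displayed (log magnitude, shifted so low frequencies sit at the center); the test image here is synthetic, not from the slides.

```python
import numpy as np

# Synthetic intensity image: a horizontal sinusoidal grating.
x = np.arange(256)
image = 0.5 + 0.5 * np.sin(2 * np.pi * 8 * x / 256)[None, :] * np.ones((256, 1))

# Fourier image: log magnitude of the 2D FFT, shifted so the zero
# frequency (DC) is at the center of the display.
spectrum = np.fft.fftshift(np.fft.fft2(image))
fourier_image = np.log1p(np.abs(spectrum))

# The grating shows up as a bright DC peak plus two symmetric peaks
# at +/- 8 cycles per image width along the horizontal frequency axis.
```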

Gaussian

Box Filter

Aliasing problem. 1D example (sine wave). Source: S. Marschner

Aliasing in video

Aliasing in graphics

Subsampling without pre-filtering: 1/2, 1/4 (shown at 2x zoom), 1/8 (shown at 4x zoom).

Subsampling with Gaussian pre-filtering
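A minimal sketch contrasting naive subsampling with Gaussian pre-filtering before subsampling (the anti-aliased image-pyramid idea); the parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subsample(image, factor=2):
    """Naive subsampling: keep every factor-th pixel (prone to aliasing)."""
    return image[::factor, ::factor]

def subsample_with_prefilter(image, factor=2, sigma=1.0):
    """Gaussian pre-filtering removes high frequencies before subsampling,
    which suppresses aliasing artifacts."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    return smoothed[::factor, ::factor]
```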

Template matching. Goal: find a given template in an image. Main challenge: what is a good similarity or distance measure between two patches? Candidates: correlation, zero-mean correlation, sum of squared differences, normalized cross-correlation.

Normalized cross-correlation: correlate the mean-subtracted template g with each mean-subtracted image patch f and divide by the product of their norms, h[m, n] = sum_{k,l} (g[k,l] - g_mean)(f[m+k, n+l] - f_mean[m,n]) / ( sqrt(sum (g - g_mean)^2) * sqrt(sum (f - f_mean[m,n])^2) ). Invariant to the mean and scale of intensity.
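A straightforward (unoptimized) sketch of normalized cross-correlation for template matching; function and variable names are illustrative.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template with every patch of the
    image (valid positions only). Values are in [-1, 1]; peaks mark matches."""
    image = image.astype(float)
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    th, tw = t.shape
    out_h = image.shape[0] - th + 1
    out_w = image.shape[1] - tw + 1
    result = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm > 0 and t_norm > 0:
                result[i, j] = (t * p).sum() / (t_norm * p_norm)
    return result

# Matches are positions where ncc_map(...) exceeds a threshold (e.g., 0.9).
```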

Matching with filters (normalized cross-correlation): input image, normalized cross-correlation map, thresholded image, true detections.

Reducing salt-and-pepper noise by Gaussian smoothing (3x3, 5x5, 7x7 kernels). What's wrong with these results?

Alternative idea: median filtering. A median filter operates over a window by selecting the median intensity in the window. Better at salt-and-pepper noise. Not a convolution: try a region of 1's containing a single 2, and then a single 3. Is median filtering linear?
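A minimal sliding-window median filter sketch in numpy (edges handled by reflection padding); in practice scipy.ndimage.median_filter does the same job.

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size neighborhood.
    Nonlinear: median(f + g) != median(f) + median(g) in general."""
    half = size // 2
    padded = np.pad(image, half, mode='reflect')
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window)
    return out
```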

Figure: salt-and-pepper noise and the median-filtered result.

Effects of noise. Consider a single row or column of the image; plotting intensity as a function of position gives a signal. Where is the edge? How to fix?

Solution: smooth first. Convolve the signal f with a Gaussian g; to find edges, look for peaks in d/dx (f * g). Source: S. Seitz

Derivative theorem of convolution. Differentiation is a convolution, and convolution is associative: d/dx (f * g) = f * (d/dx g). So we can convolve f directly with the derivative of the Gaussian instead of smoothing and then differentiating.

Derivative of Gaussian filter
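A short sketch of smoothing-plus-differentiation via derivative-of-Gaussian filters and the gradient magnitude, the quantity that Canny thresholds; scipy's gaussian_filter supports derivative orders directly, so no explicit kernel is built here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(image, sigma=1.0):
    """Convolve with derivative-of-Gaussian filters (d/dx g and d/dy g)
    and return the gradient magnitude used by edge detectors like Canny."""
    image = image.astype(float)
    gx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along x (columns)
    gy = gaussian_filter(image, sigma, order=(1, 0))  # derivative along y (rows)
    return np.sqrt(gx ** 2 + gy ** 2)
```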

Final Canny Edges

Estimating Camera Parameters Alper Yilmaz, CAP5415, Fall 2004

Ames Room

Julesz's random-dot stereograms had a huge impact because they showed that object recognition is not needed for stereo.

Epipolar Constraint

Basic stereo derivations. Disparity: d = x_l - x_r = f * B / Z, where B is the baseline between the two cameras and Z is the depth of the point.
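A one-function numeric sketch of this inverse relationship, assuming rectified cameras with focal length f in pixels and baseline B in meters (values are illustrative):

```python
def depth_from_disparity(disparity_px, f_px, baseline_m):
    """Rectified stereo: Z = f * B / d. Larger disparity means closer points."""
    return f_px * baseline_m / disparity_px

# A 700-pixel-focal-length rig with a 10 cm baseline seeing 35 px of
# disparity implies a depth of 2 m.
print(depth_from_disparity(35.0, 700.0, 0.10))   # 2.0
```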

We can always achieve this geometry with image rectification (image reprojection): reproject the image planes onto a common plane parallel to the line between the optical centers. (Seitz)

Using these constraints, we can use matching for stereo: for each pixel in the left image, compare it with every pixel on the same epipolar line in the right image, and pick the pixel with the minimum match cost. Matching single pixels will never work, so improvement: match windows (a block-matching sketch follows below).
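A minimal SSD block-matching sketch over rectified images; the window size and disparity range are illustrative parameters, and the loops are kept explicit for clarity rather than speed.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, win=5):
    """SSD block matching on rectified images: for each left-image pixel,
    search along the same row in the right image and keep the disparity
    with the minimum window match cost."""
    left = left.astype(float)
    right = right.astype(float)
    half = win // 2
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```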

Stereo matching as energy minimization: minimize an energy over the disparity field D with a data term such as sum_i (W1(i) - W2(i + D(i)))^2 plus a smoothness term penalizing differences D(i) - D(j) between neighboring pixels. Energy functions of this form can be minimized using graph cuts. Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001.

Active stereo with structured light L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002

Random Dot

Time to collision. For an object of true size L at distance D_o at t = 0, approaching a camera of focal length f with velocity v, the image size l(t) grows as the object approaches. The time to collision D(t)/v equals l(t) divided by its rate of growth, so it can be measured directly from the image, without knowing L, D_o, or v!
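A small numeric sketch of that result, tau = l / (dl/dt), estimated from the object's image size in two consecutive frames; the numbers are illustrative and constant velocity is assumed.

```python
def time_to_collision(l_prev, l_curr, dt):
    """Estimate time to collision as tau = l / (dl/dt) from the object's
    image size in two frames taken dt seconds apart."""
    dl_dt = (l_curr - l_prev) / dt
    return l_curr / dl_dt

# An object whose image grows from 100 to 105 pixels in 1/30 s is about
# 105 / (5 * 30) = 0.7 s from collision.
print(time_to_collision(100.0, 105.0, 1.0 / 30.0))   # 0.7
```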

2D Motion Field

2D Optical Flow Apparent motion of image brightness pattern

2D motion field and 2D optical flow. Motion field: projection of 3D motion vectors onto the image plane. Optical flow field: apparent motion of brightness patterns. We equate the motion field with the optical flow field.

Brightness constancy equation. Assume the brightness of each image point stays constant over time: I(x(t), y(t), t) = constant. Taking the derivative with respect to time gives I_x u + I_y v + I_t = 0, where (u, v) is the image velocity.

Normal Motion/Aperture Problem

Barber pole illusion. The aperture problem: u and v are two unknowns, but brightness constancy gives only one equation per pixel (see the sketch below).
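The slides stop at the aperture problem; one standard way to recover both u and v (not necessarily covered on these slides) is the Lucas-Kanade approach: stack the brightness-constancy equations over a window and solve by least squares. A minimal sketch, assuming precomputed derivative images:

```python
import numpy as np

def lucas_kanade_flow(ix, iy, it, x, y, half=7):
    """Solve I_x*u + I_y*v + I_t = 0 by least squares over a window
    centered at (x, y). ix, iy, it are spatial and temporal derivative
    images of the same shape. Returns the flow vector (u, v)."""
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    a = np.stack([ix[win].ravel(), iy[win].ravel()], axis=1)   # N x 2
    b = -it[win].ravel()                                       # N
    flow, *_ = np.linalg.lstsq(a, b, rcond=None)
    return flow   # (u, v)
```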

Full 3D rotation. Any rotation can be expressed as a combination of three rotations about three axes. Rows (and columns) of R are orthonormal vectors. R has determinant 1 (not -1).
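A small sketch composing a rotation from rotations about the three axes and checking the two properties above; the Z-Y-X order used here is just one convention.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Any rotation can be written as a product of rotations about the axes.
R = rot_z(0.3) @ rot_y(-0.5) @ rot_x(1.2)
print(np.allclose(R @ R.T, np.eye(3)))    # True: rows/columns orthonormal
print(np.isclose(np.linalg.det(R), 1.0))  # True: determinant +1, not -1
```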

Velocity Model in 2D Perspective projection