Lecture 35: Course review CS4670: Computer Vision Noah Snavely

Announcements: In-class exam on Friday. Cumulative. Open-book / open-note.

Segmentation by min cut: the partitions S and T formed by the min cut give the optimal foreground and background segmentation, i.e., the resulting labels minimize the segmentation energy (match cost plus smoothness cost).

Topics – image processing
– Filtering
– Edge detection
– Image resampling / aliasing / interpolation
– Feature detection (Harris corners, SIFT, invariant features)
– Feature matching

Topics – 2D geometry
– Image transformations
– Image alignment / least squares
– RANSAC
– Panoramas
– Optical flow

Topics – 3D geometry
– Cameras
– Perspective projection
– Single-view modeling
– Stereo
– Two-view geometry (F-matrices, E-matrices)
– Structure from motion
– Multi-view stereo

Topics – recognition
– Skin detection / probabilistic modeling
– Eigenfaces
– Bag-of-words models
– Segmentation / graph cuts

Topics – light, reflectance, cameras
– Light, BRDFs
– Photometric stereo
– Computational photography

Image Processing

Linear filtering: one simple version is linear filtering (cross-correlation, convolution) – replace each pixel by a linear combination of its neighbors. The prescription for the linear combination is called the "kernel" (or "mask", or "filter"). Source: L. Zhang

Convolution: same as cross-correlation, except that the kernel is "flipped" horizontally and vertically. Convolution is commutative and associative.
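
A minimal NumPy sketch of both operations, assuming a grayscale float image and odd kernel dimensions (the edge-padding mode is an illustrative choice):

```python
import numpy as np

def cross_correlate(image, kernel):
    """Cross-correlation: replace each pixel by a weighted sum of its neighbors."""
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)), mode='edge')
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def convolve(image, kernel):
    """Convolution: cross-correlation with the kernel flipped in both directions."""
    return cross_correlate(image, kernel[::-1, ::-1])
```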

Gaussian Kernel Source: C. Rasmussen

Image gradient: the gradient of an image is the vector of partial derivatives (df/dx, df/dy). The gradient points in the direction of most rapid increase in intensity. The edge strength is given by the gradient magnitude sqrt((df/dx)^2 + (df/dy)^2), and the gradient direction is given by theta = atan2(df/dy, df/dx) – how does this relate to the direction of the edge? Source: Steve Seitz

Derivative of Gaussian filter x-direction y-direction

Finding edges gradient magnitude

Finding edges: thinning (non-maximum suppression)

Image sub-sampling: 1/2, 1/4 (2x zoom), 1/8 (4x zoom). Why does this look so crufty? Source: S. Seitz

Subsampling with Gaussian pre-filtering (Gaussian 1/2, G 1/4, G 1/8). Solution: filter the image, then subsample. Source: S. Seitz
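
A small sketch of the blur-then-subsample step, using SciPy's Gaussian filter; the sigma value is an illustrative assumption, not a prescribed one:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(image, sigma=1.0):
    """Blur with a Gaussian (removing frequencies that would alias), then keep every other pixel."""
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    return blurred[::2, ::2]

# Repeated halving builds the Gaussian pyramid: 1/2, 1/4, 1/8, ...
```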

Image interpolation: "ideal" reconstruction, nearest-neighbor interpolation, linear interpolation, Gaussian reconstruction. Source: B. Curless

Image interpolation: nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation (original image enlarged 10x).

The second moment matrix: the surface E(u,v) is locally approximated by a quadratic form, E(u,v) ≈ [u v] H [u v]^T, where H is the second moment matrix built from image derivatives (windowed sums of Ix^2, IxIy, Iy^2).

The Harris operator: f = det(H) / trace(H) is a variant of the "Harris operator" for feature detection. The trace is the sum of the diagonals, i.e., trace(H) = h11 + h22. It is very similar to lambda_min but less expensive (no square root). Called the "Harris Corner Detector" or "Harris Operator". There are lots of other detectors; this is one of the most popular.
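
A rough sketch of computing the det/trace Harris response with NumPy/SciPy; the smoothing width and the small epsilon guard are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, window_sigma=2.0):
    """Harris corner response det(H) / trace(H) from the second moment matrix H."""
    img = image.astype(float)
    Ix = sobel(img, axis=1)   # derivative in x (columns)
    Iy = sobel(img, axis=0)   # derivative in y (rows)
    # Entries of the second moment matrix, summed (here Gaussian-weighted) over a window.
    Sxx = gaussian_filter(Ix * Ix, window_sigma)
    Syy = gaussian_filter(Iy * Iy, window_sigma)
    Sxy = gaussian_filter(Ix * Iy, window_sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det / (trace + 1e-12)   # epsilon avoids division by zero in flat regions
```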

Laplacian of Gaussian: "blob" detector. Find maxima and minima of the LoG operator in space and scale.

Scale-space blob detector: Example

Feature distance: how to define the difference between two features f1, f2? Better approach: ratio distance = ||f1 - f2|| / ||f1 - f2'||, where f2 is the best SSD match to f1 in I2 and f2' is the 2nd-best SSD match to f1 in I2; this gives large values for ambiguous matches.
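
A short sketch of ratio-test matching, assuming descriptors are stored as rows of NumPy arrays; the 0.8 threshold is an illustrative value:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, threshold=0.8):
    """Match each row of desc1 (N1 x D) to desc2 (N2 x D), keeping unambiguous matches only."""
    matches = []
    for i, f1 in enumerate(desc1):
        dists = np.linalg.norm(desc2 - f1, axis=1)   # distances to all candidates in image 2
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best / (second + 1e-12) < threshold:      # ratio distance: small means unambiguous
            matches.append((i, int(order[0])))
    return matches
```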

2D Geometry

Parametric (global) warping: transformation T is a coordinate-changing machine: p' = T(p). What does it mean that T is global? – It is the same for any point p – It can be described by just a few numbers (parameters). Let's consider linear xforms, which can be represented by a 2D matrix: p' = M p, with p = (x,y) and p' = (x',y').

2D image transformations: these transformations form a nested set of groups – each is closed under composition, and the inverse of every member is also a member.

Projective transformations, aka homographies, aka planar perspective maps.

Inverse warping: get each pixel g(x',y') from its corresponding location (x,y) = T^-1(x',y') in f(x,y). Requires taking the inverse of the transform.
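
A minimal sketch of inverse warping with a 3x3 homogeneous transform, using nearest-neighbor sampling to keep it short (bilinear interpolation is the usual refinement):

```python
import numpy as np

def inverse_warp(src, T, out_shape):
    """Fill each output pixel g(x', y') by sampling f at (x, y) = T^-1 (x', y').

    T is a 3x3 homogeneous transform mapping source coordinates to output coordinates.
    """
    T_inv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=src.dtype)
    for yp in range(out_shape[0]):
        for xp in range(out_shape[1]):
            x, y, w = T_inv @ np.array([xp, yp, 1.0])
            x, y = x / w, y / w                      # homogeneous divide
            xi, yi = int(round(x)), int(round(y))    # nearest-neighbor sample
            if 0 <= xi < src.shape[1] and 0 <= yi < src.shape[0]:
                out[yp, xp] = src[yi, xi]
    return out
```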

Affine transformations

Matrix form: A t = b, where A is 2n x 6, t is 6 x 1, and b is 2n x 1 (stacking the 2n linear constraints from n point correspondences on the 6 affine parameters).

RANSAC (general version):
1. Randomly choose s samples – typically s = minimum sample size that lets you fit a model
2. Fit a model (e.g., line) to those samples
3. Count the number of inliers that approximately fit the model
4. Repeat N times
5. Choose the model that has the largest set of inliers
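
A compact sketch of this recipe, instantiated for 2D line fitting (s = 2); the iteration count and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iters=100, inlier_thresh=2.0):
    """RANSAC line fitting: sample, fit, count inliers, keep the model with the most inliers.

    points: (N, 2) array of 2D points.
    """
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        i, j = np.random.choice(len(points), size=2, replace=False)   # s = 2 for a line
        p1, p2 = points[i], points[j]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        normal = np.array([-d[1], d[0]]) / norm
        dists = np.abs((points - p1) @ normal)        # point-to-line distances
        inliers = dists < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p1, normal)
    return best_model, best_inliers
```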

Projecting images onto a common plane (the mosaic projection plane PP): each image is warped with a homography. Can't create a 360-degree panorama this way…

Optical flow

Lucas-Kanade flow: how to get more equations for a pixel? – Basic idea: impose additional constraints. Lucas-Kanade: pretend the pixel's neighbors have the same (u,v) – if we use a 5x5 window, that gives us 25 equations per pixel!
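
A per-pixel sketch of the resulting least-squares problem, assuming precomputed derivative images Ix, Iy, It and a 5x5 window well inside the image:

```python
import numpy as np

def lucas_kanade_pixel(Ix, Iy, It, y, x, half=2):
    """Solve for the flow (u, v) at one pixel from a (2*half+1)^2 window of gradient constraints.

    A 5x5 window (half=2) gives 25 equations Ix*u + Iy*v = -It in the 2 unknowns.
    """
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)   # 25 x 2
    b = -It[win].ravel()                                       # 25 x 1
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow   # (u, v)
```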

3D Geometry

Pinhole camera: add a barrier to block off most of the rays – This reduces blurring – The opening is known as the aperture – How does this transform the image?

Perspective projection: projection is a matrix multiply using homogeneous coordinates, followed by dividing by the third coordinate. This is known as perspective projection, and the matrix is the projection matrix.
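
A few lines sketching that pipeline, assuming a 3x3 intrinsics matrix K, rotation R, and translation t:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X: x ~ K [R | t] X_homogeneous, then divide by the third coordinate."""
    X_h = np.append(X, 1.0)                        # homogeneous 3D point (4-vector)
    P = K @ np.hstack([R, t.reshape(3, 1)])        # 3x4 projection matrix
    x = P @ X_h
    return x[:2] / x[2]                            # perspective divide
```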

Projection matrix: the full camera matrix factors into intrinsics, a projection, a rotation, and a translation (t in the book's notation), i.e., P = K [R | t].

Point and line duality – A line l is a homogeneous 3-vector – It is perpendicular to every point (ray) p on the line: l · p = 0. What is the line l spanned by rays p1 and p2? l is perpendicular to p1 and p2, so l = p1 x p2; l can be interpreted as a plane normal. What is the intersection of two lines l1 and l2? p is perpendicular to l1 and l2, so p = l1 x l2. Points and lines are dual in projective space.

Vanishing points – properties: – Any two parallel lines (in 3D) have the same vanishing point v – The ray from the camera center C through v is parallel to the lines – An image may have more than one vanishing point; in fact, every image point is a potential vanishing point.

Measuring height Camera height

Your basic stereo algorithm:
For each epipolar line
  For each pixel in the left image
    compare with every pixel on the same epipolar line in the right image
    pick the pixel with minimum match cost
Improvement: match windows
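
A direct (and deliberately slow) sketch of the window-based version for rectified images; the window half-width and disparity range are illustrative parameters:

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, half=3):
    """For each left pixel, scan the same scanline in the right image and pick the
    disparity with the minimum SSD match cost over a small window."""
    left = left.astype(float)
    right = right.astype(float)
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disparity[y, x] = int(np.argmin(costs))
    return disparity
```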

Stereo as energy minimization: a better objective function combines a match cost and a smoothness cost. We want each pixel to find a good match in the other image (match cost), and adjacent pixels should (usually) move about the same amount (smoothness cost).

Fundamental matrix: the epipolar geometry of two views is described by a very special 3x3 matrix F, called the fundamental matrix. F maps (homogeneous) points in image 1 to lines in image 2! The epipolar line (in image 2) of point p is F p, and the epipolar constraint on corresponding points p, p' is p'^T F p = 0.

Epipolar geometry demo

8-point algorithm: in reality, instead of solving A f = 0 exactly, we seek the f that minimizes ||A f|| subject to ||f|| = 1, which is the least eigenvector of A^T A.
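
A sketch of the unnormalized 8-point algorithm using the SVD (the last right singular vector of A is the least eigenvector of A^T A); Hartley normalization is omitted for brevity:

```python
import numpy as np

def eight_point(pts1, pts2):
    """Estimate F from >= 8 correspondences so that p2^T F p1 = 0, then enforce rank 2.

    pts1, pts2: (N, 2) arrays of matching image points.
    """
    A = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        A.append([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0])
    A = np.array(A)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)              # f minimizing ||A f|| with ||f|| = 1
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0                            # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt2
```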

Structure from motion: given the observed image points p_{i,j}, minimize the reprojection error f(R, T, P) over the camera rotations and translations (R_i, t_i) and the 3D points X_j – a non-linear least squares problem.

Stereo: another view (match error as a function of depth).

Recognition

Face detection Do these images contain faces? Where?

Skin classification techniques – skin classifier: given X = (R,G,B), how to determine if it is skin or not?
– Nearest neighbor: find the labeled pixel closest to X, and choose the label for that pixel
– Data modeling: fit a model (curve, surface, or volume) to each class
– Probabilistic data modeling: fit a probability model to each class

Dimensionality reduction: the set of faces is a "subspace" of the set of images. Suppose it is K-dimensional. We can find the best subspace using PCA. This is like fitting a "hyper-plane" to the set of faces – spanned by vectors v1, v2, ..., vK – so any face can be written (approximately) as the mean face plus a linear combination of these vectors.

Eigenfaces: PCA extracts the eigenvectors of A, giving a set of vectors v1, v2, v3, ... Each of these vectors is a direction in face space – what do these look like?
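
A compact sketch of eigenface computation via the SVD of the mean-subtracted data matrix; the choice of K is illustrative:

```python
import numpy as np

def eigenfaces(face_matrix, K=10):
    """PCA on a stack of faces: subtract the mean face, then take the top-K right
    singular vectors as the eigenfaces spanning the face subspace.

    face_matrix: (N, D) array, one flattened face image per row.
    """
    mean_face = face_matrix.mean(axis=0)
    A = face_matrix - mean_face
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return mean_face, Vt[:K]              # eigenfaces v_1 ... v_K

def project_face(face, mean_face, eigvecs):
    """Coefficients of a face in the eigenface basis (any face ~ mean + sum a_i v_i)."""
    return eigvecs @ (face - mean_face)
```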

Bag-of-words models: represent an image as a histogram of codeword frequencies.

Related problem: binary segmentation Suppose we want to segment an image into foreground and background

Binary segmentation as energy minimization: define a labeling L as an assignment of each pixel with a 0-1 label (background or foreground). Problem statement: find the labeling L that minimizes the sum of a match cost ("how similar is each labeled pixel to the foreground / background?") and a smoothness cost.

Light, reflectance, cameras

Radiometry: what determines the brightness of an image pixel? Light source properties, surface shape, surface reflectance properties, optics, sensor characteristics, and exposure. Slide by L. Fei-Fei

Classic reflection behavior: ideal specular, Lambertian, rough specular. From Steve Marschner

Photometric stereo: with a surface normal N, viewing direction V, and several known light directions L1, L2, L3, the Lambertian image intensities can be written as a matrix equation and solved for the normal (and albedo).
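
A per-pixel sketch of the least-squares solution under the Lambertian assumption; the array shapes are assumptions for illustration:

```python
import numpy as np

def photometric_stereo_pixel(intensities, light_dirs):
    """Recover a surface normal (scaled by albedo) at one pixel from >= 3 images.

    intensities: (M,) observed brightness under M lights; light_dirs: (M, 3) unit light
    directions. With the Lambertian model I_i = k_d (N . L_i), solve L g = I for g = k_d N.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / (albedo + 1e-12)
    return normal, albedo
```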

Example

Computational photography: the "analog" camera has changed very little in >100 years, and we're unlikely to get there by following this path. More promising is to combine "analog" optics with computational techniques – "computational cameras" or "computational photography". This lecture surveys techniques for producing higher-quality images by combining optics and computation. Common themes: take multiple photos; modify the camera.

Questions? Good luck!