Computational Photography

Computational Photography Prof. Feng Liu Spring 2017 http://www.cs.pdx.edu/~fliu/courses/cs510/ 05/01/2017 Computer Graphics

Last Time Panorama Feature detection

Today Panorama Feature matching Homography estimation With slides by C. Dyer, Y. Chuang, R. Szeliski, S. Seitz, M. Brown and V. Hlavac

Stitching Recipe Align pairs of images: Feature Detection, Feature Matching, Homography Estimation. Align all to a common frame. Adjust (Global) & Blend

Invariant Local Features Goal: Detect the same interest points regardless of image changes due to translation, rotation, scale, viewpoint

Feature Point Descriptors After detecting points (and patches) in each image, the next question is: how do we match them? A point descriptor should be invariant and distinctive. All the following slides are from Prof. C. Dyer's relevant course, except those with explicit acknowledgement.

Local Features: Description Detection: identify the interest points. Description: extract a feature vector for each interest point. Matching: determine correspondence between descriptors in two views.

Geometric Transformations e.g. scale, translation, rotation

Photometric Transformations Figure from T. Tuytelaars ECCV 2006 tutorial

Raw Patches as Local Descriptors The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector. But this is very sensitive to even small shifts or rotations.

Making Descriptors Invariant to Rotation Find the local orientation (the dominant direction of the gradient) and compute the description relative to this orientation. [1] K. Mikolajczyk, C. Schmid. "Indexing Based on Scale Invariant Interest Points". ICCV 2001. [2] D. Lowe. "Distinctive Image Features from Scale-Invariant Keypoints". IJCV 2004.

SIFT Descriptor: Select Major Orientation Compute a histogram of local gradient directions, computed at the selected scale in the neighborhood of a feature point, relative to the dominant local orientation. Compute gradients within sub-patches and compute histograms of orientations using discrete "bins". The descriptor is rotation and scale invariant, and also has some illumination invariance (why?)

SIFT Descriptor Compute gradient orientation histograms on 4 x 4 neighborhoods over a 16 x 16 array of locations in scale space around each keypoint position, relative to the keypoint orientation, using thresholded image gradients from the Gaussian pyramid level at the keypoint's scale. Quantize orientations to 8 values. The 4 x 4 array of histograms gives a SIFT feature vector of length 4 x 4 x 8 = 128 values for each keypoint. Normalize the descriptor to make it invariant to intensity change. Note: each bin is the sum of the gradient magnitudes with that direction, so normalizing the descriptor makes it invariant to intensity change: "the vector is normalized to unit length. A change in image contrast in which each pixel value is multiplied by a constant will multiply gradients by the same constant, so this contrast change will be canceled by vector normalization." D. Lowe. "Distinctive Image Features from Scale-Invariant Keypoints," IJCV 2004.
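The steps above can be sketched in NumPy (a simplified, single-scale version; real SIFT also uses a Gaussian pyramid, Gaussian weighting, trilinear interpolation, and orientation assignment):

```python
import numpy as np

def sift_like_descriptor(patch):
    """Simplified SIFT-style descriptor for a 16x16 patch:
    a 4x4 grid of 8-bin gradient-orientation histograms -> 128-D vector.
    Omits scale selection, Gaussian weighting, and interpolation."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ori = np.arctan2(gy, gx)                       # orientation in [-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    desc = np.zeros(128)
    for cy in range(4):                            # 4x4 grid of 4x4 sub-patches
        for cx in range(4):
            sub_b = bins[4*cy:4*cy+4, 4*cx:4*cx+4]
            sub_m = mag[4*cy:4*cy+4, 4*cx:4*cx+4]
            hist = np.bincount(sub_b.ravel(), weights=sub_m.ravel(), minlength=8)
            desc[(cy*4 + cx)*8:(cy*4 + cx)*8 + 8] = hist
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc       # unit length -> contrast invariant

patch = np.random.default_rng(0).random((16, 16))
d1 = sift_like_descriptor(patch)
d2 = sift_like_descriptor(2.0 * patch)             # simulate a contrast change
print(np.allclose(d1, d2))                         # normalization cancels the scaling
```

Doubling the patch intensities doubles every gradient magnitude, so the unnormalized histograms double as well and the unit-length normalization cancels the change, as the quote from Lowe describes.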

Feature Detection and Description Summary Stable (repeatable) feature points can currently be detected that are invariant to rotation, scale, and affine transformations, but not to more general perspective and projective transformations. Feature point descriptors can be computed, but they are noisy due to the use of differential operators and are not invariant to projective transformations.

Feature Matching

Wide-Baseline Feature Matching Standard approach for pair-wise matching: for each feature point in image A, find the feature point with the closest descriptor in image B. From Schaffalitzky and Zisserman '02

Wide-Baseline Feature Matching Compare the distance d1 to the closest feature with the distance d2 to the second closest feature; accept the match if d1/d2 < 0.6, i.e., keep the feature only if the ratio of distances is below the threshold. Why the ratio test? It eliminates hard-to-match repeated features, and distances in SIFT descriptor space seem to be non-uniform.
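The ratio test above can be sketched as follows (a minimal brute-force version; the descriptors here are synthetic stand-ins for SIFT vectors):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.6):
    """For each descriptor in A, find its two nearest neighbors in B and
    keep the match only if d1/d2 < ratio (the ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - da, axis=1)
        j1, j2 = np.argsort(dists)[:2]        # closest and second closest
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
        # otherwise the match is ambiguous (e.g. repeated structure) and dropped
    return matches

rng = np.random.default_rng(1)
desc_b = rng.random((50, 128))
desc_a = desc_b[:5] + 0.01 * rng.random((5, 128))  # 5 true correspondences
print(ratio_test_match(desc_a, desc_b))
```

For a repeated structure, the second-closest descriptor is almost as close as the closest one, so the ratio approaches 1 and the match is rejected.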

Feature Matching Exhaustive search: for each feature in one image, look at all the features in the other image(s). Hashing: compute a short descriptor from each feature vector, or hash longer descriptors (randomly). Nearest neighbor techniques: k-d trees and their variants.

Wide-Baseline Feature Matching Because of the high dimensionality of features, approximate nearest neighbors are necessary for efficient performance. See the ANN package by Mount and Arya: http://www.cs.umd.edu/~mount/ANN/
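The hashing option from the previous slide can be sketched with random-projection binary codes, an LSH-style scheme (all names and parameters here are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def hash_codes(descs, planes):
    """Map descriptors to short binary codes via random hyperplanes (LSH):
    descriptors on the same side of each plane share a bit."""
    return (descs @ planes > 0).astype(np.uint8)

descs = rng.normal(size=(1000, 128))
planes = rng.normal(size=(128, 16))       # 128-D descriptors -> 16-bit codes
codes = hash_codes(descs, planes)

query = descs[42] + 1e-3 * rng.normal(size=128)   # noisy copy of item 42
qcode = hash_codes(query[None, :], planes)[0]

# search only among descriptors whose code nearly matches the query's code
hamming = (codes != qcode).sum(axis=1)
candidates = np.flatnonzero(hamming <= 2)
best = candidates[np.argmin(np.linalg.norm(descs[candidates] - query, axis=1))]
print(best)   # index of the (approximate) nearest neighbor
```

The short codes prune most of the database before any exact distance is computed, which is the point of hashing: candidate lookup is cheap, and the exact comparison runs only on the small candidate set.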

Stitching Recipe Align pairs of images: Feature Detection, Feature Matching, Homography Estimation. Align all to a common frame. Adjust (Global) & Blend

What can be globally aligned? In image stitching, we seek a model to globally warp one image into another. Can any two images of the same scene be aligned this way? Yes, in two cases: images captured with the same center of projection, or a planar (or far-away) scene. Credit: Y.Y. Chuang

A pencil of rays contains all views (real camera, synthetic camera). We can generate any synthetic camera view as long as it has the same center of projection! Credit: Y.Y. Chuang

Mosaic as an image reprojection The images are reprojected onto a common projection plane; the mosaic is formed on this plane. A mosaic is a synthetic wide-angle camera. Credit: Y.Y. Chuang

Changing camera center Does it still work? PP1 PP2 Credit: Y.Y. Chuang

Planar scene (or a faraway one) PP3 is a projection plane of both centers of projection (PP1, PP2), so we are OK! This is how big aerial photographs are made. Credit: Y.Y. Chuang

Motion models Parametric models serve as the assumptions on the relation between two images. Credit: Y.Y. Chuang

2D Motion models Credit: Y.Y. Chuang

Motion models (figure): Translation (2 unknowns), Affine, Perspective, 3D rotation. Credit: Y.Y. Chuang
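The hierarchy of 2D motion models above can be written as 3x3 matrices acting on homogeneous coordinates (a sketch; the matrix entries are arbitrary example values):

```python
import numpy as np

def warp(M, p):
    """Apply a 3x3 transform to a 2D point in homogeneous coordinates."""
    x, y, w = M @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])            # perspective division

# Translation: 2 unknowns (tx, ty)
T = np.array([[1, 0, 3],
              [0, 1, 5],
              [0, 0, 1]], dtype=float)

# Affine: 6 unknowns (2x2 linear part + translation)
A = np.array([[1.1,  0.2, 3],
              [-0.1, 0.9, 5],
              [0,    0,   1]], dtype=float)

# Perspective (homography): 8 unknowns (defined only up to scale)
H = np.array([[1.1,   0.2,   3],
              [-0.1,  0.9,   5],
              [0.001, 0.002, 1]], dtype=float)

p = (10.0, 20.0)
print(warp(T, p))   # pure translation: [13, 25]
print(warp(A, p))   # affine: linear map plus translation
print(warp(H, p))   # perspective: division by w != 1
```

Each model is a special case of the next: the translation and affine matrices have a bottom row of (0, 0, 1), so w stays 1, while the homography's nonzero bottom row makes the division genuinely perspective.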

Determine pairwise alignment? Feature-based methods use only feature points to estimate the parameters. We will study the "Recognising Panoramas" paper published in ICCV 2003. Run SIFT (or another feature algorithm) on each image and find feature matches. Credit: Y.Y. Chuang

Determine pairwise alignment p' = Mp, where M is a transformation matrix and p and p' are matched feature points. It is possible to use more complicated models such as affine or perspective. For example, assume M is a 2x2 matrix; find the M with the least squared error. Credit: Y.Y. Chuang

Determine pairwise alignment Over-determined system Credit: Y.Y. Chuang

Normal equation Given an over-determined system Ax = b, the normal equation is A^T A x = A^T b; its solution is the x that minimizes the sum of squared differences between the left and right sides. Why? Credit: Y.Y. Chuang

Normal equation Credit: Y.Y. Chuang
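The least-squares fit above can be sketched for the slide's 2x2 model p' = Mp (a minimal example with synthetic, noise-free matches; in practice np.linalg.lstsq solves the same problem more stably):

```python
import numpy as np

rng = np.random.default_rng(3)
M_true = np.array([[1.2, -0.3],
                   [0.4,  0.9]])

p = rng.random((20, 2))            # feature points in image A
p_prime = p @ M_true.T             # matched points in image B (noise-free here)

# Stack each match (x, y) -> (x', y') into A m = b, where m holds the
# four unknown entries of M:
#   [x y 0 0] . [m11 m12 m21 m22]^T = x'
#   [0 0 x y] . [m11 m12 m21 m22]^T = y'
A = np.zeros((40, 4))
b = np.zeros(40)
for i, ((x, y), (xp, yp)) in enumerate(zip(p, p_prime)):
    A[2*i]     = [x, y, 0, 0]
    A[2*i + 1] = [0, 0, x, y]
    b[2*i], b[2*i + 1] = xp, yp

# Normal equation: (A^T A) m = A^T b minimizes ||A m - b||^2
m = np.linalg.solve(A.T @ A, A.T @ b)
print(m.reshape(2, 2))             # recovers M_true
```

Twenty matches give forty equations for four unknowns, which is exactly the over-determined system of the preceding slides; with noisy matches the same formula returns the least-squares estimate rather than an exact solution.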

Determine pairwise alignment p' = Mp, where M is a transformation matrix and p and p' are matched feature points. For a translation model, it is even easier. What if a match is false? We need to avoid the impact of outliers. Credit: Y.Y. Chuang

RANSAC [Fischler and Bolles 81] RANSAC = Random Sample Consensus. An algorithm for robust fitting of models in the presence of many data outliers (compare to robust statistics). Given N data points x_i, assume that the majority of them are generated from a model with parameters θ; try to recover θ. M. A. Fischler and R. C. Bolles. "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Comm. of the ACM 24 (6): 381–395. Credit: Y.Y. Chuang

RANSAC algorithm Run k times: (1) draw n samples randomly; (2) fit parameters θ with these n samples; (3) for each of the other N − n points, calculate its distance to the fitted model and count the number of inlier points, c. Output the θ with the largest c. (How many times k? See the next slide. How big should n be? Smaller is better. How to define an inlier? It depends on the problem.) Credit: Y.Y. Chuang
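The loop above can be sketched for the line-fitting example shown on the following slides (a minimal sketch; the threshold and trial count are illustrative, and n = 2 points define a line):

```python
import numpy as np

def ransac_line(points, k=100, threshold=0.1, rng=None):
    """RANSAC for a line: run k times -- draw 2 samples, fit a line,
    count inliers c; return the line with the largest consensus set."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_line, best_count = None, -1
    for _ in range(k):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2                 # line a*x + b*y + c = 0
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        a, b = a / norm, b / norm               # unit normal -> exact distances
        c = -(a * x1 + b * y1)
        dist = np.abs(points @ np.array([a, b]) + c)
        count = int((dist < threshold).sum())   # consensus set size
        if count > best_count:
            best_line, best_count = (a, b, c), count
    return best_line, best_count

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 30)
inliers = np.column_stack([x, 2 * x + 1 + 0.01 * rng.normal(size=30)])
outliers = rng.uniform(-2, 4, size=(10, 2))
points = np.vstack([inliers, outliers])
line, count = ransac_line(points)
print(count)
```

With 100 trials and a 75% inlier ratio, some sample pair is almost certainly outlier-free, so the winning model recovers nearly all 30 true inliers even though a least-squares fit over all 40 points would be dragged off by the outliers.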

How to determine k Let p be the probability that a point is a real inlier and P the desired probability of success after k trials. A sample of n points is all inliers with probability p^n, so all k trials fail with probability (1 − p^n)^k; setting this equal to 1 − P gives k = log(1 − P) / log(1 − p^n). For P = 0.99: n = 3, p = 0.5 gives k = 35; n = 6, p = 0.6 gives k = 97; n = 6, p = 0.5 gives k = 293. Credit: Y.Y. Chuang
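The trial count can be computed directly from the failure probability (a small helper reproducing the slide's table):

```python
import math

def ransac_trials(n, p, P=0.99):
    """Number of trials k so that with probability P at least one sample
    of n points is all inliers, given inlier ratio p:
        1 - (1 - p**n)**k >= P   =>   k >= log(1-P) / log(1 - p**n)"""
    return math.ceil(math.log(1 - P) / math.log(1 - p ** n))

print(ransac_trials(3, 0.5))   # 35
print(ransac_trials(6, 0.6))   # 97
print(ransac_trials(6, 0.5))   # 293
```

Note how quickly k grows with the sample size n: this is why smaller n is better, as the previous slide says.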

Example: line fitting Credit: Y.Y. Chuang

Example: line fitting n=2 Credit: Y.Y. Chuang

Model fitting Credit: Y.Y. Chuang

Measure distances Credit: Y.Y. Chuang

Count inliers c=3 Credit: Y.Y. Chuang

Another trial c=3 Credit: Y.Y. Chuang

The best model c=15 Credit: Y.Y. Chuang

RANSAC for Homography Credit: Y.Y. Chuang

RANSAC for Homography Credit: Y.Y. Chuang

RANSAC for Homography Credit: Y.Y. Chuang
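The model-fitting step inside each RANSAC iteration above can be sketched with the direct linear transform (DLT) on four or more correspondences (a minimal sketch; production code would normalize the coordinates first, as e.g. cv2.findHomography does):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    pairs via the direct linear transform (SVD of the 2n x 9 system)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence gives two linear equations in the entries of H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)          # null-space vector = solution up to scale
    return H / H[2, 2]                # fix the scale (8 degrees of freedom)

# Synthetic check: project known points through a known homography
H_true = np.array([[1.0,   0.1,   5.0],
                   [-0.2,  0.9,   3.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]], dtype=float)
dst_h = np.column_stack([src, np.ones(5)]) @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]     # perspective division
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))   # True
```

Within RANSAC, this fit runs on each random sample of four matches, and the distances of the remaining matches to their reprojected positions decide the inlier count.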

Next Time Panorama Blending Multi-perspective panoramas