CS5760: Computer Vision Lecture 9: RANSAC Noah Snavely http://www.wired.com/gadgetlab/2010/07/camera-software-lets-you-see-into-the-past/

Reading: Szeliski, Chapter 6.1

Outliers (figure: a set of data points, most of them inliers plus a few outliers)

Robustness. Let's consider a simpler example: linear regression. Problem: fit a line to these data points. The least squares fit is skewed by outliers; how can we fix this?

We need a better cost function… Suggestions?

Idea: given a hypothesized line, count the number of points that "agree" with the line, where "agree" means lying within a small distance of the line, i.e., being an inlier to that line. Over all candidate lines, select the one with the largest number of inliers.
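
As a concrete illustration, here is a minimal Python sketch of the inlier count for a hypothesized line y = m*x + b, using the perpendicular point-to-line distance; the function name and the threshold value are illustrative, not part of the lecture.

    import numpy as np

    def count_inliers(points, m, b, threshold=1.0):
        """Count the points lying within `threshold` of the line y = m*x + b.

        points: (N, 2) array of (x, y) coordinates.
        Uses the perpendicular distance |m*x - y + b| / sqrt(m^2 + 1).
        """
        x, y = points[:, 0], points[:, 1]
        dist = np.abs(m * x - y + b) / np.sqrt(m**2 + 1)
        return int(np.sum(dist < threshold))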

Counting inliers

Counting inliers (figure: one candidate line, with 3 inliers)

Counting inliers (figure: a better candidate line, with 20 inliers)

How do we find the best line? Unlike least squares, there is no simple closed-form solution. Instead, hypothesize and test: try out many lines and keep the best one. But which lines should we try?

Translations

RAndom SAmple Consensus Select one match at random, count inliers

RAndom SAmple Consensus Select another match at random, count inliers

RAndom SAmple Consensus Output the translation with the highest number of inliers
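
A minimal sketch of this translation loop, assuming the matches are given as two NumPy arrays of corresponding point locations; the function name, the number of rounds, and the pixel threshold are illustrative.

    import numpy as np

    def ransac_translation(pts1, pts2, num_rounds=100, threshold=3.0):
        """pts1, pts2: (N, 2) arrays of matched feature locations.

        Each round: pick one match at random, hypothesize the translation
        pts2[i] - pts1[i], and count how many matches agree within `threshold`.
        Returns the translation with the largest number of inliers.
        """
        best_t, best_count = None, -1
        for _ in range(num_rounds):
            i = np.random.randint(len(pts1))
            t = pts2[i] - pts1[i]                      # hypothesized translation
            residuals = np.linalg.norm(pts2 - (pts1 + t), axis=1)
            n_inliers = int(np.sum(residuals < threshold))
            if n_inliers > best_count:
                best_t, best_count = t, n_inliers
        return best_t, best_count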

RANSAC. Idea: all the inliers will agree with each other on the translation vector, while the (hopefully small) number of outliers will (hopefully) disagree with each other. RANSAC only has guarantees when fewer than 50% of the matches are outliers. "All good matches are alike; every bad match is bad in its own way." – Tolstoy via Alyosha Efros

RANSAC. The inlier threshold is related to the amount of noise we expect in inliers; the noise is often modeled as Gaussian with some standard deviation (e.g., 3 pixels). The number of rounds is related to the percentage of outliers we expect and the probability of success we'd like to guarantee. Suppose there are 20% outliers and we want to find the correct answer with 99% probability: how many rounds do we need?
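
One way to answer this: a random sample of size s is all inliers with probability (1 - e)^s, so after N rounds the chance of never drawing an all-inlier sample is (1 - (1 - e)^s)^N. Requiring that to be at most 1 - p gives N >= log(1 - p) / log(1 - (1 - e)^s). A minimal helper (the function name is illustrative):

    import math

    def ransac_rounds(p, e, s):
        """Smallest N such that, with probability p, at least one of N random
        samples of size s contains only inliers, given an outlier ratio e."""
        return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

For the slide's example (20% outliers, 99% success probability), a one-match translation hypothesis (s = 1) needs 3 rounds and a two-point line fit (s = 2) needs 5; these match the table below.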

RANSAC (figure: match votes plotted in x-translation / y-translation space; set the inlier threshold so that, e.g., 95% of the Gaussian lies inside that radius)

RANSAC: back to linear regression. How do we generate a hypothesis?

RANSAC, general version: (1) Randomly choose s samples; typically s is the minimum sample size that lets you fit a model. (2) Fit a model (e.g., a line) to those samples. (3) Count the number of inliers that approximately fit the model. (4) Repeat N times. (5) Choose the model that has the largest set of inliers.
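
Specialized to line fitting (so s = 2 points per sample), the general loop might look like the sketch below; the defaults are illustrative, and real implementations add refinements such as adaptive round counts and a final refit on the inliers.

    import numpy as np

    def ransac_line(points, num_rounds=100, threshold=1.0):
        """RANSAC line fit: points is an (N, 2) array; returns ((m, b), inlier count)."""
        best_model, best_count = None, -1
        for _ in range(num_rounds):
            # 1. Randomly choose s = 2 samples (two distinct points).
            i, j = np.random.choice(len(points), size=2, replace=False)
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:                       # skip degenerate (vertical) samples
                continue
            # 2. Fit a model (a line y = m*x + b) to those samples.
            m = (y2 - y1) / (x2 - x1)
            b = y1 - m * x1
            # 3. Count the inliers that approximately fit the model.
            d = np.abs(m * points[:, 0] - points[:, 1] + b) / np.sqrt(m**2 + 1)
            count = int(np.sum(d < threshold))
            # 4. Keep the model with the largest inlier set.
            if count > best_count:
                best_model, best_count = (m, b), count
        return best_model, best_count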

How many rounds? If we have to choose s samples each time, with an outlier ratio e, and we want the right answer with probability p = 0.99:

                  proportion of outliers e
    s      5%    10%    20%    25%    30%    40%    50%
    2       2     3      5      6      7     11     17
    3       3     4      7      9     11     19     35
    4       3     5      9     13     17     34     72
    5       4     6     12     17     26     57    146
    6       4     7     16     24     37     97    293
    7       4     8     20     33     54    163    588
    8       5     9     26     44     78    272   1177

Source: M. Pollefeys

How big is s? For alignment, it depends on the motion model. Here, each sample is a correspondence (a pair of matching points).

RANSAC pros and cons. Pros: simple and general; applicable to many different problems; often works well in practice. Cons: parameters to tune; sometimes too many iterations are required; can fail for extremely low inlier ratios; we can often do better than brute-force sampling.

Final step: least squares fit. Find the average translation vector over all inliers.
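
For the translation model, that least squares refit reduces to averaging the per-match translations over the inlier set; a minimal sketch, with the same illustrative threshold as before:

    import numpy as np

    def refit_translation(pts1, pts2, t, threshold=3.0):
        """Refine a RANSAC translation estimate t by averaging over its inliers,
        which is the least squares solution for a pure translation model."""
        residuals = np.linalg.norm(pts2 - (pts1 + t), axis=1)
        inliers = residuals < threshold
        return (pts2[inliers] - pts1[inliers]).mean(axis=0)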

RANSAC is an example of a "voting"-based fitting scheme: each hypothesis gets voted on by each data point, and the best hypothesis wins. There are many other types of voting schemes, e.g., Hough transforms.

Panoramas. Now we know how to create panoramas! Given two images: Step 1: detect features. Step 2: match features. Step 3: compute a homography using RANSAC. Step 4: combine the images together (somehow). What if we have more than two images?
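
As a rough illustration of how these four steps fit together in code, here is a sketch using OpenCV (assuming a build with SIFT available, e.g., OpenCV 4.4+); the ratio-test value, the RANSAC reprojection threshold, and the output canvas size are illustrative, and step 4 here is just a naive overlay rather than proper blending.

    import cv2
    import numpy as np

    def stitch_pair(img1, img2):
        # Step 1: detect features.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Step 2: match features, keeping only unambiguous matches (ratio test).
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # Step 3: compute a homography (img2 -> img1) using RANSAC.
        src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Step 4: combine the images (naively: warp img2 into img1's frame).
        h, w = img1.shape[:2]
        pano = cv2.warpPerspective(img2, H, (2 * w, h))
        pano[:h, :w] = img1
        return pano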

Can we use homographies to create a 360° panorama? To figure this out, we need to learn what a camera is.

360° panorama

Questions?