
Fitting

We’ve learned how to detect edges, corners, and blobs. Now what? We would like to form a higher-level, more compact representation of the features in the image by grouping multiple features according to a simple model.
[Photo: street sign at 9300 Harris Corners Pkwy, Charlotte, NC]

Fitting
Choose a parametric model to represent a set of features: a simple model (lines, circles) or a complicated model (a car).
Source: K. Grauman

Fitting: Issues
– Noise in the measured feature locations
– Extraneous data: clutter (outliers), multiple lines
– Missing data: occlusions
Case study: line detection

Fitting: Overview
– If we know which points belong to the line, how do we find the “optimal” line parameters? Least squares
– What if there are outliers? Robust fitting, RANSAC
– What if there are many lines? Voting methods: RANSAC, Hough transform
– What if we’re not even sure it’s a line? Model selection

Least squares line fitting
Data: $(x_1, y_1), \dots, (x_n, y_n)$
Line equation: $y_i = m x_i + b$
Find $(m, b)$ to minimize $E = \sum_{i=1}^{n} (y_i - m x_i - b)^2$
In matrix form, with $Y = [y_1, \dots, y_n]^T$, $X$ the $n \times 2$ matrix with rows $(x_i, 1)$, and $B = (m, b)^T$: $E = \|Y - XB\|^2$. Normal equations: $X^T X B = X^T Y$, the least squares solution to $XB = Y$.
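
As a concrete illustration, here is a minimal NumPy sketch of this fit; the function name and synthetic data are mine, not from the slides.

import numpy as np

def fit_line_lsq(x, y):
    """Fit y = m*x + b by ordinary least squares via the normal equations."""
    X = np.column_stack([x, np.ones_like(x)])  # design matrix with rows (x_i, 1)
    # lstsq solves the normal equations stably without forming (X^T X)^{-1}.
    B, *_ = np.linalg.lstsq(X, y, rcond=None)
    return B  # (m, b)

# Example: noisy samples of y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)
m, b = fit_line_lsq(x, y)
print(m, b)  # close to 2 and 1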

Problem with “vertical” least squares
– Not rotation-invariant
– Fails completely for vertical lines

Total least squares
Distance between point $(x_i, y_i)$ and line $ax + by = d$ (with $a^2 + b^2 = 1$): $|a x_i + b y_i - d|$. Unit normal: $N = (a, b)$.
Find $(a, b, d)$ to minimize the sum of squared perpendicular distances $E = \sum_{i=1}^{n} (a x_i + b y_i - d)^2$.
Setting $\partial E / \partial d = 0$ gives $d = a\bar{x} + b\bar{y}$, where $(\bar{x}, \bar{y})$ is the centroid of the points; substituting back, $E = \|UN\|^2$, where $U$ is the $n \times 2$ matrix with rows $(x_i - \bar{x}, y_i - \bar{y})$.
Solution to $(U^T U)N = 0$, subject to $\|N\|^2 = 1$: the eigenvector of $U^T U$ associated with the smallest eigenvalue (least squares solution to the homogeneous linear system $UN = 0$).

Total least squares: second moment matrix
$U^T U = \begin{bmatrix} \sum_i (x_i - \bar{x})^2 & \sum_i (x_i - \bar{x})(y_i - \bar{y}) \\ \sum_i (x_i - \bar{x})(y_i - \bar{y}) & \sum_i (y_i - \bar{y})^2 \end{bmatrix}$
is the second moment matrix of the points about their centroid. The normal $N = (a, b)$ is its eigenvector with the smallest eigenvalue, so the fitted line passes through the centroid, perpendicular to the direction of least scatter.
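
A minimal NumPy sketch of total least squares via the eigenvector computation above (function name and example data are illustrative):

import numpy as np

def fit_line_tls(x, y):
    """Fit a*x + b*y = d, minimizing perpendicular distances (total least squares)."""
    xm, ym = x.mean(), y.mean()
    U = np.column_stack([x - xm, y - ym])        # centered points
    # eigh returns eigenvalues in ascending order for this symmetric matrix,
    # so column 0 is the eigenvector with the smallest eigenvalue: the normal N.
    _, eigvecs = np.linalg.eigh(U.T @ U)
    a, b = eigvecs[:, 0]
    d = a * xm + b * ym                          # line passes through the centroid
    return a, b, d

# Handles the vertical line x = 3, where fitting y = m*x + b fails:
x = np.full(10, 3.0)
y = np.linspace(0, 9, 10)
print(fit_line_tls(x, y))  # normal ≈ (±1, 0), d ≈ ±3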

Least squares as likelihood maximization
Generative model: line points are sampled independently and corrupted by Gaussian noise in the direction perpendicular to the line: $(x, y) = (u, v) + \varepsilon N$, where $(u, v)$ is a point on the line, $N = (a, b)$ is the normal direction, and the noise $\varepsilon$ is sampled from a zero-mean Gaussian with std. dev. $\sigma$.
Likelihood of the points given line parameters $(a, b, d)$:
$P(x_1, \dots, x_n \mid a, b, d) = \prod_{i=1}^{n} P(x_i \mid a, b, d) \propto \prod_{i=1}^{n} \exp\left(-\frac{(a x_i + b y_i - d)^2}{2\sigma^2}\right)$
Log-likelihood: $L = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (a x_i + b y_i - d)^2$
Maximizing the likelihood is therefore equivalent to minimizing the total least squares error.

Least squares: Robustness to noise
Least squares fit to the red points: with clean data the fit is good (figure omitted).
Least squares fit with an outlier: a single outlier drags the line away from the rest of the points. Problem: squared error heavily penalizes outliers.

Robust estimators
General approach: find model parameters $\theta$ that minimize $\sum_i \rho\big(r_i(x_i, \theta); \sigma\big)$, where $r_i(x_i, \theta)$ is the residual of the $i$th point w.r.t. model parameters $\theta$ and $\rho$ is a robust function with scale parameter $\sigma$.
The robust function $\rho$ behaves like squared distance for small values of the residual $u$ but saturates for larger values of $u$; a common choice is $\rho(u; \sigma) = \frac{u^2}{\sigma^2 + u^2}$.

Choosing the scale (figures omitted)
– Just right: the effect of the outlier is minimized.
– Too small: the error value is almost the same for every point and the fit is very poor.
– Too large: behaves much the same as least squares.

Robust estimation: Details
– Robust fitting is a nonlinear optimization problem that must be solved iteratively.
– The least squares solution can be used for initialization.
– Adaptive choice of scale: approximately 1.5 times the median residual (F&P).
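
One standard way to carry out this iterative optimization is iteratively reweighted least squares (IRLS). Below is a minimal NumPy sketch using the robust function $\rho(u; \sigma) = u^2/(\sigma^2 + u^2)$ named earlier; the weighting scheme, iteration count, and scale update are common choices of mine, not prescribed by the slides.

import numpy as np

def fit_line_robust(x, y, n_iters=20):
    """Robust fit of y = m*x + b by IRLS with rho(u; sigma) = u^2 / (sigma^2 + u^2)."""
    X = np.column_stack([x, np.ones_like(x)])
    B, *_ = np.linalg.lstsq(X, y, rcond=None)        # least squares initialization
    for _ in range(n_iters):
        r = y - X @ B                                # residuals of current fit
        sigma = 1.5 * np.median(np.abs(r)) + 1e-12   # adaptive scale: ~1.5 x median residual
        w = sigma**2 / (sigma**2 + r**2)**2          # IRLS weights: rho'(r) / (2r)
        sw = np.sqrt(w)
        B, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)  # weighted LSQ
    return B  # (m, b)

# A gross outlier barely moves the robust fit, unlike plain least squares:
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2 * x + 1 + rng.normal(scale=0.3, size=x.size)
y[5] += 40
print(fit_line_robust(x, y))  # still close to (2, 1)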

RANSAC
Robust fitting can deal with a few outliers – but what if we have very many? Random sample consensus (RANSAC) is a very general framework for model fitting in the presence of outliers.
Outline:
1. Choose a small subset of points uniformly at random
2. Fit a model to that subset
3. Find all remaining points that are “close” to the model and reject the rest as outliers
4. Do this many times and choose the best model
M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol. 24, pp. 381–395, 1981.

RANSAC for line fitting example
A least-squares fit to points contaminated by outliers is badly skewed; RANSAC instead repeats the following hypothesize-and-verify loop (figures omitted):
1. Randomly select minimal subset of points
2. Hypothesize a model
3. Compute error function
4. Select points consistent with model
5. Repeat hypothesize-and-verify loop
An uncontaminated sample eventually yields a model supported by a large consensus set.
Source: R. Raguram

RANSAC for line fitting
Repeat N times:
1. Draw s points uniformly at random
2. Fit a line to these s points
3. Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
4. If there are d or more inliers, accept the line and refit using all inliers
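
A compact NumPy sketch of this loop for lines (s = 2); the parameter defaults are illustrative, and the final refit reuses the fit_line_tls helper sketched earlier.

import numpy as np

def ransac_line(x, y, n_iters=200, t=0.5, d_min=20, rng=None):
    """RANSAC line fitting: returns (a, b, d) for a*x + b*y = d, plus an inlier mask."""
    rng = rng if rng is not None else np.random.default_rng()
    pts = np.column_stack([x, y])
    best_model, best_inliers = None, np.zeros(len(x), dtype=bool)
    for _ in range(n_iters):
        # 1. Draw the minimal subset for a line: two distinct points.
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        # 2. Hypothesize a model: unit normal perpendicular to the segment.
        n = np.array([y1 - y2, x2 - x1])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                                  # degenerate (coincident) sample
        a, b = n / norm
        d = a * x1 + b * y1
        # 3-4. Inliers: perpendicular distance below threshold t.
        inliers = np.abs(pts @ np.array([a, b]) - d) < t
        # Keep the largest consensus set with at least d_min inliers.
        if inliers.sum() >= d_min and inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b, d), inliers
    if best_model is not None:                        # refit using all inliers
        best_model = fit_line_tls(x[best_inliers], y[best_inliers])
    return best_model, best_inliers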

Choosing the parameters
– Initial number of points s: typically the minimum number needed to fit the model.
– Distance threshold t: choose t so that the probability of an inlier falling within it is p (e.g. 0.95). For zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ².
– Number of samples N: choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99), given outlier ratio e:
$(1 - (1-e)^s)^N = 1 - p \quad\Rightarrow\quad N = \frac{\log(1-p)}{\log\big(1 - (1-e)^s\big)}$
Values of N for p = 0.99 (rows: sample size s; columns: proportion of outliers e):
s    5%   10%  20%  25%  30%  40%  50%
2    2    3    5    6    7    11   17
3    3    4    7    9    11   19   35
4    3    5    9    13   17   34   72
5    4    6    12   17   26   57   146
6    4    7    16   24   37   97   293
7    4    8    20   33   54   163  588
8    5    9    26   44   78   272  1177
– Consensus set size d: should match the expected inlier ratio.
Source: M. Pollefeys

Adaptively determining the number of samples
The outlier ratio e is often unknown a priori, so pick the worst case, e.g. e = 0.5, and adapt if more inliers are found; e.g. 80% inliers would yield e = 0.2.
Adaptive procedure:
N = ∞, sample_count = 0
While N > sample_count:
– Choose a sample and count the number of inliers
– Set e = 1 − (number of inliers)/(total number of points)
– Recompute N from e: $N = \frac{\log(1-p)}{\log\big(1 - (1-e)^s\big)}$
– Increment sample_count by 1
Source: M. Pollefeys
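
The recomputation of N, as a small Python helper; the guards against degenerate inputs are my own additions.

import math

def ransac_num_samples(e, s, p=0.99):
    """Samples N needed so that, with probability p, at least one
    size-s sample is outlier-free, given outlier ratio e."""
    prob_clean = (1.0 - e) ** s          # P(a single sample contains no outliers)
    if prob_clean <= 0.0:
        return float("inf")              # all points are outliers: hopeless
    if prob_clean >= 1.0:
        return 1                         # no outliers: one sample suffices
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - prob_clean))

print(ransac_num_samples(e=0.5, s=2))    # 17, matching the table above
print(ransac_num_samples(e=0.2, s=2))    # 5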

RANSAC pros and cons
Pros:
– Simple and general
– Applicable to many different problems
– Often works well in practice
Cons:
– Lots of parameters to tune
– Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
– Can’t always get a good initialization of the model based on the minimum number of samples