RANSAC: Robust model estimation from data contaminated by outliers
Ondřej Chum

Fitting a Line
Least squares fit
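Not on the original slide: a minimal NumPy sketch of the least-squares fit, with variable names of my own choosing.

import numpy as np

# Noisy points roughly on the line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

# Least-squares fit of y = a*x + b
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"a = {a:.3f}, b = {b:.3f}")

A single gross outlier can pull this estimate arbitrarily far from the true line, which is what motivates RANSAC on the following slides.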

RANSAC
1. Select a sample of m points at random.
2. Calculate model parameters that fit the data in the sample.
3. Calculate the error function for each data point.
4. Select the data that support the current hypothesis.
5. Repeat sampling.
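As an illustration (not from the slides), a minimal sketch of this loop for fitting a line to 2D points; the function name, threshold, and iteration count are my own assumptions.

import numpy as np

def ransac_line(points, n_samples=1000, threshold=0.1, rng=None):
    # points: (N, 2) array; returns ((a, b), inlier_mask) for y = a*x + b
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_samples):
        # (1) select a sample of m = 2 points at random
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # degenerate sample (vertical line), skip
        # (2) model parameters that fit the sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # (3) error function for each data point (vertical residual, for brevity)
        errors = np.abs(points[:, 1] - (a * points[:, 0] + b))
        # (4) data that support the current hypothesis
        inliers = errors < threshold
        # (5) repeat sampling, keeping the hypothesis with the largest support
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers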

How Many Samples?
On average: the number of samples drawn before the first success is geometrically distributed, so the mean is
E(k) = 1 / P(good), where
P(good) = (I/N) · ((I − 1)/(N − 1)) · … · ((I − m + 1)/(N − m + 1))

N … number of data points
I … number of inliers
m … size of the sample

How Many Samples?
With confidence p:
P(good) = (I/N) · ((I − 1)/(N − 1)) · … · ((I − m + 1)/(N − m + 1))
P(bad) = 1 − P(good)
P(bad k times) = (1 − P(good))^k

N … number of data points
I … number of inliers
m … size of the sample

How Many Samples?
With confidence p:
P(bad k times) = (1 − P(good))^k ≤ 1 − p
k · log(1 − P(good)) ≤ log(1 − p)
k ≥ log(1 − p) / log(1 − P(good))
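The bound translates directly into code (a sketch; the function name is mine):

import math

def num_samples(p, N, I, m):
    # Smallest k with (1 - P(good))^k <= 1 - p,
    # where P(good) = prod_{j=0}^{m-1} (I - j) / (N - j)
    p_good = math.prod((I - j) / (N - j) for j in range(m))
    return math.ceil(math.log(1 - p) / math.log(1 - p_good))

# 95% confidence, 100 points, half of them inliers, sample size m = 2
print(num_samples(0.95, 100, 50, 2))   # -> 11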

How Many Samples?
[Table: required number of samples k as a function of the inlier ratio I/N [%] and the sample size m]
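A table of this shape can be regenerated from the bound above (my own sketch, assuming confidence p = 0.95 and the large-N approximation P(good) ≈ (I/N)^m):

import math

p = 0.95
print("I/N[%]  " + "  ".join(f"m={m}" for m in range(2, 9)))
for ratio in (0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2):
    ks = [math.ceil(math.log(1 - p) / math.log(1 - ratio ** m)) for m in range(2, 9)]
    print(f"{100 * ratio:5.0f}   " + "  ".join(str(k) for k in ks))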

RANSAC
For a sample size of m = 2 (line fitting):
k = log(1 − p) / log(1 − (I/N) · ((I − 1)/(N − 1)))

k … number of samples drawn
N … number of data points
I … number of inliers
p … confidence in the solution (0.95)

RANSAC [Fischler, Bolles ’81]
In: U = {x_i} … set of data points, |U| = N
    f … function computing model parameters p from a sample S ⊂ U
    ρ(p, x) … cost function for a single data point x
Out: p* … parameters of the model maximizing the cost function C*

k := 0
Repeat until P{better solution exists} < η (a function of C* and the number of steps k):
    k := k + 1
    I. Hypothesis
        (1) select a random sample S_k ⊂ U of size m
        (2) compute parameters p_k = f(S_k)
    II. Verification
        (3) compute cost C_k = Σ_{x ∈ U} ρ(p_k, x)
        (4) if C* < C_k then C* := C_k, p* := p_k
end
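A runnable sketch of this scheme (my own translation to Python; f, rho, and eta follow the slide's notation, and the inlier-ratio termination test is one standard instantiation of P{better solution exists}):

import math
import random

def ransac(U, f, rho, m, eta=0.05, max_k=100000):
    # U: data points; f: sample -> model parameters (or None if degenerate);
    # rho: (p, x) -> 1 if x supports model p, else 0; m: sample size;
    # eta: allowed probability that a better solution exists
    best_C, best_p = -1, None
    k, k_needed = 0, float("inf")
    while k < k_needed and k < max_k:
        k += 1
        # I. Hypothesis
        S = random.sample(U, m)        # (1) random sample of size m
        p = f(S)                       # (2) model parameters from the sample
        if p is None:
            continue
        # II. Verification
        C = sum(rho(p, x) for x in U)  # (3) cost = size of the support
        if C > best_C:                 # (4) keep the best model so far
            best_C, best_p = C, p
            w = best_C / len(U)        # inlier ratio estimate
            p_good = w ** m            # P(uncontaminated sample)
            if 0 < p_good < 1:
                # (1 - p_good)^k < eta  =>  k > log(eta) / log(1 - p_good)
                k_needed = math.log(eta) / math.log(1 - p_good)
    return best_p, best_C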

PROSAC – PROgressive SAmple Consensus
Not all correspondences are created equal: some are better than others.
Sample from the best candidates first.
[Figure: correspondences ordered by quality 1, 2, …, N − 2, N − 1, N; samples are drawn from the front of the ordering]

PROSAC Samples
[Figure: growing sampling pools … l − 1, l, l + 1, l + 2, …]
Draw T_l samples from (1 … l).
Draw T_{l+1} samples from (1 … l+1).
Samples from (1 … l+1) that are not samples from (1 … l) must contain the point l+1.
Hence: draw T_{l+1} − T_l samples of size m − 1 from (1 … l) and add the point l+1.
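A sketch of this sampling scheme (my own simplification: order, schedule, and prosac_samples are hypothetical names, and schedule stands in for the growth function T_l of the PROSAC paper, which is not reproduced here):

import numpy as np

def prosac_samples(order, m, schedule, rng=None):
    # order: point indices sorted best-first; schedule[l] = T_l, with
    # len(schedule) == len(order) + 1; yields samples of size m.
    # (The initial T_m samples from the top m points are omitted for brevity.)
    rng = np.random.default_rng(rng)
    n = len(order)
    for l in range(m, n):
        # T_{l+1} - T_l new samples: m - 1 points from the top l, plus point l+1
        for _ in range(schedule[l + 1] - schedule[l]):
            sample = rng.choice(order[:l], size=m - 1, replace=False)
            yield list(sample) + [order[l]]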