Robust Estimator. Student: 范育瑋. Advisor: 王聖智.


Robust Estimator Student: 范育瑋 Advisor: 王聖智

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

Introduction Objective: Robust fit of a model to a data set S which contains outliers.

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

LS Consider the data-generating process Yi = b0 + b1·Xi + ei, where the ei are independently and identically distributed N(0, σ²). If any outliers exist in the data, least squares performs poorly.
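As a minimal illustration of this sensitivity (synthetic data with hypothetical parameters, fitting a line with NumPy's least squares), a single gross outlier is enough to pull the estimate well away from the true coefficients:

```python
import numpy as np

# Hypothetical data: a line y = 2 + 3x with small Gaussian noise, plus one gross outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 + 3 * x + rng.normal(0, 0.05, size=x.size)
y[-1] += 10.0                                   # a single outlier

# Ordinary least squares: minimize the sum of squared residuals.
A = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
print(b0, b1)                                   # pulled away from (2, 3) by the one outlier
```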

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

LMS The method minimizes the median of the squared residuals rather than their sum, and tolerates the highest possible breakdown point of 50%.
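A minimal sketch of an LMS line fit, using the usual random-sampling search over two-point minimal subsets (the function name and trial count are illustrative, not from the slides):

```python
import numpy as np

def lms_line_fit(x, y, n_trials=500, rng=None):
    """Least Median of Squares line fit via random minimal subsets (a sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    best_params, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        b1 = (y[j] - y[i]) / (x[j] - x[i])             # slope from the two-point sample
        b0 = y[i] - b1 * x[i]                          # intercept
        med = np.median((y - (b0 + b1 * x)) ** 2)      # median of squared residuals
        if med < best_med:
            best_med, best_params = med, (b0, b1)
    return best_params
```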

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

Main idea

Algorithm
a. Randomly select a sample of s data points from S and instantiate the model from this subset.
b. Determine the set of data points Si that lie within a distance threshold t of the model.
c. If the size of Si (the number of inliers) is greater than a threshold T, re-estimate the model using all points in Si and terminate.
d. If the size of Si is less than T, select a new subset and repeat the steps above.
e. After N trials the largest consensus set Si is selected, and the model is re-estimated using all points in Si.
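A minimal sketch of steps a-e for fitting a 2D line (the threshold t, consensus size T, and trial count N below are illustrative placeholders, not values from the slides):

```python
import numpy as np

def ransac_line(x, y, t=0.05, T=None, N=1000, rng=None):
    """RANSAC line fit following steps a-e above (a sketch; t, T, N are illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    T = int(0.8 * len(x)) if T is None else T              # acceptable consensus size (rule of thumb)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(N):
        i, j = rng.choice(len(x), size=2, replace=False)   # (a) minimal sample: s = 2 for a line
        if x[i] == x[j]:
            continue
        b1 = (y[j] - y[i]) / (x[j] - x[i])
        b0 = y[i] - b1 * x[i]
        inliers = np.abs(y - (b0 + b1 * x)) < t            # (b) points within distance threshold t
        if inliers.sum() > best.sum():                     # (e) keep the largest consensus set
            best = inliers
        if inliers.sum() > T:                              # (c) consensus large enough: stop early
            break
    A = np.column_stack([np.ones(best.sum()), x[best]])    # re-estimate on the consensus set
    return np.linalg.lstsq(A, y[best], rcond=None)[0]
```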

Algorithm 1. What is the distance threshold? 2. How many samples? 3. How large is an acceptable consensus set?

What is the distance threshold? We would like to choose the distance threshold, t, such that with probability α a true inlier passes the test. Assume the measurement error is Gaussian with zero mean and standard deviation σ. In this case the squared point distance, d⊥², is a sum of squared Gaussian variables and follows a χ²m distribution with m degrees of freedom.

What is the distance threshold? Note: the probability that the value of such a random variable is less than k² is given by the cumulative chi-squared distribution. Choose α = 0.95.
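Assuming the standard construction (as in Hartley and Zisserman), the squared threshold is the inverse cumulative chi-squared value at α, scaled by σ²; σ and m below are placeholder values:

```python
from scipy.stats import chi2

sigma = 1.0    # assumed standard deviation of the measurement noise (placeholder)
alpha = 0.95   # probability that a true inlier passes the test
m = 2          # degrees of freedom of the squared distance (e.g. 2 for a point error)
t_squared = chi2.ppf(alpha, df=m) * sigma**2   # accept a point if d^2 < t_squared
print(t_squared)                               # about 5.99 * sigma^2 for m = 2
```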

How many samples? If w = proportion of inliers = 1 - ε, then:
Prob(a sample contains only inliers) = w^s
Prob(a sample contains at least one outlier) = 1 - w^s
Prob(all N samples contain an outlier) = (1 - w^s)^N
We want Prob(all N samples contain an outlier) < 1 - p, where p is usually chosen close to 1 (commonly 0.99).
(1 - w^s)^N < 1 - p, hence N > log(1 - p) / log(1 - w^s).
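A small helper that evaluates this bound (the example numbers are illustrative):

```python
import math

def num_samples(s, eps, p=0.99):
    """Number of RANSAC samples N so that, with probability p,
    at least one sample is outlier-free (eps = outlier fraction)."""
    w = 1.0 - eps
    return math.ceil(math.log(1 - p) / math.log(1 - w**s))

# e.g. a line fit (s = 2) with 30% outliers:
print(num_samples(2, 0.3))   # 7 samples suffice at p = 0.99
```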

How large is an acceptable consensus set? If we know the fraction of the data consisting of outliers, use a rule of thumb: T = (1 - ε)n. For example: n = 12 data points, probability of an outlier ε = 0.2, so T = (1 - 0.2) × 12 ≈ 10.

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

THE TWO-VIEW RELATIONS

Maximum Likelihood Estimation Assume the measurement error is Gaussian with zero mean and standard deviation σ. n: the number of correspondences. M: the appropriate two-view relation. D: the set of matches.

Maximum Likelihood Estimation

Modify the model: where γ is the mixing parameter and v is just a constant. Here it is assumed that the outlier distribution is uniform, with -v/2, …, v/2 being the pixel range within which outliers are expected to fall.
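The modified model itself is not reproduced in this transcript; written out as in Torr and Zisserman's MLESAC formulation, the error e of each correspondence is modelled as a mixture of a Gaussian inlier term and a uniform outlier term:

```latex
P(e) \;=\; \gamma \,\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right)
\;+\;(1-\gamma)\,\frac{1}{v}
```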

How to estimate γ 1. The initial estimate of γ is 1/2. 2. Estimate the expectation of the ηi from the current estimate of γ, where ηi = 1 if the i-th correspondence is an inlier and ηi = 0 if it is an outlier.

How to estimate γ pi is the likelihood of a datum given that it is an inlier and po is the likelihood of a datum given that it is an outlier. 3. Make a new estimate of γ. 4. Repeat steps 2 and 3 until convergence.
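The update formulas are missing from the transcript; in the standard EM scheme for this two-component mixture (writing zi for the expected value of ηi), steps 2 and 3 take the following form:

```latex
z_i \;=\; \frac{\gamma\, p_i}{\gamma\, p_i + (1-\gamma)\, p_o},
\qquad
\gamma \;\leftarrow\; \frac{1}{n}\sum_{i=1}^{n} z_i
```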

Outline Introduction LS - Least Squares LMS - Least Median of Squares RANSAC - Random Sample Consensus MLESAC - Maximum Likelihood Sample Consensus MINPRAN - Minimize the Probability of Randomness

MINPRAN The first technique that reliably tolerates more than 50% outliers without assuming a known inlier (noise) bound. It only assumes that the outliers are uniformly distributed within the dynamic range of the sensor.

MINPRAN

N: the number of points; the points are drawn from a uniform distribution of z values in the range Zmin to Zmax.
r: a distance from a curve φ.
k: the number of points that randomly fall within the band φ ± r.
Z0 = (Zmax - Zmin)/2.
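The randomness measure itself is not reproduced in this transcript; in Stewart's MINPRAN paper it is the probability that at least k of the N uniformly distributed points fall within distance r of the fit φ, i.e. a binomial tail with per-point probability r/Z0 (reconstructed here for reference, not quoted from the slide):

```latex
\mathcal{F}(\varphi, r, k) \;=\; \sum_{j=k}^{N} \binom{N}{j}\left(\frac{r}{Z_{0}}\right)^{j}\left(1-\frac{r}{Z_{0}}\right)^{N-j}
```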

MINPRAN

Algorithm
1. Assume p points are required to completely instantiate a fit.
2. Choose S distinct, but not necessarily disjoint, subsets of p points from the data and fit the model to each random subset, forming S hypothesized fits φ1, …, φS.
3. Select the fit φ* minimizing the randomness measure as the "best fit".
4. If its randomness value is below the threshold F0, then φ* is accepted as a correct fit.
5. A final least-squares fit involving the p + i* inliers to φ* produces an accurate estimate of the model parameters.
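A minimal sketch of this search for a 2D line, assuming the binomial-tail randomness measure sketched above (the subset count S and threshold F0 are placeholders; scipy's binom.sf gives the upper-tail probability of at least k successes out of N):

```python
import numpy as np
from scipy.stats import binom

def minpran_line(x, y, z0, S=500, F0=1e-3, rng=None):
    """MINPRAN-style line search (a sketch; F0 and S are illustrative values)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(x)
    best_fit, best_F, best_k = None, np.inf, 0
    for _ in range(S):
        i, j = rng.choice(N, size=2, replace=False)        # p = 2 points instantiate a line
        if x[i] == x[j]:
            continue
        b1 = (y[j] - y[i]) / (x[j] - x[i])
        b0 = y[i] - b1 * x[i]
        r = np.sort(np.abs(y - (b0 + b1 * x)))             # residuals, smallest first
        k = np.arange(1, N + 1)
        # randomness of having at least k points within distance r[k-1] of the fit
        F = binom.sf(k - 1, N, np.clip(r / z0, 0, 1))
        idx = np.argmin(F)
        if F[idx] < best_F:
            best_F, best_fit, best_k = F[idx], (b0, b1), idx + 1
    return (best_fit, best_k) if best_F < F0 else (None, 0)
```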

How many samples? If w = proportion of inliers = 1 - ε, then:
Prob(a sample contains only inliers) = w^p
Prob(a sample contains at least one outlier) = 1 - w^p
Prob(all S samples contain an outlier) = (1 - w^p)^S
We want Prob(all S samples contain an outlier) < 1 - P, where the confidence P is usually chosen close to 1 (commonly 0.99); here p is the number of points needed to instantiate a fit.
(1 - w^p)^S < 1 - P, hence S > log(1 - P) / log(1 - w^p).

Randomness Threshold F0 Choose a probability P0 and solve the equation; this yields the randomness threshold F0.

Randomness Threshold F0 Define the equation … There is a unique value …, which can be found by bisection search, such that … By Lemma 2, … if and only if … Requiring … for all i gives the constraints, where ci, 0 ≤ i < N, denotes the number of residuals in the range (fi, fi+1].

Randomness Threshold F0 Because the residuals are uniformly distributed, the probability that any particular residual lies in the range fi to fi+1 is …, where … The probability that ci particular residuals lie in the range fi to fi+1 is … Based on this, we can calculate the probability …
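The probabilities referred to above are missing from the transcript; under the stated uniform-residual assumption a plausible reconstruction (qi is an assumed symbol introduced here) is:

```latex
q_i \;=\; \frac{f_{i+1}-f_i}{Z_{0}},
\qquad
P(\text{$c_i$ particular residuals fall in } (f_i, f_{i+1}]) \;=\; q_i^{\,c_i}
```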

Randomness Threshold F0 To make the analysis feasible, we assume the S fits and their residuals are independent. So the probability that each of the S samples has … is just … The probability that at least one sample has a minimum less than … is … We can obtain F0, since we know N, S, and P0.