
RANSAC
Robot & Vision Lab. Wanjoo Park (revised by Dong-eun Seo)

What is RANSAC?
Random Sample Consensus (RANSAC) is a paradigm for fitting a model to experimental data, introduced by Martin A. Fischler and Robert C. Bolles in Communications of the ACM, June 1981.

Interpretation involves two distinct activities.
First, there is the problem of finding the best match between the data and one of the available models (the classification problem).
Second, there is the problem of computing the best values for the free parameters of the selected model (the parameter estimation problem).

Two Types of Errors
Measurement error: the feature detector correctly identifies the feature but slightly miscalculates one of its parameters. Measurement errors generally follow a normal distribution, and therefore the smoothing (averaging) assumption is applicable to them.
Classification error: the feature detector incorrectly identifies a portion of an image as an occurrence of a feature. Classification errors are gross errors; they have a significantly larger effect than measurement errors and do not average out.

Gross Error

Data Set for the Experiment
Original data set: 12 points on the line y = x.
Zero-mean Gaussian noise with variance 1/2 is added to each point.
Two gross errors (outliers) are added.
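A minimal sketch of how such a data set might be generated (NumPy assumed; the exact point locations and outlier offsets are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 points on the line y = x, corrupted by zero-mean Gaussian noise
# with variance 1/2 (standard deviation sqrt(0.5)).
x = np.arange(1, 13, dtype=float)
y = x + rng.normal(0.0, np.sqrt(0.5), size=x.size)

# Replace two points with gross errors (illustrative outlier values).
y[3] += 8.0
y[9] -= 7.0
```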

Conventional Least Squares
For the line model $y = ax + b$, least squares minimizes the sum of squared residuals

$$E(a, b) = \sum_{i=1}^{n} \bigl( y_i - (a x_i + b) \bigr)^2 \quad \text{(Eq. 1)}$$

Setting $\partial E / \partial a = \partial E / \partial b = 0$ gives the closed-form solution

$$a = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2} \quad \text{(Eq. 2)}, \qquad b = \bar{y} - a \bar{x} \quad \text{(Eq. 3)}$$
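A short sketch of this fit, reusing `x` and `y` from the data-set snippet; the closed form is written out to mirror Eq. 2 and Eq. 3 (`np.polyfit(x, y, 1)` would give the same result):

```python
def least_squares_line(x, y):
    """Closed-form least-squares fit of y = a*x + b (Eq. 2 and Eq. 3)."""
    n = x.size
    a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
        (n * np.sum(x**2) - np.sum(x)**2)
    b = np.mean(y) - a * np.mean(x)
    return a, b

a_ls, b_ls = least_squares_line(x, y)  # pulled toward the two gross errors
```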

RANSAC – Algorithm
1. Randomly select a sample of s data points from S (the data set) and instantiate the model from this subset.
2. Determine the set of data points Si that are within a distance threshold t of the model. The set Si is the consensus set of the sample and defines the inliers of S.
3. If the size of Si (the number of inliers) is greater than some threshold T (the required size of the consensus set), re-estimate the model using all the points in Si and terminate.
4. If the size of Si is less than T, select a new subset and repeat the above.
5. After N trials, the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.
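A minimal Python sketch of this loop for the line-fitting experiment, reusing `x`, `y`, `rng`, and `least_squares_line` from the earlier snippets. Names like `ransac_line` are mine, not from the slides; the point-to-model distance is the perpendicular distance to the line.

```python
def line_from_points(p1, p2):
    """Line through two points as (a, b, c) with a*x + b*y + c = 0, unit normal."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = np.hypot(a, b)
    return a / norm, b / norm, c / norm

def ransac_line(x, y, t, T, n_trials):
    pts = np.column_stack([x, y])
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_trials):
        i, j = rng.choice(len(pts), size=2, replace=False)  # step 1
        a, b, c = line_from_points(pts[i], pts[j])
        d = np.abs(a * x + b * y + c)        # perpendicular distances
        inliers = d < t                      # step 2: consensus set S_i
        if inliers.sum() >= T:               # step 3: big enough, stop
            best_inliers = inliers
            break
        if inliers.sum() > best_inliers.sum():  # step 5: keep largest S_i
            best_inliers = inliers
    # Re-estimate the model using all points in the consensus set.
    return least_squares_line(x[best_inliers], y[best_inliers]), best_inliers

# Example call; t, T, and n_trials follow the values derived on later slides.
(a_r, b_r), inlier_mask = ransac_line(x, y, t=1.39, T=10, n_trials=4)
```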

Randomly Select a Sample Set
Two points are selected at random and the line equation through them is determined. With 12 data points there are C(12, 2) = 12 · 11 / 2 = 66 possible pairs.

Inlier and Outlier
Threshold: $t^2 = F_m^{-1}(\alpha)\,\sigma^2$, where $\alpha$ is the probability that a point is an inlier, for example $\alpha = 0.95$.
A point at distance $d$ from the model is an inlier if $d^2 < t^2$ and an outlier if $d^2 \ge t^2$.
The threshold is generally chosen empirically. Here, under the assumption that the noise is zero-mean Gaussian, $d^2$ is a sum of squared Gaussian variables, and is therefore modeled as following a chi-square distribution with $m$ degrees of freedom (the codimension of the model), whose cumulative distribution is $F_m$.
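Under that chi-square model the threshold can be computed directly; a sketch assuming SciPy, with the noise variance σ² = 0.5 taken from the data-set slide and `np` from the earlier snippets:

```python
from scipy.stats import chi2

sigma2 = 0.5                           # noise variance from the data-set slide
alpha = 0.95                           # desired inlier probability
m = 1                                  # codimension of a line in 2-D
t2 = chi2.ppf(alpha, df=m) * sigma2    # t^2 = F_m^{-1}(alpha) * sigma^2
t = np.sqrt(t2)                        # ≈ sqrt(3.84 * 0.5) ≈ 1.39
```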

How Many Samples? (1)
Let $w$ be the probability that any selected data point is an inlier, so $\epsilon = 1 - w$ is the probability that it is an outlier, and let $k$ be the number of random selections of $n$ points. The expected number of selections needed to obtain one all-inlier sample is $E(k) = w^{-n}$.

The standard deviation, $SD(k) = \sqrt{1 - w^n}\,/\,w^n$, is approximately equal to $E(k)$. In general, we would want to exceed $E(k)$ trials by one or two standard deviations before giving up; this means one might try two or three times the expected number of random selections.

How Many Samples? (2)
If we want to ensure with probability $z$ that at least one of our random selections is an error-free set of $n$ data points, at least $k$ selections are required, where

$$1 - z = (1 - w^n)^k, \qquad \text{so that} \qquad k = \frac{\log(1 - z)}{\log(1 - w^n)}$$

Examples of k for z = 0.99 (table in the original slides).
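A one-function sketch of this sample-count formula; the example values are ones I computed for the two experiments in these slides, not the original table:

```python
import math

def num_samples(w, n, z=0.99):
    """Minimum k such that P(at least one all-inlier sample) >= z."""
    return math.ceil(math.log(1 - z) / math.log(1 - w**n))

num_samples(w=10/12, n=2)  # line experiment: 10 inliers of 12 points -> 4
num_samples(w=0.85, n=3)   # quadratic experiment: w = 0.85, 3-point samples -> 5
```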

How Large Is an Acceptable Consensus Set?
A rule of thumb is to terminate if the size of the consensus set is similar to the number of inliers believed to be in the data set, given the assumed proportion of outliers. For the line experiment: T = (1 − ε) · n = (1 − 0.1667) × 12 = 10.

Result of Consensus set

The Final Result
Plot comparing the RANSAC fit with the conventional least-squares fit (figure in the original slides).

The Final Result: Second-Order Curve
Model: y = ax² + bx + c with true parameters a = 1, b = −20, c = 110.
The data are corrupted by zero-mean Gaussian noise with standard deviation 0.9, plus three gross errors.
Number of data points: 20; gross errors: 3, so ε = 3/20 = 0.15.
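A sketch of this second experiment's data, reusing `np` and `rng` from earlier; the true curve and noise level come from the slide, while the x locations and outlier offsets are illustrative:

```python
# 20 samples of y = x^2 - 20x + 110 with zero-mean, 0.9-SD Gaussian noise.
xq = np.linspace(4.0, 16.0, 20)
yq = xq**2 - 20 * xq + 110 + rng.normal(0.0, 0.9, size=xq.size)

# Three gross errors (illustrative indices and offsets).
yq[[2, 9, 15]] += np.array([12.0, -10.0, 15.0])

# RANSAC here samples 3 points per trial (a quadratic needs 3 points),
# fitting a, b, c with e.g. np.polyfit(x_sample, y_sample, deg=2).
```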

Final Estimation of a, b, c
E(k) = 2.0, so running twice the expected number of selections gives k = 2.0 × 2 = 4 trials.
(Table in the original slides: the estimated a, b, c and the number of inliers for each trial; the trial with the maximum consensus set gives the final estimate of a, b, c.)

Plot (figure in the original slides): real curve, estimated curve, and measured data.

Thank you for your attention.
Practice makes perfect. Only one step at a time.
Robot & Vision Lab. Wanjoo Park (revised by Dong-eun Seo)