WaldSAC – Optimal Randomised RANSAC
PROSAC – Progressive Sampling and Consensus
Jiří Matas and Ondřej Chum
Centre for Machine Perception, Czech Technical University, Prague
RANSAC 25 @ CVPR06, New York

WaldSAC – Optimal Randomised RANSAC
References:
Matas, Chum: Optimal Randomised RANSAC, International Conference on Computer Vision, Beijing, 2005.
Chum, Matas: Randomized RANSAC with T_{d,d} test, British Machine Vision Conference, Cardiff, 2002.

RANSAC – Time Complexity
Repeat k times (k is a function of h, I, N):
1. Hypothesis generation
   - Select a sample of m data points
   - Calculate parameters of the model(s)
2. Model verification
   - Find the support (consensus set) by verifying all N data points

Total running time (in units of the time to verify a single data point):
t = k · (t_M + m_S · N)
I - the number of inliers
N - the number of data points
h - confidence in the solution
t_M - time to compute the model parameters from a sample
m_S - average number of models per sample
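
A minimal sketch of the hypothesise-and-verify loop just described, in Python; fit_model, point_error and threshold are hypothetical problem-specific helpers (e.g. for line fitting or epipolar geometry), not part of the original slides:

```python
import random

def ransac(points, m, k, fit_model, point_error, threshold):
    """Plain RANSAC: k rounds of hypothesise-and-verify.

    Every hypothesis is verified against all N points, so the total
    cost is on the order of k * N point evaluations, the quantity
    that randomised verification later reduces.
    """
    best_model, best_support = None, 0
    for _ in range(k):
        # 1. Hypothesis generation: minimal sample of m points.
        sample = random.sample(points, m)
        model = fit_model(sample)
        if model is None:          # degenerate sample, skip it
            continue
        # 2. Model verification: consensus over all N points.
        support = sum(point_error(model, p) < threshold for p in points)
        if support > best_support:
            best_model, best_support = model, support
    return best_model, best_support
```

Every hypothesis pays the full N-point verification cost here; the rest of the talk is about shrinking exactly that term.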

RANSAC time complexity
k = log(1 − h) / log(1 − P)
where P ≈ (I/N)^m is the probability of drawing an all-inlier sample and m is the size of the sample.
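
The sample count translates directly into a small helper; the function name and the example numbers below are illustrative only:

```python
import math

def num_samples(h, inlier_ratio, m):
    """Number of RANSAC samples k needed to reach confidence h.

    P = inlier_ratio**m approximates the probability that a random
    minimal sample of size m is all-inlier.
    """
    P = inlier_ratio ** m
    if P >= 1.0:
        return 1
    return math.ceil(math.log(1.0 - h) / math.log(1.0 - P))

# e.g. 7-point epipolar geometry, 30% inliers, 95% confidence:
# num_samples(0.95, 0.3, 7) -> 13698 samples
```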

Randomised RANSAC [Matas, Chum 02]
Repeat k/(1 − α) times:
1. Hypothesis generation
2. Model pre-verification: the T_{d,d} test
   - Verify d << N data points; reject the model if not all d points are consistent with it
3. Model verification
   - Verify the rest of the data points

Time: t = k/(1 − α) · (t_M + m_S · V)
V - average number of data points verified per model
α - probability that a good model is rejected by the T_{d,d} test
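
A sketch of the T_{d,d} pre-verification step, using the same hypothetical helpers as above; with d << N, most contaminated models are rejected after a few cheap checks:

```python
import random

def t_dd_test(model, points, d, point_error, threshold):
    """T_{d,d} pre-verification on d randomly chosen points.

    The model passes only if ALL d points are consistent with it,
    so most bad models are discarded after d cheap checks instead
    of a full pass over all N points.
    """
    probe = random.sample(points, d)
    return all(point_error(model, p) < threshold for p in probe)
```

A model is fully verified only if it passes the test; because a good model can also fail it (with probability α), the outer loop has to run k/(1 − α) times.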

Optimal Randomised Strategy
Model verification is sequential decision making between
H_g - hypothesis of a `good` model (≈ from an uncontaminated sample)
H_b - hypothesis of a `bad` model (≈ from a contaminated sample)
ε - probability of a data point being consistent with a `good` model
δ - probability of a data point being consistent with an arbitrary (`bad`) model
The optimal (fastest) test that keeps the probability of incorrectly rejecting H_g below α is the sequential probability ratio test (SPRT) [Wald 47].

SPRT [simplified from Wald 47]
After verifying each data point x_r, update the likelihood ratio
λ_j = ∏_{r=1..j} p(x_r | H_b) / p(x_r | H_g)
and reject the model as soon as λ_j > A.
Two important properties of SPRT:
- probability of rejecting a `good` model: α ≤ 1/A
- average number of verifications: V = C · log(A), where the constant C depends on ε and δ
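
A sketch of SPRT verification for the two-hypothesis model above. It assumes the per-point likelihoods implied by the slides: under H_g a point is consistent with probability ε, under H_b with probability δ, so each verified point multiplies λ by δ/ε or (1 − δ)/(1 − ε); the helper names are again hypothetical:

```python
def sprt_verify(model, points, eps, delta, A, point_error, threshold):
    """Sequential probability ratio test for model verification.

    Under H_g a point is consistent with probability eps, under H_b
    with probability delta (delta < eps).  The likelihood ratio
    lambda_j is updated after every point and the model is rejected
    as 'bad' as soon as it exceeds the decision threshold A.
    Returns (accepted, support, points_verified).
    """
    lam, support = 1.0, 0
    for j, p in enumerate(points, start=1):
        if point_error(model, p) < threshold:
            support += 1
            lam *= delta / eps                   # consistent point
        else:
            lam *= (1.0 - delta) / (1.0 - eps)   # inconsistent point
        if lam > A:
            return False, support, j             # early rejection
    return True, support, len(points)            # all N points verified
```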

SPRT properties
1. Probability of rejecting a `good` model: α = 1/A (Wald's bound α ≤ 1/A, holding with near equality for the simplified test)

WaldSAC
Repeat k/(1 − 1/A) times:
1. Hypothesis generation
2. Model verification using SPRT

Time: t(A) = k/(1 − 1/A) · (t_M + m_S · C · log(A))
In a sequential statistical decision problem, decision errors are traded off for time. These are two incomparable quantities, hence the constrained optimisation. In WaldSAC, decision errors cost time (more samples), so there is a single minimised quantity: the time t(A), a function of the single parameter A.

Optimal test (optimal A), given ε and δ
The optimal A* is found by solving dt(A)/dA = 0, a one-dimensional problem in the single parameter A.
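
Rather than solving the stationarity equation from the slide in closed form, a sketch can minimise the reconstructed cost t(A) numerically; the constant factor k drops out, and golden-section search assumes t(A) is unimodal on the bracket (which holds for t_M > 0):

```python
import math

def optimal_A(t_M, m_S, C, A_min=1.0 + 1e-9, A_max=1e7, iters=100):
    """Find A* minimising t(A) ~ (t_M + m_S*C*log(A)) * A/(A - 1).

    Small A: good models are often rejected, so the A/(A - 1) sample
    factor blows up.  Large A: verifications get long through the
    C*log(A) term.  The optimum balances the two effects.
    """
    def t(A):
        return (t_M + m_S * C * math.log(A)) * A / (A - 1.0)

    phi = (math.sqrt(5.0) - 1.0) / 2.0   # golden-section ratio
    lo, hi = A_min, A_max
    for _ in range(iters):
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if t(a) < t(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)
```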

SPRT decision
[Figure: evolution of the likelihood ratio λ_j as data points are verified; a model is rejected as `bad` once λ_j exceeds the threshold A, and accepted as `good` if all N points are verified without crossing it.]

Exp. 1: Wide-baseline matching
[Results figure]

Exp. 2: Narrow-baseline stereo
[Results figure]

Randomised Verification in RANSAC: Conclusions
- The same confidence h in the solution is reached faster (data dependent, ≈ 10x).
- No change in the character of the algorithm; it was randomised anyway.
- The optimal strategy is derived using Wald's theory for known ε and δ. Results with ε and δ estimated during the course of RANSAC are not significantly different; the performance of SPRT is insensitive to errors in the estimates.
- δ can be learnt; an initial estimate can be obtained by geometric considerations.
- A lower bound on ε is given by the best-so-far support.
- Note that the properties of WaldSAC are quite different from preemptive RANSAC!

PROSAC – Progressive Sampling and Consensus
O. Chum and J. Matas: Matching with PROSAC – Progressive Sample Consensus. CVPR, 2005.

PROSAC: Progressive Sampling and Consensus
Idea: exploit the fact that not all correspondences are created equal. In all (?) multi-view problems, the correspondences come from a totally ordered set: they are selected by thresholding a similarity function.
Challenge: organise the computation in a way that exploits the ordering, yet is as robust as RANSAC.
PROSAC properties:
- Selection of correspondences and RANSAC are integrated (no need for a threshold on local similarity).
- The same guarantees about the solution as RANSAC.
- Depending on the data, can be orders of magnitude faster.

The PROSAC Algorithm
[Algorithm figure: minimal samples are drawn from progressively larger subsets of the top-ranked correspondences, converging to uniform sampling over all N correspondences, i.e. to plain RANSAC.]
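
A simplified sketch of PROSAC's progressive sampling. The real algorithm derives a specific growth function g(t) that decides when to enlarge the sampling pool; here the pool simply grows by one every T_grow iterations, which conveys the idea but not the exact schedule or its guarantees:

```python
import random

def prosac_samples(points_by_quality, m, T_grow=10):
    """Generate PROSAC-style minimal samples of size m.

    points_by_quality must be sorted best-first.  While the pool of
    size n grows, each sample contains the n-th ranked point plus
    m - 1 points drawn from the n - 1 better-ranked ones; once the
    pool covers all points, sampling is plain uniform RANSAC.
    """
    N = len(points_by_quality)
    n, t = m, 0
    while True:
        t += 1
        if n < N and t % T_grow == 0:
            n += 1                                   # enlarge the pool
        if n < N:
            # the newest point is always included while the pool grows
            sample = [points_by_quality[n - 1]]
            sample += random.sample(points_by_quality[:n - 1], m - 1)
        else:
            sample = random.sample(points_by_quality, m)
        yield sample
```

Verification and the stopping criterion stay exactly as in RANSAC, which is why PROSAC keeps RANSAC's guarantees while trying the most promising correspondences first.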

EG estimation experiment
The fraction of inliers in the top n correspondences:
[Results figure]

Multiple motion experiment
Note: whether a correspondence is an inlier does not depend on the similarity of appearance!

Can the probability of a correspondence be established?

PROSAC: Conclusions
- In many cases, PROSAC draws only a few samples.
- The information needed to switch from RANSAC to PROSAC, an ordering of the features, is typically available.
- Note that the set of data points that is sampled to hypothesise models and the set used in verification need not be the same. This suggests a combination with WaldSAC, but it is not straightforward (ε is no longer a constant).
- The PROSAC experiments were carried out with a version of RANSAC with protection against degenerate configurations and with Local Optimisation (effectively we ran LO-DEGEN-PROSAC). This is important, since the top correspondences often lie on the same plane.

END