
1 WaldSAC – Optimal Randomised RANSAC
PROSAC – Progressive Sampling and Consensus
Jiří Matas and Ondřej Chum
Centre for Machine Perception, Czech Technical University, Prague

2 WaldSAC – Optimal Randomised RANSAC
References:
Matas, Chum: Optimal Randomised RANSAC. International Conference on Computer Vision, Beijing, 2005.
Chum, Matas: Randomized RANSAC with T(d,d) test. British Machine Vision Conference, Cardiff, 2002.

3 RANSAC – Time Complexity
Repeat k times (k is a function of h, I, N):
1. Hypothesis generation
   Select a sample of m data points
   Calculate parameters of the model(s)
2. Model verification
   Find the support (consensus set) by verifying all N data points

Total running time (in units of point verifications, with t_M the cost of computing a model): t = k (t_M + N)
I - the number of inliers
N - the number of data points
h - confidence in the solution
New York, RANSAC 25 @ CVPR06
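The hypothesise-and-verify loop above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the toy 2-D line-fitting model, and all parameter values are our own.

```python
import random

def ransac(points, fit, error, m, k, tol):
    """Plain RANSAC: k hypothesise-and-verify rounds; every hypothesis
    is verified against all N data points (the cost WaldSAC reduces)."""
    best_model, best_support = None, -1
    for _ in range(k):
        sample = random.sample(points, m)        # 1. hypothesis generation
        model = fit(sample)
        support = sum(error(model, p) <= tol     # 2. verification on all N points
                      for p in points)
        if support > best_support:
            best_model, best_support = model, support
    return best_model, best_support

# Toy demo: fit a 2-D line (sample size m = 2) to 20 points on y = 2x + 1
# contaminated with 5 outliers lying on a second line.
def fit_line(sample):
    (x1, y1), (x2, y2) = sample
    a, b = y2 - y1, x1 - x2                      # line: a*x + b*y + c = 0
    return a, b, -(a * x1 + b * y1)

def line_error(model, p):
    a, b, c = model
    return abs(a * p[0] + b * p[1] + c) / (a * a + b * b) ** 0.5

random.seed(0)
pts = [(x, 2 * x + 1) for x in range(20)] + [(x, 37.0 - 3 * x) for x in range(5)]
model, support = ransac(pts, fit_line, line_error, m=2, k=50, tol=1e-6)
print(support)  # 20: all inliers on y = 2x + 1 recovered
```

With an inlier fraction of 0.8, the probability that none of the 50 samples is all-inlier is about 0.36^50, i.e. negligible.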

4 RANSAC time complexity
k = log(1 - h) / log(1 - P)
where P is the probability of drawing an all-inlier sample,
P ≈ (I/N)^m
where m is the size of the sample.
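Plugging numbers into this formula shows how steeply k grows with the sample size m. The helper below is ours, added for illustration:

```python
import math

def ransac_samples(h, eps, m):
    """k = log(1 - h) / log(1 - P) samples reach, with confidence h,
    at least one all-inlier sample, where P ~= eps**m is the probability
    of drawing an all-inlier sample and eps = I/N is the inlier fraction."""
    P = eps ** m
    return math.ceil(math.log(1.0 - h) / math.log(1.0 - P))

# 50% inliers, 95% confidence:
print(ransac_samples(0.95, 0.5, 2))  # line fitting (m = 2): 11 samples
print(ransac_samples(0.95, 0.5, 7))  # epipolar geometry (m = 7): 382 samples
```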

5 Randomised RANSAC [Matas, Chum 02]
Repeat k/(1-a) times:
1. Hypothesis generation
2. Model pre-verification: T(d,d) test
   Verify d << N data points; reject the model if not all d data points are consistent with the model
3. Model verification
   Verify the rest of the data points

Time: t = k/(1-a) (t_M + V)
V - average number of data points verified per model
a - probability that a good model is rejected by the T(d,d) test
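A minimal sketch of the pre-verification step; the helper name and the consistency-predicate interface are ours, not from the slides:

```python
import random

def t_dd_passes(model, points, consistent, d=1):
    """T(d,d) pre-test: check d randomly chosen data points and reject
    the model unless all d of them are consistent with it. A good model
    with inlier fraction e survives with probability about e**d, so the
    outer loop must run roughly k/(1 - a) times, with a = 1 - e**d."""
    probe = random.sample(points, d)
    return all(consistent(model, p) for p in probe)

pts = list(range(100))
print(t_dd_passes(None, pts, lambda m, p: True))   # True: probe consistent
print(t_dd_passes(None, pts, lambda m, p: False))  # False: model rejected early
```

The design trade-off is visible directly: the pre-test saves verifications on bad models at the cost of sometimes discarding good ones, which is repaid by drawing more samples.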

6 Optimal Randomised Strategy
Model verification is sequential decision making:
Hg - hypothesis of a `good` model (≈ from an uncontaminated sample)
Hb - hypothesis of a `bad` model (≈ from a contaminated sample)
e - probability that a data point is consistent with a `good` model
d - probability that a data point is consistent with an arbitrary (`bad`) model
The optimal (fastest) test that ensures Hg is incorrectly rejected with probability at most a is the Sequential Probability Ratio Test (SPRT) [Wald 47].

7 SPRT [simplified from Wald 47]
Compute the likelihood ratio after each verified data point x_i:
λ_j = Π_{i=1..j} p(x_i | Hb) / p(x_i | Hg)
and reject the model as soon as λ_j > A.
Two important properties of the SPRT:
1. probability of rejecting a `good` model: a < 1/A
2. average number of verifications: V = C log(A)
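The sequential test can be sketched as follows. This is a simplified illustration under the two-hypothesis model above (Bernoulli consistency with probability e under Hg and d under Hb); the function name and interface are ours.

```python
import math

def sprt_verify(model, points, consistent, eps, delta, A):
    """SPRT verification (simplified from Wald '47). Accumulates the
    log-likelihood ratio over the data points one by one:
      consistent point:   add log(delta / eps)          # p(x|Hb)/p(x|Hg)
      inconsistent point: add log((1-delta)/(1-eps))
    and rejects the model as `bad` as soon as the ratio exceeds log(A).
    Returns (accepted, support, number_of_points_verified)."""
    log_lam, support, log_A = 0.0, 0, math.log(A)
    for i, p in enumerate(points, 1):
        if consistent(model, p):
            support += 1
            log_lam += math.log(delta / eps)
        else:
            log_lam += math.log((1.0 - delta) / (1.0 - eps))
        if log_lam > log_A:
            return False, support, i      # early rejection of a bad model
    return True, support, len(points)

# eps = 0.5 (inlier fraction of a good model), delta = 0.05, A = 100:
good = sprt_verify(None, [True] * 20, lambda m, p: p, 0.5, 0.05, 100.0)
bad = sprt_verify(None, [False] * 20, lambda m, p: p, 0.5, 0.05, 100.0)
print(good)  # (True, 20, 20)  - accepted after seeing all points
print(bad)   # (False, 0, 8)   - rejected after only 8 verifications
```

Note the asymmetry: a good model is never rejected early here, while a model with no support is discarded after 8 of the 20 points, which is exactly where the speed-up comes from.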

8 SPRT properties
1. Probability of rejecting a `good` model: a = 1/A

9 Time
Repeat k/(1-1/A) times:
1. Hypothesis generation
2. Model verification using the SPRT
Total time: t(A) = k/(1-1/A) (t_M + C log A)
In a sequential statistical decision problem, decision errors are traded off for time. These are two incomparable quantities, hence the constrained optimisation. In WaldSAC, decision errors cost time (more samples), so there is a single minimised quantity: the time t(A), a function of a single parameter A.

10 Optimal test (optimal A) given e and d
Optimal A* found by solving dt(A)/dA = 0.
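The stationary point can be located numerically. The time model below is our reconstruction assembled from the preceding slides (per-sample cost t_M + C log A, loop length k/(1 - 1/A)); it stands in for the exact expression in the paper, and the grid search stands in for solving dt(A)/dA = 0 in closed form.

```python
import math

def waldsac_time(A, t_M, C):
    """Reconstructed time model, up to the constant factor k: each sample
    costs t_M + V(A) verifications with V(A) = C*log(A), and the loop runs
    k / (1 - 1/A) times to keep the same confidence h."""
    return (t_M + C * math.log(A)) / (1.0 - 1.0 / A)

def optimal_A(t_M, C, step=0.1, hi=10000.0):
    """Coarse numerical stand-in for solving dt(A)/dA = 0 on a grid A > 1."""
    best, best_t = None, float("inf")
    A = 1.0 + step
    while A < hi:
        t = waldsac_time(A, t_M, C)
        if t < best_t:
            best, best_t = A, t
        A += step
    return best

# Example: computing a model costs as much as t_M = 200 point
# verifications, C = 1. The minimiser satisfies A - 1 - log(A) = t_M / C.
A_star = optimal_A(200.0, 1.0)
print(A_star)
```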

11 SPRT decision
[figure: evolution of the likelihood ratio for a `good` model and a `bad` model]

12 Exp. 1: Wide-baseline matching

13 Exp. 2 Narrow-baseline stereo

14 Randomised Verification in RANSAC: Conclusions
The same confidence h in the solution is reached faster (data dependent, ≈ 10x).
No change in the character of the algorithm - it was randomised anyway.
The optimal strategy is derived using Wald's theory for known e and d.
Results with e and d estimated during the course of RANSAC are not significantly different; the performance of the SPRT is insensitive to errors in the estimates.
d can be learnt; an initial estimate can be obtained by geometric considerations.
A lower bound on e is given by the best-so-far support.
Note that the properties of WaldSAC are quite different from preemptive RANSAC!

15 PROSAC - Progressive Sampling and Consensus
O. Chum and J. Matas. Matching with PROSAC - progressive sampling consensus. CVPR, 2005.

16 PROSAC: Progressive Sampling and Consensus
Idea: exploit the fact that not all correspondences are created equal.
In all (?) multi-view problems, the correspondences come from a totally ordered set (they are selected by thresholding a similarity function).
Challenge: organise the computation in a way that exploits the ordering, yet is as robust as RANSAC.
PROSAC properties:
- selection of correspondences and RANSAC are integrated (no need for a threshold on local similarity)
- same guarantees about the solution as RANSAC
- depending on the data, can be orders of magnitude faster

17 The PROSAC Algorithm
[algorithm listing shown as a figure in the original slides]
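Since the listing did not survive the transcript, here is a deliberately simplified sketch of PROSAC's core sampling schedule. The full algorithm grows the pool according to the growth function T_n and adds a stopping criterion; here the pool simply grows by one point per draw, which preserves the key idea, and all names are ours.

```python
import random

def prosac_samples(sorted_points, m, max_draws):
    """Simplified PROSAC sampling schedule. Correspondences are sorted by
    decreasing similarity score; samples are drawn from a progressively
    growing pool of the top-n correspondences, so early hypotheses come
    from the most promising matches. Once the pool covers all N points,
    the scheme degenerates to plain RANSAC over the whole set, which is
    why PROSAC retains RANSAC's guarantees."""
    n = m
    for _ in range(max_draws):
        yield random.sample(sorted_points[:n], m)
        if n < len(sorted_points):
            n += 1

# Quality-sorted indices 0 (best) .. 99 (worst); sample size m = 4.
draws = list(prosac_samples(list(range(100)), m=4, max_draws=10))
print(draws[0])  # the very first sample is exactly the top-4 correspondences
```

If the ordering is informative, an all-inlier sample tends to appear within the first few draws; if it is not, nothing is lost relative to uniform sampling.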

18 EG estimation experiment
The fraction of inliers in the top n correspondences:
[plot shown in the original slides]

19 Multiple motion experiment
Note: whether a correspondence is an inlier does not depend on the similarity of appearance!

20 Can the probability of a correspondence being correct be established?

21 PROSAC: Conclusions
In many cases, PROSAC draws only a few samples.
The information needed to switch from RANSAC to PROSAC - an ordering of the features - is typically available.
Note that the set of data points sampled to hypothesise models and the set used in verification need not be the same. This suggests a combination with WaldSAC, but it is not straightforward (e is no longer a constant).
The PROSAC experiments were carried out with a version of RANSAC with protection against degenerate configurations and local optimisation (effectively we ran LO-DEGEN-PROSAC). This is important, since the top correspondences often lie on the same plane.

22 END

