Fitting
Choose a parametric object (or several objects) to represent a set of tokens. The most interesting case is when the criterion is not local: you cannot tell whether a set of points lies on a line by looking only at each point and its neighbour. Three main questions:
- What object represents this set of tokens best?
- Which of several objects gets which token?
- How many objects are there?
(You could read "line" for "object" here, or "circle", or "ellipse", or ...)
CS8690 Computer Vision, University of Missouri at Columbia
Hough transform: straight lines
Implementation:
1. The parameter space is discretised.
2. A counter is incremented at each cell that the lines pass through.
3. Peaks are detected.
Hough transform: straight lines
Problem: the parameter domain is unbounded; vertical lines require infinite slope a. Alternative representation: x cos θ + y sin θ = ρ. Each point adds a cosine curve in the (θ, ρ) parameter space.
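The (θ, ρ) voting scheme described above can be sketched as follows; this is an illustrative implementation (the array sizes and discretisation are arbitrary choices, not from the slides):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Vote for lines x*cos(theta) + y*sin(theta) = rho.

    Each point adds one cosine curve to the (theta, rho) accumulator;
    cells where many curves cross correspond to lines in the image.
    """
    pts = np.asarray(points, dtype=float)
    rho_max = np.hypot(pts[:, 0], pts[:, 1]).max()
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
        # map rho in [-rho_max, rho_max] onto a cell index
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rho_max

# 20 collinear points on y = x: every point votes for theta = 3*pi/4, rho = 0,
# so one cell collects all 20 votes while most other cells get only one
points = [(i, i) for i in range(20)]
acc, thetas, rho_max = hough_lines(points)
t, r = np.unravel_index(acc.argmax(), acc.shape)
```

Peak detection then amounts to finding cells whose count is a local maximum above some threshold; the cell-size difficulties discussed below apply directly to the choice of n_theta and n_rho.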
Figure 15.1, top half: tokens and votes. Note that most points in the vote array are very dark, because they get only one vote.
Hough transform: straight lines
[Images: Hough transform of a square; Hough transform of a circle.]
Hough transform: straight lines
Mechanics of the Hough transform
Construct an array representing θ, d. For each point, render the curve (θ, d) into this array, adding one at each cell.
Difficulties:
- How big should the cells be? (Too big, and we cannot distinguish between quite different lines; too small, and noise causes lines to be missed.)
- How many lines? Count the peaks in the Hough array.
- Who belongs to which line? Tag the votes.
Hardly ever satisfactory in practice, because problems with noise and cell size defeat it.
Figure 15.1, lower half: tokens and votes.
Figure 15.2: the main point is that lots of noise can lead to large peaks in the vote array.
Figure 15.3: the number of votes that the real line of 20 points gets, as noise increases.
Figure 15.4: as the noise increases in a picture without a line, the number of votes in the maximum cell goes up, too.
Hough Transform (Summary)
The HT is a voting algorithm: each point votes for all combinations of parameters that could have produced it if it were part of the target curve. The array of counters in parameter space can be regarded as a histogram. The final total of votes c(m) in the counter at coordinate m indicates the relative likelihood of the hypothesis "a curve with parameter set m exists in the image".
The HT can also be regarded as pattern matching: the class of curves identified by the parameter space is the class of patterns. The HT is more efficient than direct template matching, which compares all possible appearances of the pattern with the image.
Hough Transform Advantages
First, because all points are processed independently, it copes well with occlusion (provided the noise does not produce peaks as high as those created by the shortest true lines). Second, it is relatively robust to noise: spurious points are unlikely to contribute consistently to any single bin, and just generate background noise. Third, it detects multiple instances of a model in a single pass.
Line fitting can be maximum likelihood; but the choice of model is important
Standard least squares minimizes vertical distances to the line; total least squares minimizes perpendicular distances. (This is Figure 15.5; the algebra around that figure is not written out on a slide, but the point is well worth making.)
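The distinction can be made concrete in code. This sketch (my own illustration, not from the text) fits the same data both ways: standard least squares minimizes vertical offsets, while total least squares takes the line normal from the smallest singular vector of the centred data, minimizing perpendicular distances:

```python
import numpy as np

def fit_line_ls(x, y):
    """Standard least squares: minimize vertical offsets for y = a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def fit_line_tls(x, y):
    """Total least squares: minimize perpendicular distances.

    The line is a*x + b*y = d with a^2 + b^2 = 1; the normal (a, b) is the
    direction of smallest spread of the centred points, and the line passes
    through the centroid of the data.
    """
    mx, my = x.mean(), y.mean()
    centred = np.column_stack([x - mx, y - my])
    _, _, Vt = np.linalg.svd(centred)
    a, b = Vt[-1]               # right singular vector of smallest singular value
    return a, b, a * mx + b * my

x = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
y = 2 * x + 1 + 0.1 * rng.standard_normal(50)
slope, intercept = fit_line_ls(x, y)
a, b, d = fit_line_tls(x, y)    # implied slope is -a/b
```

For noise in y only, maximum likelihood corresponds to the first model; for isotropic noise in both coordinates, to the second; that is exactly why the choice of model matters.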
Who came from which line?
Assume we know how many lines there are; but which lines are they? This is easy if we know who came from which line. Three strategies:
- Incremental line fitting
- K-means
- Probabilistic (later!)
Incremental line fitting is a really useful idea; it is the easiest way to fit a set of lines to an ordered curve, and is often quite effective in practice.
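A minimal sketch of the idea (my own illustration; the tolerance and the error measure are arbitrary choices): walk along the ordered curve points, growing the current segment until its total-least-squares residual exceeds a tolerance, then start a new segment at the break point.

```python
import numpy as np

def incremental_line_fit(points, tol=1.0):
    """Fit a sequence of line segments to an ordered curve.

    Grow the current segment point by point; the smallest singular value of
    the centred segment is the square root of the summed squared
    perpendicular residuals, so when it exceeds tol we close the segment
    and restart at the previous point.
    """
    pts = np.asarray(points, dtype=float)
    segments, start = [], 0
    for end in range(2, len(pts) + 1):
        centred = pts[start:end] - pts[start:end].mean(axis=0)
        resid = np.linalg.svd(centred, compute_uv=False)[-1]
        if resid > tol:
            segments.append((start, end - 1))
            start = end - 1
    segments.append((start, len(pts)))
    return segments

# an L-shaped curve: along the x-axis, then up the vertical line x = 10
curve = [(i, 0) for i in range(11)] + [(10, j) for j in range(1, 11)]
segments = incremental_line_fit(curve, tol=1.0)
```

Real versions refit and back off when the test fails; this greedy version just illustrates the grow-and-split loop.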
Note that a little additional cleverness is required to avoid fitting lines that pass through only one point, etc.
Robustness
As we have seen, squared error can be a source of bias in the presence of noise points. One fix is EM; we'll do this shortly. Another is an M-estimator: square nearby, threshold far away. A third is RANSAC: search for good points.
Least squares fit to the red points.
Also a least squares fit. The problem is the single point on the right: the error for that point is so large that it drags the line away from the other points. (The total error is lower if those points are some way from the line, because then the outlying point is not so dramatically far away.)
Same problem as above.
Detail of the previous slide: the line is actually quite far from the points. (All this is Figure 15.7.)
M-estimators
Generally, minimize Σᵢ ρ(rᵢ; s), where rᵢ is the residual of the i-th point and ρ is a robust cost function with scale parameter s.
The issue here is that one wants to replace (distance)² with something that looks like (distance)² for small distances and is about constant for large distances. We use ρ(d) = d²/(d² + s²) for some parameter s; the curve is plotted for different values of s.
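With this ρ, the line can be fitted by iteratively reweighted least squares: each point gets the weight ρ'(d)/(2d) = s²/(d² + s²)², recomputed from the current residuals, so distant points are progressively switched off. A sketch assuming residuals measured in the y direction (the scale s and the data are illustrative):

```python
import numpy as np

def robust_fit(x, y, s=1.0, iters=20):
    """Fit y = a*x + b by IRLS with rho(d) = d^2 / (d^2 + s^2).

    The weight w(d) = s^2 / (d^2 + s^2)^2 equals rho'(d) / (2*d): it is
    near 1/s^2 for small residuals and vanishes for residuals much
    larger than s, so outliers barely influence the refit.
    """
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        (a, b), *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
        r = y - (a * x + b)
        w = s**2 / (r**2 + s**2) ** 2
    return a, b

# ten points exactly on y = x, plus one gross outlier at (5, 30) that
# would drag a plain least-squares line well away from the inliers
x = np.append(np.arange(10.0), 5.0)
y = np.append(np.arange(10.0), 30.0)
a, b = robust_fit(x, y, s=1.0)
```

As the next slides show, s must be matched to the noise scale: too small and every residual looks huge, too large and every residual looks the same.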
Fit to the earlier data with an appropriate choice of s.
Too small: here the parameter s is too small, and the error for every point looks like (distance)².
Too large: here the parameter s is too large, meaning that the error value is about constant for every point, and there is very little relationship between the line and the points.
RANSAC
Choose a small subset uniformly at random; fit to that; anything that is close to the result is signal, all others are noise; refit; do this many times and choose the best fit.
Issues:
- How many times? Often enough that we are likely to have found a good line.
- How big a subset? The smallest possible.
- What does "close" mean? It depends on the problem.
- What is a good line? One where the number of nearby points is so big that it is unlikely to be all outliers.
This algorithm contains concepts that are hugely useful; in my experience, everyone with some experience of applied problems who doesn't yet know RANSAC is almost immediately able to use it to simplify some problem they've seen.
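The steps above can be sketched directly (an illustrative implementation; the iteration count, threshold, and data are arbitrary choices): sample two points, count how many points lie within a distance threshold of the line through them, keep the largest consensus set, and refit to it.

```python
import numpy as np

def ransac_line(pts, n_iters=200, thresh=0.5, seed=0):
    """Fit a line to points contaminated by outliers.

    Repeatedly fit to a minimal 2-point sample, score it by the number of
    points within thresh of that line, then refit (total least squares) to
    the best consensus set. Returns the line normal, a point on the line
    (the consensus centroid), and the inlier mask.
    """
    pts = np.asarray(pts, dtype=float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        p, q = pts[rng.choice(len(pts), size=2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]]) / np.hypot(*d)   # unit normal of line p-q
        inliers = np.abs((pts - p) @ n) < thresh     # point-to-line distances
        if inliers.sum() > best.sum():
            best = inliers
    consensus = pts[best]
    centred = consensus - consensus.mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]           # refit: total least squares
    return normal, consensus.mean(axis=0), best

# 30 points near y = 2x plus 10 gross outliers well above the line
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 10.0, 30)
good = np.column_stack([xs, 2 * xs + 0.1 * rng.standard_normal(30)])
bad = np.column_stack([rng.uniform(0, 10, 10), rng.uniform(25, 40, 10)])
normal, point, mask = ransac_line(np.vstack([good, bad]))
```

The two design questions left open here, the value of thresh and of n_iters, are exactly the distance-threshold and sample-count questions addressed on the following slides.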
Distance threshold
Choose t so that the probability that an inlier lies within t is α (e.g. 0.95). Often chosen empirically. For zero-mean Gaussian noise with standard deviation σ, the squared point-to-model distance d² follows a χ²_m distribution, where m is the codimension of the model (dimension + codimension = dimension of the space).

Codimension   Model     t²
1             line, F   3.84 σ²
2             H, P      5.99 σ²
3             T         7.81 σ²
How many samples?
Choose N so that, with probability p, at least one random sample of s points is free from outliers; e.g. p = 0.99. For an outlier proportion e, this gives N = log(1 - p) / log(1 - (1 - e)^s).

        proportion of outliers e
s     5%   10%   20%   25%   30%   40%    50%
2      2    3     5     6     7    11     17
3      3    4     7     9    11    19     35
4      3    5     9    13    17    34     72
5      4    6    12    17    26    57    146
6      4    7    16    24    37    97    293
7      4    8    20    33    54   163    588
8      5    9    26    44    78   272   1177
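The table follows from solving (1 - (1 - e)^s)^N <= 1 - p for N; a small helper (illustrative) reproduces its entries:

```python
import math

def ransac_samples(p, e, s):
    """Smallest N such that, with probability at least p, some random
    s-point sample is free of outliers when a fraction e are outliers:
    solve (1 - (1 - e)**s)**N <= 1 - p for integer N.
    """
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# entries from the table above, for p = 0.99
n_line = ransac_samples(0.99, 0.5, 2)   # minimal sample for a line, 50% outliers
n_homog = ransac_samples(0.99, 0.5, 4)  # e.g. a homography H, 50% outliers
```

Note how fast N grows with the sample size s, which is why RANSAC uses the smallest possible subset.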
Acceptable consensus set?
Typically, terminate when the size of the consensus set reaches the expected number of inliers.
Fitting curves other than lines
In principle, this is an easy generalisation: the probability of obtaining a point, given a curve, is a negative exponential of squared distance. In practice, it is rather hard, because it is generally difficult to compute the distance between a point and a curve. More detail in the book; I typically draw on the whiteboard and wave my hands for this particular topic, rather than work from slides.