Fitting.

Fitting
Choose a parametric object (or several objects) to represent a set of tokens. The most interesting case is when the criterion is not local: you can't tell whether a set of points lies on a line by looking only at each point and the next.
Three main questions:
- What object represents this set of tokens best?
- Which of several objects gets which token?
- How many objects are there?
(You could read "line" for "object" here, or circle, or ellipse, or ...)

Hough transform: straight lines
Implementation:
1. The parameter space is discretised.
2. A counter is incremented at each cell that the line passes through.
3. Peaks are detected.

Hough transform: straight lines
Problem: the parameter domain is unbounded; in the slope-intercept form y = a x + b, vertical lines require an infinite a.
Alternative representation: x cos(θ) + y sin(θ) = ρ. Each point (x, y) then adds a sinusoidal curve in the (θ, ρ) parameter space.
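The voting step can be written in a few lines. A minimal sketch, assuming NumPy and a list of edge-point coordinates (the function name, bin counts, and range handling are illustrative choices, not from the slides):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Vote each point into a discretised (theta, rho) accumulator."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.max(np.hypot(pts[:, 0], pts[:, 1]))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        # Each point contributes a sinusoid: rho = x cos(theta) + y sin(theta).
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), np.clip(idx, 0, n_rho - 1)] += 1
    return acc, thetas, rhos
```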

Figure 15.1, top half: tokens (left) and votes (right). Note that most points in the vote array are very dark, because they get only one vote.

Hough transform: straight lines. Example vote arrays for the points of a square and of a circle.

Hough transform : straight lines  CS8690 Computer Vision University of Missouri at Columbia

Mechanics of the Hough transform
Construct an array representing (θ, d). For each point, render the curve (θ, d) into this array, adding one at each cell it passes through.
Difficulties:
- How big should the cells be? (Too big, and we cannot distinguish between quite different lines; too small, and noise causes lines to be missed.)
- How many lines? Count the peaks in the Hough array.
- Who belongs to which line? Tag the votes.
Hardly ever satisfactory in practice, because problems with noise and cell size defeat it.
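Peak counting can be sketched as a local-maximum search over the accumulator. A minimal illustration, assuming the hough_lines accumulator above and SciPy (the vote threshold and neighbourhood size are arbitrary choices, not values from the slides):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def hough_peaks(acc, thetas, rhos, min_votes=20, nbhd=5):
    """Return (theta, rho) pairs for cells that are local maxima of the
    accumulator and have at least min_votes votes."""
    local_max = acc == maximum_filter(acc, size=nbhd)
    ti, ri = np.nonzero(local_max & (acc >= min_votes))
    return [(thetas[t], rhos[r]) for t, r in zip(ti, ri)]
```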

Figure 15.1, lower half: tokens and votes.

Figure 15.2: the main point is that lots of noise can lead to large peaks in the vote array.

Figure 15.3: the number of votes that the real line of 20 points gets as the noise increases.

Figure 15.4: as the noise increases in a picture without a line, the number of votes in the maximum cell goes up, too.

Hough Transform (Summary)
The HT is a voting algorithm: each point votes for all combinations of parameters that may have produced it if it were part of the target curve. The array of counters in parameter space can be regarded as a histogram. The final total of votes c(m) in the counter with coordinate m indicates the relative likelihood of the hypothesis "a curve with parameter set m exists in the image".

Hough Transform (Summary)
The HT can also be regarded as pattern matching: the class of curves identified by the parameter space is the class of patterns. The HT is more efficient than direct template matching, which compares all possible appearances of the pattern with the image.

Hough Transform Advantages
First, since all points are processed independently, it copes well with occlusion (as long as the noise does not result in peaks as high as those created by the shortest true lines). Second, it is relatively robust to noise: spurious points are unlikely to contribute consistently to any single bin and just generate background clutter. Third, it detects multiple instances of a model in a single pass.

Line fitting can be posed as maximum likelihood estimation, but the choice of model is important: standard least squares measures vertical distances from the points to the line, while total least squares measures perpendicular distances. (Figure 15.5 contrasts the two error models; the point is well worth making even without the algebra.)
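A small sketch of the two fits (my own illustration, assuming NumPy; not part of the original slides):

```python
import numpy as np

def fit_line_ls(points):
    """Standard least squares: y = a*x + b, minimising vertical distances."""
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b

def fit_line_tls(points):
    """Total least squares: the line through the centroid whose normal is the
    direction of smallest scatter, minimising perpendicular distances."""
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean)
    a, b = vt[-1]                      # unit normal (a, b)
    d = a * mean[0] + b * mean[1]      # line: a*x + b*y = d
    return a, b, d
```

Ordinary least squares treats x as exact and penalises only the error in y, so it behaves poorly for near-vertical lines; total least squares treats both coordinates symmetrically.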

Who came from which line?
Assume we know how many lines there are, but which lines are they? This would be easy if we knew which points came from which line.
Three strategies:
- Incremental line fitting
- K-means
- Probabilistic (later!)

Incremental line fitting is the easiest way to fit a set of lines to a curve, and it is often quite useful.
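A rough sketch of the idea, assuming NumPy and an ordered array of curve points (the residual threshold, minimum segment length, and the exact bookkeeping of segment indices are illustrative choices, not the lecture's algorithm in detail):

```python
import numpy as np

def incremental_line_fit(curve, max_residual=2.0, min_points=3):
    """Walk along an ordered curve, growing a line segment point by point and
    starting a new segment when the fit error becomes too large.
    Returns a list of (start, end) index pairs into `curve`."""
    segments, start = [], 0
    for end in range(min_points, len(curve) + 1):
        pts = curve[start:end]
        # Total-least-squares fit of the current segment.
        mean = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - mean)
        residuals = np.abs((pts - mean) @ vt[-1])
        if residuals.max() > max_residual and end - start > min_points:
            segments.append((start, end - 1))   # close the segment ...
            start = end - 1                     # ... and start a new one here
    segments.append((start, len(curve)))
    return segments
```

The sketch relies on the points being ordered along the curve, which is what makes the incremental approach so simple.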

A little additional cleverness is required to avoid fitting lines that pass through only one point, and similar degenerate cases.


Robustness
As we have seen, squared error can be a source of bias in the presence of noise points.
- One fix is EM (we'll do this shortly).
- Another is an M-estimator: square nearby, threshold far away.
- A third is RANSAC: search for good points.

Least squares fit to the red points.

Also a least squares fit. The problem is the single point on the right: the error for that point is so large that it drags the line away from the other points (the total error is lower if those points are some way from the line, because then the outlying point is not so dramatically far away).

Same problem as above.

Detail of the previous slide: the line is actually quite far from the points. (All of this is Figure 15.7.)

M-estimators
Generally, minimize sum_i ρ(r_i(x_i, θ); s), where r_i(x_i, θ) is the residual of the i-th data point with respect to the model parameters θ, and ρ is a robust cost function with scale parameter s.

Figure 15.8. The issue here is that one wants to replace (distance)^2 with something that looks like distance^2 for small distances and is roughly constant for large distances. We use ρ(d; s) = d^2/(d^2 + s^2) for some scale parameter s; the curve is plotted for several values of s.
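A sketch of how such a loss is used in practice (my own illustration, not from the slides): fitting a line by iteratively reweighted total least squares, with each point's weight derived from the loss d^2/(d^2 + s^2):

```python
import numpy as np

def robust_line_fit(points, s=1.0, n_iters=20):
    """Fit a line by iteratively reweighted total least squares using the
    loss rho(d) = d^2 / (d^2 + s^2) from the previous slide."""
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts))
    for _ in range(n_iters):
        mean = np.average(pts, axis=0, weights=w)
        centred = pts - mean
        scatter = (centred * w[:, None]).T @ centred
        _, eigvecs = np.linalg.eigh(scatter)
        normal = eigvecs[:, 0]          # direction of smallest weighted scatter
        d = centred @ normal            # signed point-to-line distances
        # Influence derived from the loss: w(d) proportional to s^2 / (d^2 + s^2)^2,
        # so distant points contribute almost nothing.
        w = s**2 / (d**2 + s**2) ** 2
    return normal, normal @ mean
```

Intuitively, if s is too large the loss behaves like plain squared error, while if s is too small the cost is nearly flat and the fit has little to do with the points.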


Fit to the earlier data with an appropriate choice of s.

Too small: here the parameter s is too small, so the error value is roughly constant for almost every point and there is very little relationship between the line and the points.

Too large: here the parameter s is too large, so the error for every point looks like distance^2 and the fit behaves much like ordinary least squares.


RANSAC
Choose a small subset of points uniformly at random, fit to that subset, declare anything close to the resulting fit to be signal and everything else noise, then refit using the signal points. Do this many times and choose the best result.
Issues:
- How many times? Often enough that we are likely to have a good line.
- How big a subset? The smallest possible.
- What does "close" mean? It depends on the problem.
- What is a good line? One where the number of nearby points is so large that it is unlikely to be all outliers.
This algorithm contains concepts that are hugely useful; in my experience, everyone with some experience of applied problems who doesn't yet know RANSAC is almost immediately able to use it to simplify some problem they've seen.
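A minimal sketch of RANSAC for line fitting, assuming NumPy (the iteration count and inlier threshold are illustrative defaults, not values from the lecture):

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=2.0, seed=None):
    """Sample two points, count inliers, keep the best hypothesis, then
    refit a total-least-squares line to its inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)

    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        direction = pts[j] - pts[i]
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue                            # degenerate sample, try again
        normal = np.array([-direction[1], direction[0]]) / norm
        dist = np.abs((pts - pts[i]) @ normal)  # point-to-line distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    # Refit using only the consensus set of the best hypothesis.
    good = pts[best_inliers]
    mean = good.mean(axis=0)
    _, _, vt = np.linalg.svd(good - mean)
    normal = vt[-1]
    return normal, normal @ mean, best_inliers
```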


Distance threshold
Choose t so that the probability that an inlier has distance below t is α (e.g. α = 0.95); often it is chosen empirically. For zero-mean Gaussian noise with standard deviation σ, d^2 follows a chi-square distribution with m degrees of freedom, where m is the codimension of the model (dimension + codimension = dimension of the space):

Codimension   Model     t^2
1             line, F   3.84 σ^2
2             H, P      5.99 σ^2
3             T         7.81 σ^2

How many samples?
Choose N so that, with probability p, at least one random sample of s points is free from outliers; that is, (1 - (1 - e)^s)^N = 1 - p, where e is the proportion of outliers. For example, with p = 0.99:

                 proportion of outliers e
s      5%    10%    20%    25%    30%    40%    50%
2       2     3      5      6      7     11     17
3       3     4      7      9     11     19     35
4       3     5      9     13     17     34     72
5       4     6     12     17     26     57    146
6       4     7     16     24     37     97    293
7       4     8     20     33     54    163    588
8       5     9     26     44     78    272   1177
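The table follows directly from solving (1 - (1 - e)^s)^N <= 1 - p for N; a small sketch that reproduces it:

```python
import math

def ransac_samples(p=0.99, outlier_ratio=0.5, sample_size=2):
    """Smallest N such that at least one all-inlier sample of the given size
    is drawn with probability p."""
    prob_good_sample = (1.0 - outlier_ratio) ** sample_size
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - prob_good_sample))

# ransac_samples(0.99, 0.5, 8) -> 1177, matching the last table entry.
```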

Acceptable consensus set? Typically, terminate when the inlier ratio reaches the expected ratio of inliers.

Fitting curves other than lines
In principle, an easy generalisation: the probability of obtaining a point, given a curve, is given by a negative exponential of the squared distance.
In practice, rather hard: it is generally difficult to compute the distance between a point and a curve.
More detail in the book; I typically draw on the whiteboard and wave my hands for this particular topic, rather than work from slides.