
1 Model Fitting. Computer Vision CS 143, Brown. James Hays, 10/03/11. Slides from Silvio Savarese, Svetlana Lazebnik, and Derek Hoiem

2 Fitting: find the parameters of a model that best fit the data. Alignment: find the parameters of the transformation that best align matched points.

3 Example: Computing vanishing points Slide from Silvio Savarese

4 Example: Estimating a homography H. Slide from Silvio Savarese

5 Example: Estimating the fundamental matrix that relates two views. Slide from Silvio Savarese

6 Example: fitting a 2D shape template. Slide from Silvio Savarese

7 Example: fitting a 3D object model Slide from Silvio Savarese

8 Critical issues: noisy data Slide from Silvio Savarese

9 Critical issues: intra-class variability. “All models are wrong, but some are useful.” (Box and Draper, 1979) Slide from Silvio Savarese

10 Critical issues: outliers. Slide from Silvio Savarese

11 Critical issues: missing data (occlusions) Slide from Silvio Savarese

12 Fitting and Alignment: Design challenges
– Design a suitable goodness of fit measure
  - Similarity should reflect application goals
  - Encode robustness to outliers and noise
– Design an optimization method
  - Avoid local optima
  - Find best parameters quickly
Slide from Derek Hoiem

13 Fitting and Alignment: Methods
Global optimization / Search for parameters:
– Least squares fit
– Robust least squares
– Iterative closest point (ICP)
Hypothesize and test:
– Generalized Hough transform
– RANSAC
Slide from Derek Hoiem

14 Simple example: Fitting a line Slide from Derek Hoiem

15 Least squares line fitting. Data: (x_1, y_1), …, (x_n, y_n). Line equation: y_i = m x_i + b. Find (m, b) to minimize E = Σ_i (y_i − m x_i − b)²; equivalently E = ||Ap − y||², with rows A_i = [x_i 1] and p = [m; b]. Matlab: p = A \ y; Modified from S. Lazebnik
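A minimal sketch of this fit in Matlab, assuming the data sit in column vectors x and y (names here are illustrative, not from the slide):

  A = [x, ones(size(x))];   % each row is [x_i 1]
  p = A \ y;                % the slide's one-liner: least-squares solution of A*p ~ y
  m = p(1);                 % slope
  b = p(2);                 % intercept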

16 Problem with “vertical” least squares: not rotation-invariant; fails completely for vertical lines. Slide from S. Lazebnik

17 Total least squares. If a² + b² = 1, then the distance between point (x_i, y_i) and the line ax + by + c = 0 is |a x_i + b y_i + c|. Unit normal: N = (a, b). Proof: http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html Slide modified from S. Lazebnik

18 Total least squares. If a² + b² = 1, then the distance between point (x_i, y_i) and the line ax + by + c = 0 is |a x_i + b y_i + c|. Find (a, b, c) to minimize the sum of squared perpendicular distances: E = Σ_i (a x_i + b y_i + c)². Unit normal: N = (a, b). Slide modified from S. Lazebnik

19 Total least squares. Find (a, b, c) to minimize the sum of squared perpendicular distances E = Σ_i (a x_i + b y_i + c)², subject to a² + b² = 1. Minimizing over c first gives c = −(a·x̄ + b·ȳ), where x̄, ȳ are the data means; the solution (a, b) is then the eigenvector corresponding to the smallest eigenvalue of AᵀA, where A has rows [x_i − x̄, y_i − ȳ]. See details on the Rayleigh quotient: http://en.wikipedia.org/wiki/Rayleigh_quotient Slide modified from S. Lazebnik
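A hedged Matlab sketch of this solution, assuming column vectors x and y; the centering step follows from minimizing over c first, as noted above:

  xbar = mean(x); ybar = mean(y);
  A = [x - xbar, y - ybar];     % rows [x_i - xbar, y_i - ybar]
  [V, D] = eig(A' * A);
  [~, k] = min(diag(D));        % index of the smallest eigenvalue
  a = V(1, k); b = V(2, k);     % unit normal N = (a, b)
  c = -(a * xbar + b * ybar);   % line: a*x + b*y + c = 0 passes through the centroid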

20 Recap: Two Common Optimization Problems
– Problem: minimize ||Ax − b||². Solution: x = (AᵀA)⁻¹Aᵀb, the pseudo-inverse (Matlab: x = A \ b).
– Problem: minimize xᵀAᵀAx subject to xᵀx = 1. Solution: x is the eigenvector of AᵀA with the smallest eigenvalue.

21 Least squares: Robustness to noise Least squares fit to the red points: Slides from Svetlana Lazebnik

22 Least squares: Robustness to noise Least squares fit with an outlier: Problem: squared error heavily penalizes outliers

23 Search / Least squares conclusions
Good:
– Clearly specified objective
– Optimization is easy (for least squares)
Bad:
– Not appropriate for non-convex objectives: may get stuck in local minima
– Sensitive to outliers: bad matches, extra points
– Doesn't allow you to get multiple good fits: detecting multiple objects, lines, etc.
Slide from Derek Hoiem

24 Robust least squares (to deal with outliers)
General approach: minimize Σ_i ρ(u_i(x_i, θ); σ), where u_i(x_i, θ) is the residual of the i-th point w.r.t. model parameters θ, and ρ is a robust function with scale parameter σ.
The robust function ρ favors a configuration with small residuals and gives a constant penalty for large residuals.
Slide from S. Savarese
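The slide's formula and plot for ρ are images and not in the transcript; one common choice with exactly the stated behavior (roughly quadratic for small residuals, saturating to a constant for large ones) is the Geman-McClure function, sketched here in Matlab as an assumption rather than the slide's definition:

  % Assumed robust function: ~ (u/sigma)^2 for small u, -> 1 for large u
  rho = @(u, sigma) u.^2 ./ (sigma^2 + u.^2);
  % Robust cost of a line (m, b) on data in column vectors x, y
  cost = @(m, b, sigma) sum(rho(y - (m*x + b), sigma));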

25 Choosing the scale: Just right The effect of the outlier is minimized

26 Choosing the scale: Too small. The error value is almost the same for every point and the fit is very poor.

27 Choosing the scale: Too large Behaves much the same as least squares

28 Robust estimation: Details. Robust fitting is a nonlinear optimization problem that must be solved iteratively. The least squares solution can be used for initialization. Adaptive choice of scale: approx. 1.5 times the median residual (F&P, Sec. 15.5.1).
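A hedged sketch of one such iteration, iteratively reweighted least squares (IRLS), using the least squares initialization and adaptive scale described above; the Geman-McClure weights are an assumption carried over from the slide-24 sketch:

  A = [x, ones(size(x))];
  p = A \ y;                                  % initialize from the least squares fit
  for iter = 1:20
      u = y - A * p;                          % residuals under current parameters
      sigma = max(1.5 * median(abs(u)), eps); % adaptive scale (F&P, Sec. 15.5.1)
      w = sigma^2 ./ (sigma^2 + u.^2).^2;     % down-weight large residuals (constant factor dropped)
      W = diag(w);
      p = (A' * W * A) \ (A' * W * y);        % weighted least squares update
  end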

29 Hypothesize and test
1. Propose parameters
– Try all possible
– Each point votes for all consistent parameters
– Repeatedly sample enough points to solve for parameters
2. Score the given parameters
– Number of consistent points, possibly weighted by distance
3. Choose from among the set of parameters
– Global or local maximum of scores
4. Possibly refine parameters using inliers

30 Hough transform
Given a set of points, find the curve or line that explains the data points best. Each point (x, y) in image space, with line model y = m x + b, maps to a line in Hough (parameter) space (m, b).
P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959
Slide from S. Savarese

31 Hough transform: worked example of image points (x, y) voting in (m, b) parameter space; the slide's numeric vote table is not recoverable from the transcript. Slide from S. Savarese

32 Hough transform
Issue: the parameter space [m, b] is unbounded… Use a polar representation for the parameter space: x cos θ + y sin θ = ρ, with bounded θ and ρ.
P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959
Slide from S. Savarese
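A minimal Matlab sketch of polar-form Hough voting, assuming edge points in column vectors x and y from an H-by-W image; the bin sizes are illustrative:

  thetas = (0:179) * pi / 180;              % quantized angles
  rhoMax = ceil(hypot(W, H));
  rhos = -rhoMax:rhoMax;                    % quantized signed distances
  acc = zeros(numel(rhos), numel(thetas));  % accumulator grid
  for i = 1:numel(x)
      for t = 1:numel(thetas)
          r = x(i)*cos(thetas(t)) + y(i)*sin(thetas(t));
          [~, rIdx] = min(abs(rhos - r));   % nearest rho bin
          acc(rIdx, t) = acc(rIdx, t) + 1;  % each point votes along its sinusoid
      end
  end
  [~, best] = max(acc(:));
  [rBest, tBest] = ind2sub(size(acc), best);  % strongest line: x cos(theta) + y sin(theta) = rho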

33 Hough transform: experiments with noisy data (panels: features and votes). Issue: grid size needs to be adjusted… Slide from S. Savarese

34 Generalized Hough transform
We want to find a template defined by its reference point (center c) and several distinct types of landmark points in a stable spatial configuration.

35 Generalized Hough transform
Template representation: for each type of landmark point, store all possible displacement vectors towards the center.

36 Generalized Hough transform
Detecting the template: for each feature in a new image, look up that feature type in the model and vote for the possible center locations associated with that type in the model.
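A hedged Matlab sketch of this voting step; the data layout is hypothetical: model{k} holds the stored displacement rows [dx dy] for landmark type k, and feats holds detected features as rows [x y type] in an H-by-W image:

  acc = zeros(H, W);                            % votes for candidate center locations
  for i = 1:size(feats, 1)
      d = model{feats(i, 3)};                   % displacements stored for this feature type
      for j = 1:size(d, 1)
          cx = round(feats(i, 1) + d(j, 1));
          cy = round(feats(i, 2) + d(j, 2));
          if cx >= 1 && cx <= W && cy >= 1 && cy <= H
              acc(cy, cx) = acc(cy, cx) + 1;    % vote for this candidate center
          end
      end
  end
  [~, best] = max(acc(:));
  [cy, cx] = ind2sub(size(acc), best);          % best-supported template center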

37 Application in recognition: index displacements by “visual codeword” (training image; visual codeword with displacement vectors). B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004

38 Application in recognition: index displacements by “visual codeword” (test image). B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004

39 Hough transform conclusions
Good:
– Robust to outliers: each point votes separately
– Fairly efficient (often faster than trying all sets of parameters)
– Provides multiple good fits
Bad:
– Some sensitivity to noise
– Bin size trades off between noise tolerance, precision, and speed/memory; can be hard to find the sweet spot
– Not suitable for more than a few parameters: grid size grows exponentially
Common applications:
– Line fitting (also circles, ellipses, etc.)
– Object instance recognition (parameters are affine transform)
– Object category recognition (parameters are position/scale)

40 RANSAC (RANdom SAmple Consensus), Fischler & Bolles, 1981: a learning technique to estimate the parameters of a model by random sampling of observed data. Source: Savarese

41 RANSAC (RANdom SAmple Consensus), Fischler & Bolles, 1981
Algorithm:
1. Sample (randomly) the number of points required to fit the model
2. Solve for model parameters using the samples
3. Score by the fraction of inliers within a preset threshold of the model
Repeat 1-3 until the best model is found with high confidence

42 RANSAC line fitting example: sample the # = 2 points required to fit a line, then follow steps 2-3 as on slide 41. Illustration by Savarese

43 RANSAC line fitting example (continued; algorithm steps as on slide 41)

44 RANSAC line fitting example (continued; algorithm steps as on slide 41)

45 RANSAC: steps 1-3 repeated until the best model is found with high confidence (see the sketch below)
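A minimal Matlab sketch of this loop for the line fitting example, assuming data in column vectors x, y, a distance threshold t, and a sample count N (both chosen as on the next slide):

  bestInliers = [];
  for k = 1:N
      i = randperm(numel(x), 2);                      % 1. sample the 2 points a line needs
      p = [x(i), ones(2, 1)] \ y(i);                  % 2. solve for (m, b) from the sample
      d = abs(p(1)*x + p(2) - y) / sqrt(p(1)^2 + 1);  %    point-to-line distances
      inliers = find(d < t);                          % 3. score by inliers within threshold
      if numel(inliers) > numel(bestInliers)
          bestInliers = inliers;                      % keep the best hypothesis so far
      end
  end
  A = [x(bestInliers), ones(numel(bestInliers), 1)];
  pBest = A \ y(bestInliers);                         % final refit on the inliers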

46 Choosing the parameters
Initial number of points s: typically the minimum number needed to fit the model.
Distance threshold t: choose t so the probability for an inlier is p (e.g. 0.95); for zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ².
Number of samples N: choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99), given outlier ratio e: N = log(1 − p) / log(1 − (1 − e)^s).

         proportion of outliers e
s      5%   10%   20%   25%   30%   40%    50%
2       2     3     5     6     7    11     17
3       3     4     7     9    11    19     35
4       3     5     9    13    17    34     72
5       4     6    12    17    26    57    146
6       4     7    16    24    37    97    293
7       4     8    20    33    54   163    588
8       5     9    26    44    78   272   1177

Source: M. Pollefeys
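The sample-count rule in Matlab, as a sanity check against the table (the example values are one cell of it):

  p = 0.99; e = 0.30; s = 2;                  % 30% outliers, 2-point samples
  N = ceil(log(1 - p) / log(1 - (1 - e)^s));  % N = 7, matching the table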

47 RANSAC conclusions
Good:
– Robust to outliers
– Applicable for a larger number of parameters than the Hough transform
– Parameters are easier to choose than for the Hough transform
Bad:
– Computational time grows quickly with the fraction of outliers and the number of parameters
– Not good for getting multiple fits
Common applications:
– Computing a homography (e.g., image stitching)
– Estimating the fundamental matrix (relating two views)

48 What if you want to align but have no prior matched pairs? Hough transform and RANSAC are not applicable.
Important applications:
– Medical imaging: match brain scans or contours
– Robotics: match point clouds
Slide from Derek Hoiem

49 Iterative Closest Points (ICP) Algorithm
Goal: estimate the transform between two dense sets of points.
1. Assign each point in {Set 1} to its nearest neighbor in {Set 2}
2. Estimate transformation parameters, e.g., least squares or robust least squares
3. Transform the points in {Set 1} using the estimated parameters
4. Repeat steps 1-3 until change is very small
Slide from Derek Hoiem
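A hedged Matlab sketch of these steps for a translation-only transform, assuming n-by-2 and m-by-2 point sets P and Q; nearest neighbors are found by brute force (implicit expansion needs Matlab R2016b+):

  t = [0, 0];
  for iter = 1:50
      P2 = P + t;                                           % 3. points under the current transform
      D = (P2(:,1) - Q(:,1)').^2 + (P2(:,2) - Q(:,2)').^2;  % squared distances, n-by-m
      [~, nn] = min(D, [], 2);                              % 1. nearest neighbor in Q for each point
      tNew = t + mean(Q(nn, :) - P2, 1);                    % 2. least-squares translation update
      if norm(tNew - t) < 1e-6, break; end                  % 4. stop when the change is very small
      t = tNew;
  end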

50 Example: solving for translation. Given matched points in {A} and {B}, estimate the translation of the object.

51 Example: solving for translation. Least squares solution for (t_x, t_y):
1. Write down the objective function
2. Derived solution: a) compute the derivative, b) compute the solution
3. Computational solution: a) write in the form Ax = b, b) solve using pseudo-inverse or eigenvalue decomposition
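A minimal Matlab sketch of the computational route (3), assuming the matched points are stacked as n-by-2 arrays PA and PB:

  n = size(PA, 1);
  M = repmat(eye(2), n, 1);        % each match contributes rows [1 0; 0 1]
  d = reshape((PB - PA)', [], 1);  % stacked residuals [dx1; dy1; dx2; dy2; ...]
  t = M \ d;                       % pseudo-inverse solution; equals mean(PB - PA)'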

52 Example: solving for translation. Problem: outliers. RANSAC solution for (t_x, t_y):
1. Sample a set of matching points (1 pair)
2. Solve for transformation parameters
3. Score parameters with number of inliers
4. Repeat steps 1-3 N times

53 Example: solving for translation. Problem: outliers, multiple objects, and/or many-to-one matches. Hough transform solution for (t_x, t_y), sketched below:
1. Initialize a grid of parameter values
2. Each matched pair casts a vote for consistent values
3. Find the parameters with the most votes
4. Solve using least squares with inliers
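A hedged Matlab sketch of this voting scheme, assuming matched n-by-2 arrays PA and PB and an illustrative parameter grid:

  txBins = -50:50; tyBins = -50:50;           % 1. assumed grid of parameter values
  acc = zeros(numel(tyBins), numel(txBins));
  d = PB - PA;                                %    each match proposes a (tx, ty)
  for i = 1:size(d, 1)
      [~, ix] = min(abs(txBins - d(i, 1)));   % nearest tx bin
      [~, iy] = min(abs(tyBins - d(i, 2)));   % nearest ty bin
      acc(iy, ix) = acc(iy, ix) + 1;          % 2. one vote per matched pair
  end
  [~, best] = max(acc(:));                    % 3. most-voted translation
  [iy, ix] = ind2sub(size(acc), best);
  tx = txBins(ix); ty = tyBins(iy);           % 4. then refit with least squares on inliers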

54 Example: solving for translation. Problem: no initial guesses for correspondence. ICP solution:
1. Find nearest neighbors for each point
2. Compute transform using matches
3. Move points using transform
4. Repeat steps 1-3 until convergence

