Computer Graphics Recitation 11
2 The plan today: the least squares approach; general and polynomial fitting; linear systems of equations; local polynomial surface fitting.
3 Motivation: given data points P_i = (x_i, y_i), fit a function y = f(x) that is “close” to the points.
4 Motivation: local surface fitting to 3D points.
5 Line fitting: orthogonal-offset minimization (which we learned) for points P_i = (x_i, y_i).
6 Line fitting: the origin of the line is the center of mass of the points, and the direction v is the eigenvector corresponding to the largest eigenvalue of the 2×2 scatter matrix S. In 2D this requires solving a quadratic characteristic equation; in higher dimensions, higher-order equations…
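The scatter-matrix line fit can be sketched in a few lines of NumPy; the data points below are illustrative, not from the lecture:

```python
import numpy as np

# Hypothetical 2D points lying roughly along a line.
pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.9]])

centroid = pts.mean(axis=0)      # the fitted line passes through the center of mass
centered = pts - centroid
S = centered.T @ centered        # 2x2 scatter matrix

# For a symmetric matrix, eigh returns eigenvalues in ascending order,
# so the last eigenvector corresponds to the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(S)
direction = eigvecs[:, -1]       # direction v of the fitted line
```

The sign of `direction` is arbitrary (eigenvectors are only defined up to sign), which does not matter for a line.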
7 Line fitting: y-offset minimization for points P_i = (x_i, y_i).
8 Line fitting: find a line y = ax + b that minimizes Err(a, b) = Σ_i [y_i – (a·x_i + b)]². Err is quadratic in the unknown parameters a, b. Another option would be, for example, minimizing the sum of absolute offsets Σ_i |y_i – (a·x_i + b)| – but it is not differentiable and harder to minimize…
9 Line fitting – LS minimization: to find the optimal a, b we differentiate Err(a, b):
∂Err/∂a = Σ_i (–2x_i)[y_i – (a·x_i + b)] = 0
∂Err/∂b = Σ_i (–2)[y_i – (a·x_i + b)] = 0
10 Line fitting – LS minimization: we get two linear equations for a, b:
Σ_i (–2x_i)[y_i – (a·x_i + b)] = 0
Σ_i (–2)[y_i – (a·x_i + b)] = 0
11 Line fitting – LS minimization: we get two linear equations for a, b:
Σ_i [x_i y_i – a·x_i² – b·x_i] = 0
Σ_i [y_i – a·x_i – b] = 0
12 Line fitting – LS minimization: we get two linear equations for a, b:
(Σ_i x_i²)·a + (Σ_i x_i)·b = Σ_i x_i y_i
(Σ_i x_i)·a + (Σ_i 1)·b = Σ_i y_i      (note that Σ_i 1 = n)
13 Line fitting – LS minimization: solve for a, b using e.g. Gaussian elimination. Question: why is this solution a minimum of the error function Err(a, b) = Σ_i [y_i – (a·x_i + b)]²?
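The two normal equations for the line form a 2×2 system that can be solved directly; here is a minimal sketch on toy data sampled exactly from y = 2x + 1:

```python
import numpy as np

# Toy data generated from y = 2x + 1, so the fit should recover a = 2, b = 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Build the 2x2 system from the normal equations:
#   (Σ x_i²) a + (Σ x_i) b = Σ x_i y_i
#   (Σ x_i)  a + (n)     b = Σ y_i
M = np.array([[np.sum(x * x), np.sum(x)],
              [np.sum(x),     len(x)]])
rhs = np.array([np.sum(x * y), np.sum(y)])
a, b = np.linalg.solve(M, rhs)
```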
14 Fitting polynomials
15 Fitting polynomials: decide on the degree of the polynomial, k. We want to fit f(x) = a_k x^k + a_{k–1} x^{k–1} + … + a_1 x + a_0. Minimize Err(a_0, a_1, …, a_k) = Σ_i [y_i – (a_k x_i^k + a_{k–1} x_i^{k–1} + … + a_1 x_i + a_0)]². Differentiating with respect to each coefficient a_m gives ∂Err/∂a_m = Σ_i (–2x_i^m)[y_i – (a_k x_i^k + a_{k–1} x_i^{k–1} + … + a_0)] = 0.
16 Fitting polynomials: we get a linear system of k+1 equations in k+1 unknowns.
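In practice the (k+1)×(k+1) system need not be assembled by hand: stacking the monomials into a Vandermonde matrix and solving in the LS sense gives the same coefficients. A sketch on data sampled from a known quadratic:

```python
import numpy as np

# Sample data from y = x² - 3x + 2, so the degree-2 fit should recover it exactly.
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
y = x**2 - 3*x + 2

k = 2                                   # chosen polynomial degree
A = np.vander(x, k + 1)                 # columns: x², x, 1 (decreasing powers)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # [a_2, a_1, a_0]
```

`np.polyfit(x, y, k)` wraps the same computation.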
17 General parametric fitting: we can use this approach to fit any function f(x) specified by parameters a, b, c, …, as long as the expression f(x) depends linearly on the parameters a, b, c, …
18 General parametric fitting: we want to fit a function f_abc…(x) to data points (x_i, y_i). Define Err(a, b, c, …) = Σ_i [y_i – f_abc…(x_i)]², differentiate with respect to each parameter, and solve the resulting linear system.
19 General parametric fitting: f(x) can even be some crazy function of x, as long as it is linear in the parameters. In general: any linear combination of fixed basis functions, f(x) = α_1 f_1(x) + α_2 f_2(x) + … + α_k f_k(x).
20 Solving linear systems in LS sense: let's look at the problem a little differently. We have data points (x_i, y_i) and want the function f(x) to go through the points: y_i = f(x_i) for i = 1, …, n. Strict interpolation is in general not possible: for polynomials, n+1 points define a unique interpolation polynomial of degree n, so if we have 1000 points and want a cubic polynomial, we probably won't find one…
21 Solving linear systems in LS sense: we have an over-determined linear system, n > k:
f(x_1) = α_1 f_1(x_1) + α_2 f_2(x_1) + … + α_k f_k(x_1) = y_1
f(x_2) = α_1 f_1(x_2) + α_2 f_2(x_2) + … + α_k f_k(x_2) = y_2
…
f(x_n) = α_1 f_1(x_n) + α_2 f_2(x_n) + … + α_k f_k(x_n) = y_n
22 Solving linear systems in LS sense. In matrix form: Av = y, where A is the n×k matrix with entries A_ij = f_j(x_i), v = (α_1, …, α_k)^T is the vector of unknown coefficients, and y = (y_1, …, y_n)^T.
23 Solving linear systems in LS sense. In matrix form: Av = y.
24 Solving linear systems in LS sense: there are more constraints than variables, so in general no exact solution exists. We want to find an “approximate solution” – a v that minimizes ||Av – y||².
25 Finding the LS solution: v ∈ R^k, Av ∈ R^n. As we vary v, Av varies over the linear subspace of R^n spanned by the columns of A: Av = v_1 A_1 + v_2 A_2 + … + v_k A_k, where A_1, …, A_k are the columns of A.
26 Finding the LS solution: we want to find the Av closest to y. (Figure: y ∈ R^n and its closest point Av in the subspace spanned by the columns of A.)
27 Finding the LS solution: the vector Av closest to y satisfies (Av – y) ⊥ {subspace spanned by A's columns}, i.e. for every column A_i: ⟨A_i, Av – y⟩ = 0, so A_i^T(Av – y) = 0 for all i, hence A^T(Av – y) = 0, i.e. (A^T A)v = A^T y. These are called the normal equations.
28 Finding the LS solution: we got a square symmetric system (A^T A)v = A^T y (k×k). If A has full rank (the columns of A are linearly independent) then A^T A is invertible.
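A quick NumPy sketch of the normal equations on random data, checked against the library least-squares solver (which is the numerically safer choice in practice, since forming A^T A squares the condition number):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))   # over-determined: 10 equations, 3 unknowns
y = rng.standard_normal(10)

# Normal equations (A^T A) v = A^T y; valid when A has full column rank.
v_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Library LS solver, for comparison.
v_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
```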
29 Weighted least squares: sometimes the problem also attaches weights to the constraints: minimize Σ_i w_i [y_i – f(x_i)]², with w_i > 0.
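Weighted least squares reduces to ordinary least squares after scaling each row of the system by √w_i. A sketch with an illustrative outlier that is almost ignored by a tiny weight:

```python
import numpy as np

# Fit y = b + a*x; columns of A are [1, x]. The last point is an outlier.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 1.0, 2.0, 10.0])
w = np.array([1.0, 1.0, 1.0, 1e-6])   # nearly ignore the outlier

# Minimizing Σ w_i (A_i·v - y_i)² is ordinary LS on rows scaled by sqrt(w_i).
sw = np.sqrt(w)
v, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
```

The first three points lie exactly on y = x, so the weighted fit comes out close to intercept 0, slope 1.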
30 Local surface fitting to 3D points. Normals? Lighting? Upsampling?
31 Local surface fitting to 3D points: locally approximate the points by a polynomial surface.
32 Fitting a local polynomial: fit a local polynomial around a point P, over a reference plane with local axes X, Y, Z.
33 Fitting a local polynomial surface: compute a reference plane that fits the points close to P, and use the local basis defined by the normal to the plane (z along the normal, x and y in the plane).
34 Fitting a local polynomial surface: fit the polynomial z = p(x, y) = ax² + bxy + cy² + dx + ey + f.
37 Fitting a local polynomial surface: again, solve the system in LS sense:
a·x_1² + b·x_1y_1 + c·y_1² + d·x_1 + e·y_1 + f = z_1
a·x_2² + b·x_2y_2 + c·y_2² + d·x_2 + e·y_2 + f = z_2
…
a·x_n² + b·x_ny_n + c·y_n² + d·x_n + e·y_n + f = z_n
i.e. minimize Σ_i [z_i – p(x_i, y_i)]².
38 Fitting a local polynomial surface: it is also possible (and better) to add weights: minimize Σ_i w_i [z_i – p(x_i, y_i)]², w_i > 0, where the weights get smaller as the distance from the origin point grows.
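The weighted quadric fit can be sketched as follows; the sample points are generated from a known quadric, and the Gaussian weight bandwidth `h` is an assumed illustrative value, not something prescribed by the lecture:

```python
import numpy as np

# Illustrative points sampled exactly from z = 0.5x² + 0.2xy + 0.3y².
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(50, 2))
x, y = xy[:, 0], xy[:, 1]
z = 0.5*x**2 + 0.2*x*y + 0.3*y**2

# Design matrix for p(x,y) = ax² + bxy + cy² + dx + ey + f.
A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])

# Gaussian weights: points far from the origin (the point P in local
# coordinates) count less; h is an assumed bandwidth.
h = 0.5
w = np.exp(-(x**2 + y**2) / h**2)
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)   # [a,b,c,d,e,f]
```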
39 Geometry compression using relative coordinates. Given a mesh: connectivity, and geometry – the (x, y, z) coordinates of each vertex.
40 Geometry compression using relative coordinates: the geometry is large (compared to the connectivity), and (x, y, z) coordinates are hard to compress – they are floating-point numbers (we have to quantize) with little correlation.
41 Geometry compression using relative coordinates: represent each vertex by a relative coordinate vector – the difference between the vertex and the average of its neighbours.
42 Geometry compression using relative coordinates: we call these δ-coordinates: δ_i = v_i – (1/d_i) Σ_{j ∈ N(i)} v_j, where N(i) is the set of neighbours of vertex i and d_i is its degree.
43 Geometry compression using relative coordinates: when the mesh is smooth, the δ-coordinates are small, so they can be compressed better.
44 Geometry compression using relative coordinates. Matrix form for computing the δ-coordinates: δ(x) = Lx, where x is the vector of all vertex x-coordinates.
45 Geometry compression using relative coordinates: the same matrix L computes the δ-coordinates of y and z as well. L is called the Laplacian of the mesh.
46 Geometry compression using relative coordinates: how do we restore (x, y, z) from the δ-coordinates? We need to solve the linear system Lx = δ(x) (and similarly for y and z).
47 Geometry compression using relative coordinates. Lx = δ(x). But: L is singular, and δ(x) contains quantization error.
48 Geometry compression using relative coordinates. Lx = δ(x). Solution: choose some anchor vertices whose (x, y, z) positions are known (in addition to their δ-coordinates).
49 Geometry compression using relative coordinates: we add the anchor vertices to our linear system – the Laplacian L extended with extra rows that constrain the anchor vertices.
50 Geometry compression using relative coordinates: now we have more equations than unknowns – solve in the least squares sense!
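The whole reconstruction pipeline can be sketched on a tiny example; the path-shaped "mesh", the uniform Laplacian, and the choice of anchors are all illustrative assumptions:

```python
import numpy as np

# Tiny example: a path of 5 vertices; 'pos' is the x-coordinate of each vertex.
pos = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
n = len(pos)

# Uniform Laplacian: delta_i = v_i - average of the neighbours of i.
L = np.eye(n)
for i, nbrs in neighbors.items():
    for j in nbrs:
        L[i, j] -= 1.0 / len(nbrs)
delta = L @ pos                       # the delta-coordinates

# L is singular (constant vectors are in its null space), so append rows
# that pin down anchor vertices, then solve the stacked system in LS sense.
anchors = [0, 4]
anchor_rows = np.zeros((len(anchors), n))
for r, i in enumerate(anchors):
    anchor_rows[r, i] = 1.0
A = np.vstack([L, anchor_rows])
rhs = np.concatenate([delta, pos[anchors]])
recovered, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

Since the anchors remove the null space, the stacked system has full column rank and the original coordinates are recovered (here exactly, because no quantization error was added to `delta`).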
See you next time