3D Geometry for Computer Graphics Class 3

Last week - eigendecomposition We want to learn how the matrix A works.

Last week - eigendecomposition If we look at arbitrary vectors, it doesn't tell us much.

Spectra and diagonalization If A is symmetric, the eigenvectors are orthogonal (and there's always an eigenbasis). A = UΛU^T, where the columns of U are the orthonormal eigenvectors and Λ = diag(λ1, …, λd); equivalently, Aui = λi ui.
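
As a minimal sketch of this decomposition (not part of the original slides; it assumes NumPy and uses illustrative names), numpy.linalg.eigh returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # a symmetric matrix

lam, U = np.linalg.eigh(A)          # eigenvalues (ascending) and orthonormal eigenvectors
Lambda = np.diag(lam)

# A = U Lambda U^T and A u_i = lambda_i u_i
assert np.allclose(U @ Lambda @ U.T, A)
for i in range(len(lam)):
    assert np.allclose(A @ U[:, i], lam[i] * U[:, i])

# U is orthogonal: U^T U = I
assert np.allclose(U.T @ U, np.eye(2))
```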

The plan for today First we'll see some applications of PCA (Principal Component Analysis), which uses spectral decomposition. Then we'll look at the theory, and then the assignment.

PCA – the general idea PCA finds an orthogonal basis that best represents a given data set: the sum of squared distances from the x' axis is minimized. (Figure: original axes x, y and rotated axes x', y'.)

PCA – the general idea PCA finds an orthogonal basis that best represents a given data set. In 3D, PCA finds a best-approximating plane (again, in terms of squared distances). (Figure: a 3D point set in the standard basis x, y, z.)

Application: finding a tight bounding box An axis-aligned bounding box agrees with the coordinate axes. (Figure: a 2D box bounded by minX, maxX, minY, maxY.)

Application: finding a tight bounding box Oriented bounding box: we find better axes! (Figure: the same point set with rotated axes x', y'.)

Application: finding a tight bounding box In 3D, the axis-aligned box is not the optimal bounding box. (Figure: an axis-aligned box in 3D around the point set.)

Application: finding a tight bounding box Oriented bounding box: we find better axes!

Usage of bounding boxes (bounding volumes)
- They serve as a very simple "approximation" of the object
- Fast collision detection, visibility queries
- Useful whenever we need to know the dimensions (size) of the object
- The models consist of thousands of polygons; to quickly test that they don't intersect, the bounding boxes are tested first (see the sketch below)
- Sometimes a hierarchy of bounding boxes is used
- The tighter the bounding box, the fewer "false alarms" we have
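
The slides don't give an implementation, but as a hedged illustration of why box tests are so cheap, here is a minimal axis-aligned overlap check in Python/NumPy; the function names and the random data are my own assumptions:

```python
import numpy as np

def aabb_of(points):
    """Axis-aligned bounding box of an (n, d) point array: (min corner, max corner)."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def aabb_overlap(box_a, box_b):
    """True if the two boxes (min, max) overlap along every axis."""
    min_a, max_a = box_a
    min_b, max_b = box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

# Usage: if the boxes do not overlap, the enclosed models cannot intersect.
box1 = aabb_of(np.random.rand(100, 3))
box2 = aabb_of(np.random.rand(100, 3) + 2.0)   # translated away from box1
print(aabb_overlap(box1, box2))                # False: no need to test polygons
```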

Scanned meshes

Point clouds Scanners give us raw point cloud data. How do we compute normals to shade the surface? (Figure: a surface point with its normal vector.)

Point clouds Local PCA: take the third eigenvector (the direction of smallest variance) as the normal.
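
A hedged sketch of this idea in Python/NumPy (not from the slides): fit a local PCA to the k nearest neighbours of a point and take the smallest-variance eigenvector as the normal. The brute-force neighbour search, the choice k = 10 and all names are illustrative assumptions:

```python
import numpy as np

def estimate_normal(points, query_index, k=10):
    pts = np.asarray(points, dtype=float)            # (n, 3) point cloud
    d2 = np.sum((pts - pts[query_index])**2, axis=1) # squared distances to the query point
    nbrs = pts[np.argsort(d2)[:k]]                   # k nearest neighbours (incl. the point)

    m = nbrs.mean(axis=0)                            # local center of mass
    Y = (nbrs - m).T                                 # 3 x k matrix of centered points
    S = Y @ Y.T                                      # local scatter matrix

    lam, V = np.linalg.eigh(S)                       # eigenvalues in ascending order
    return V[:, 0]                                   # smallest-variance direction = normal

# Example: points sampled near the z = 0 plane should give a normal close to +/- z.
cloud = np.random.rand(200, 3); cloud[:, 2] *= 0.01
print(estimate_normal(cloud, 0))
```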

Notations Denote our data points by x1, x2, …, xn ∈ R^d.

The origin of the new axes The origin is a zero-order approximation of our data set (a single point). It will be the center of mass: m = (1/n) Σi xi. It can be shown that the center of mass is the point minimizing the sum of squared distances to the data points: m = argmin_x Σi ||xi − x||².

Scatter matrix Denote yi = xi − m, i = 1, 2, …, n. The scatter matrix is S = Y Y^T, where Y is the d×n matrix with the yk as its columns (k = 1, 2, …, n).
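
A minimal sketch of this construction in Python/NumPy (the data and names are illustrative, not from the slides):

```python
import numpy as np

X = np.random.rand(100, 3)        # n = 100 points x_i in R^3, one per row
m = X.mean(axis=0)                # center of mass
Y = (X - m).T                     # d x n matrix with y_i = x_i - m as columns
S = Y @ Y.T                       # d x d scatter matrix

assert S.shape == (3, 3)
assert np.allclose(S, S.T)        # S is symmetric by construction
```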

Variance of projected points In a way, S measures the variance (= scatter) of the data in different directions. Let's look at a line L through the center of mass m, and project our points xi onto it. The variance of the projected points xi' is var(L) = (1/n) Σi ||xi' − m||². (Figure: the same point set projected onto different lines L, giving small or large variance.)

Variance of projected points Given a direction v, ||v|| = 1, the projection of xi onto the line L = m + vt is xi' = m + <xi − m, v> v = m + <yi, v> v. (Figure: the point xi, its projection xi', the direction v and the line L through m.)

Variance of projected points So, var(L) = (1/n) Σi ||xi' − m||² = (1/n) Σi <yi, v>² = (1/n) Σi v^T yi yi^T v = (1/n) v^T (Y Y^T) v = (1/n) <Sv, v>. The constant factor 1/n does not matter for finding the direction of maximal variance, so we can study <Sv, v>.
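
A quick numerical check of this identity (illustrative data, assuming NumPy): the variance of the projected coordinates <yi, v> equals <Sv, v>/n for any unit direction v:

```python
import numpy as np

X = np.random.rand(100, 3)
m = X.mean(axis=0)
Y = (X - m).T                       # d x n centered data
S = Y @ Y.T                         # scatter matrix

v = np.random.randn(3)
v /= np.linalg.norm(v)              # a random unit direction

proj = Y.T @ v                      # the coordinates <y_i, v> of the projected points
var_L = np.mean(proj**2)            # variance of the projected points (their mean is 0)
assert np.isclose(var_L, (v @ S @ v) / len(X))
```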

Directions of maximal variance So we have var(L) = (1/n) <Sv, v>: up to a constant factor, the variance along direction v is <Sv, v>. Theorem: Let f : {v ∈ R^d | ||v|| = 1} → R, f(v) = <Sv, v> (with S a symmetric matrix). Then the extrema of f are attained at the eigenvectors of S. So the eigenvectors of S are the directions of maximal/minimal variance!

Summary so far We take the centered data vectors y1, y2, …, yn ∈ R^d and construct the scatter matrix S = Y Y^T. S measures the variance of the data points in different directions. The eigenvectors of S are the directions of maximal variance.

Scatter matrix - eigendecomposition S is symmetric, so S has an eigendecomposition S = VΛV^T, where V = [v1 v2 … vd] holds the eigenvectors as columns and Λ = diag(λ1, λ2, …, λd). The eigenvectors form an orthogonal basis.
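
A sketch of this eigendecomposition in NumPy, with the eigenvectors sorted by decreasing eigenvalue and a brute-force check that the first eigenvector indeed maximizes the projected variance (data and names are illustrative assumptions):

```python
import numpy as np

X = np.random.rand(200, 3) * np.array([5.0, 1.0, 0.2])   # anisotropic point cloud
Y = (X - X.mean(axis=0)).T
S = Y @ Y.T

lam, V = np.linalg.eigh(S)            # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]         # sort descending: lambda1 >= lambda2 >= ... >= lambdad
lam, V = lam[order], V[:, order]

# Compare <Sv, v> for the top eigenvector against many random unit directions.
best = V[:, 0] @ S @ V[:, 0]
rand_dirs = np.random.randn(1000, 3)
rand_dirs /= np.linalg.norm(rand_dirs, axis=1, keepdims=True)
assert np.all(np.einsum('ij,jk,ik->i', rand_dirs, S, rand_dirs) <= best + 1e-9)
```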

Principal components Eigenvectors that correspond to large eigenvalues are the directions in which the data has strong components (= large variance). If the eigenvalues are all more or less the same, there is no preferable direction. Note: the eigenvalues are always non-negative; think why (see the technical remarks at the end).

Principal components If S looks like diag(λ1, λ2) with λ1 ≈ λ2, there is no preferable direction: any vector is an eigenvector. If S looks like diag(λ1, λ2) with λ2 close to zero, much smaller than λ1, there is a clear preferable direction: the eigenvector v1.

How to use what we got For finding the oriented bounding box, we simply compute the bounding box with respect to the axes defined by the eigenvectors v1, v2, v3. The origin is at the mean point m.
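
A hedged sketch of this recipe in Python/NumPy (illustrative names, not the slides' code): express the points in the eigenvector frame centered at m, take the axis-aligned box there, and map it back:

```python
import numpy as np

def oriented_bounding_box(points):
    X = np.asarray(points, dtype=float)       # (n, d) point set
    m = X.mean(axis=0)
    Y = (X - m).T
    S = Y @ Y.T
    lam, V = np.linalg.eigh(S)
    V = V[:, ::-1]                            # columns v1, v2, ... by decreasing eigenvalue

    coords = (X - m) @ V                      # coordinates of each point in the PCA frame
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    return m, V, lo, hi                       # center, axes, and extents along each axis

# The OBB corners (2^d of them) are m + V @ c for every corner c of the box [lo, hi].
m, axes, lo, hi = oriented_bounding_box(np.random.rand(100, 3))
print(hi - lo)                                # side lengths of the oriented box
```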

For approximation (Figure: a 2D point set with principal axes v1, v2; projecting the points onto the v1 axis gives a line segment.) This line segment approximates the original data set: the projected data set approximates the original data set.

For approximation In general dimension d, the eigenvalues are sorted in descending order: λ1 ≥ λ2 ≥ … ≥ λd. The eigenvectors are sorted accordingly. To get an approximation of dimension d' < d, we take the first d' eigenvectors and look at the subspace they span (d' = 1 is a line, d' = 2 is a plane, …).

For approximation To get an approximating set, we project the original data points onto the chosen subspace. Write xi = m + α1v1 + α2v2 + … + αd'vd' + … + αdvd. Projection: xi' = m + α1v1 + α2v2 + … + αd'vd' + 0·vd'+1 + … + 0·vd.
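
A sketch of this projection in Python/NumPy (the choice d' = 2 and the data are illustrative assumptions):

```python
import numpy as np

X = np.random.rand(200, 3) * np.array([5.0, 1.0, 0.05])  # nearly planar point cloud
m = X.mean(axis=0)
Y = (X - m).T
S = Y @ Y.T

lam, V = np.linalg.eigh(S)
V = V[:, ::-1]                     # v1, v2, ..., vd by decreasing eigenvalue

d_prime = 2
Vk = V[:, :d_prime]                # the d' leading eigenvectors
alphas = (X - m) @ Vk              # coefficients alpha_1 ... alpha_d' for each point
X_approx = m + alphas @ Vk.T       # projected points x_i' in the original space

print(np.mean(np.sum((X - X_approx)**2, axis=1)))  # small: only the dropped variance remains
```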

Good luck with Ex1

Technical remarks: λi ≥ 0, i = 1, …, d (such matrices are called positive semi-definite), so we can indeed sort the eigenvalues by their magnitude. Theorem: λi ≥ 0 for all i if and only if <Sv, v> ≥ 0 for all v. Proof: for S = Y Y^T we have <Sv, v> = v^T Y Y^T v = ||Y^T v||² ≥ 0 for every v, and for an eigenvector vi, λi = <Svi, vi> ≥ 0. Conversely, if all λi ≥ 0, writing v in the eigenbasis gives <Sv, v> = Σi λi <v, vi>² ≥ 0. Therefore λi ≥ 0 for all i if and only if <Sv, v> ≥ 0 for all v.
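
A quick numerical check of these remarks (illustrative data, assuming NumPy): <Sv, v> = ||Y^T v||² ≥ 0, so the eigenvalues of S are non-negative up to round-off:

```python
import numpy as np

X = np.random.rand(50, 3)
Y = (X - X.mean(axis=0)).T
S = Y @ Y.T

v = np.random.randn(3)
assert np.isclose(v @ S @ v, np.linalg.norm(Y.T @ v)**2)   # <Sv, v> = ||Y^T v||^2
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)             # eigenvalues >= 0 (up to round-off)
```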