PCA Extension By Jonash.

Presentation transcript:

PCA Extension By Jonash

Outline Robust PCA Generalized PCA Clustering points on a line Clustering lines on a plane Clustering hyperplanes in a space

Robust PCA Robust Principal Component Analysis for Computer Vision Fernando De la Torre, Michael J. Black CS, Brown University

PCA is Least-Square Fit

PCA is Least-Square Fit

Robust Statistics Recover the best fit for the majority of the data Detect and reject outliers

Robust PCA

Robust PCA

Robust PCA Training images

Robust PCA [figure: naïve PCA vs. simply rejecting outliers vs. robust PCA]

RPCA In traditional PCA we minimize Σ_{i=1}^{n} ||d_i − B B^T d_i||^2 = Σ_{i=1}^{n} ||d_i − B c_i||^2, where c_i = B^T d_i. EM PCA is the σ → 0 limit of the probabilistic model D = B C + noise of variance σ^2 I, alternating E-step C = (B^T B)^{-1} B^T D and M-step B = D C^T (C C^T)^{-1}.
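To make the alternating steps concrete, here is a minimal numpy sketch of EM-PCA under the least-squares view above (function and variable names such as `em_pca` are my own, not from the slides):

```python
# Minimal EM-PCA sketch: alternate the E-step and M-step shown above.
import numpy as np

def em_pca(D, k, n_iters=100, seed=0):
    """Fit a rank-k basis B so that D ≈ B C (D is d x n, mean-centered)."""
    rng = np.random.default_rng(seed)
    d, n = D.shape
    B = rng.standard_normal((d, k))               # random initial basis
    for _ in range(n_iters):
        C = np.linalg.solve(B.T @ B, B.T @ D)     # E-step: C = (B^T B)^-1 B^T D
        B = D @ C.T @ np.linalg.inv(C @ C.T)      # M-step: B = D C^T (C C^T)^-1
    Q, _ = np.linalg.qr(B)                        # orthonormalize; only the span matters
    return Q

# Usage: the recovered span matches the top-k left singular vectors of D.
rng = np.random.default_rng(1)
D = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 200))
D -= D.mean(axis=1, keepdims=True)
B = em_pca(D, k=3)
U = np.linalg.svd(D, full_matrices=False)[0][:, :3]
print(np.allclose(B @ B.T, U @ U.T, atol=1e-6))   # True: same subspace
```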

RPCA Xu and Yuille [1995] minimize Σ_{i=1}^{n} [ V_i ||d_i − B c_i||^2 + η (1 − V_i) ] over B, c_i and binary outlier indicators V_i ∈ {0, 1}. Hard to solve (mixed continuous and discrete optimization).

RPCA Gabriel and Zamir [1979] minimize Σ_{i=1}^{n} Σ_{p=1}^{d} w_{pi} (d_{pi} − b_p^T c_i)^2, i.e. a separate weight for every entry. Impractical in high dimensions. ("Low rank approximation of matrices by least squares with any choice of weights", 1979)
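To see why per-entry weighting is costly, here is a minimal sketch of alternating weighted least squares in the spirit of the Gabriel-Zamir criterion (this is not their algorithm; names such as `weighted_low_rank` and the small ridge term are my own assumptions):

```python
# Weighted low-rank sketch: minimize sum_i sum_p w_pi (d_pi - b_p^T c_i)^2
# by alternating weighted least-squares solves.
import numpy as np

def weighted_low_rank(D, W, k, n_iters=50, seed=0):
    """Approximate D (d x n) by B @ C with one non-negative weight per entry, W (d x n)."""
    rng = np.random.default_rng(seed)
    d, n = D.shape
    B = rng.standard_normal((d, k))
    C = np.zeros((k, n))
    ridge = 1e-9 * np.eye(k)                      # guards against singular solves
    for _ in range(n_iters):
        for i in range(n):                        # one weighted solve per column of C
            Wi = np.diag(W[:, i])
            C[:, i] = np.linalg.solve(B.T @ Wi @ B + ridge, B.T @ Wi @ D[:, i])
        for p in range(d):                        # one weighted solve per row of B
            Wp = np.diag(W[p, :])
            B[p, :] = np.linalg.solve(C @ Wp @ C.T + ridge, C @ Wp @ D[p, :])
    return B, C
```

Every sweep solves d + n separate k x k systems, one per row and per column, which is one way to read the remark that the approach is impractical in high dimensions.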

RPCA The idea is to use a robust error function ρ, e.g. the Geman-McClure function ρ(x, σ) = x^2 / (x^2 + σ^2), and minimize Σ_{i=1}^{n} Σ_{p=1}^{d} ρ(d_{pi} − μ_p − Σ_{j=1}^{k} b_{pj} c_{ji}, σ_p). The error is approximated by a local quadratic and minimized by gradient descent; the rest is heuristics.
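A minimal gradient-descent sketch of this robust formulation with the Geman-McClure ρ (step size, iteration count, and initialization are illustrative assumptions, not values from the paper):

```python
# Robust PCA sketch: gradient descent on sum_{p,i} rho(d_pi - mu_p - (B C)_pi, sigma).
import numpy as np

def geman_mcclure_grad(e, sigma):
    """Derivative of rho(e, sigma) = e^2 / (e^2 + sigma^2) with respect to e."""
    return 2.0 * e * sigma**2 / (e**2 + sigma**2) ** 2

def robust_pca_gd(D, k, sigma=1.0, lr=1e-2, n_iters=2000, seed=0):
    """Fit D ≈ mu + B C (D is d x n) by descending the robust error."""
    rng = np.random.default_rng(seed)
    d, n = D.shape
    mu = D.mean(axis=1, keepdims=True)
    B = 0.1 * rng.standard_normal((d, k))
    C = 0.1 * rng.standard_normal((k, n))
    for _ in range(n_iters):
        E = D - mu - B @ C                        # residuals
        Psi = geman_mcclure_grad(E, sigma)        # influence of each residual
        gB, gC = -Psi @ C.T, -B.T @ Psi           # gradients w.r.t. B and C
        gmu = -Psi.sum(axis=1, keepdims=True)     # gradient w.r.t. mu
        B, C, mu = B - lr * gB, C - lr * gC, mu - lr * gmu
    return mu, B, C
```

Outliers produce large residuals, where ρ saturates and its derivative is nearly zero, so they stop pulling on the fit; that is the point of the robust function.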

RPCA

Robust PCA - Experiment 256 training images (120x160) Obtain 20 RPCA basis vectors 3 hrs on a 900MHz Pentium III in Matlab

Outline Robust PCA Generalized PCA Clustering points on a line Clustering lines on a plane Clustering hyperplanes in a space

Generalized PCA Generalized Principal Component Analysis René Vidal, Yi Ma, Shankar Sastry UC Berkeley and UIUC

GPCA

GPCA Example 1

GPCA Example 2

GPCA Example 3

GPCA Goals: the number of subspaces and their dimensions; a basis for each subspace; the segmentation of the data

GPCA Idea: a union of subspaces is the zero set of certain polynomials (noise-free case)

Outline Robust PCA Generalized PCA Clustering points on a line Clustering lines on a plane Clustering hyperplanes in a space

GPCA 1D Case

GPCA 1D Case Cont’d

GPCA 1D Case Cont'd M_n = n + 1 unknowns. To have a unique solution (up to scale), rank(V_n) = n = M_n − 1.

GPCA 1D Example n = 2 groups: p_n(x) = (x − μ_1)(x − μ_2) = x^2 + c_1 x + c_2. No polynomial of degree 1 vanishes on all the data, while infinitely many of degree 3 do; factoring the degree-2 polynomial recovers the group centers.
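A minimal sketch of the whole 1-D pipeline from these slides (the rank test for n, the null space of the Vandermonde matrix V_n, and factoring the polynomial); the helper names and tolerance are my own:

```python
# 1-D GPCA sketch: cluster points on a line by fitting the vanishing
# polynomial p_n(x) = (x - mu_1)...(x - mu_n) and rooting it.
import numpy as np

def embed(x, j):
    """Rows [x^j, x^(j-1), ..., x, 1]: M_j = j + 1 monomials per point."""
    return np.vander(x, N=j + 1, increasing=False)

def cluster_points_on_line(x, n_max=5, tol=1e-6):
    # Determine n: the smallest degree at which rank(V_j) drops to M_j - 1 = j.
    for n in range(1, n_max + 1):
        Vn = embed(x, n)
        if np.linalg.matrix_rank(Vn, tol=tol) == n:
            break
    # Solve V_n c = 0: c is the right singular vector of the smallest singular value.
    c = np.linalg.svd(Vn)[2][-1]
    mus = np.sort(np.roots(c).real)               # the group values are the roots
    labels = np.argmin(np.abs(x[:, None] - mus[None, :]), axis=1)
    return mus, labels

# Usage: the noise-free n = 2 example.
x = np.array([1.0, 1.0, 1.0, 4.0, 4.0])
mus, labels = cluster_points_on_line(x)
print(mus)     # ~[1., 4.]
print(labels)  # [0 0 0 1 1]
```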

Outline Robust PCA Generalized PCA Clustering points on a line Clustering lines on a plane Clustering hyperplanes in a space

GPCA 2D Case L_j = { x = [x, y]^T : b_{j1} x + b_{j2} y = 0 }, so every data point satisfies (b_{11} x + b_{12} y = 0) or (b_{21} x + b_{22} y = 0) or …

GPCA 2D Case Cont'd (b_{11} x + b_{12} y = 0) or (b_{21} x + b_{22} y = 0) or … is equivalent to p_n(x) = (b_{11} x + b_{12} y) ⋯ (b_{n1} x + b_{n2} y) = Σ_{k=0}^{n} c_k x^{n−k} y^k = 0

GPCA 2D Case Cont'd Take n = 2 for example: p_2(x) = (b_{11} x + b_{12} y)(b_{21} x + b_{22} y), so ∇p_2(x) = (b_{21} x + b_{22} y) b_1 + (b_{11} x + b_{12} y) b_2, where b_j = [b_{j1}, b_{j2}]^T. If x ∈ L_1 then ∇p_2(x) ∼ b_1, otherwise ∇p_2(x) ∼ b_2.

GPCA 2D Case Cont'd Given one point y_j ∈ L_j on each line, the normal vector of L_j is b_j ∼ ∇p_n(y_j). Three steps: determine n as min{ j : rank(V_j) = j }; solve V_n c_n = 0 for c_n; find the normal vectors b_j from ∇p_n.
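A minimal sketch of these three steps for lines through the origin in the plane; `veronese2`, the rank tolerance, and the normal-matching threshold are my own assumptions:

```python
# 2-D GPCA sketch: cluster lines through the origin via the Veronese map,
# the null space of V_n, and the gradient of p_n.
import numpy as np

def veronese2(X, j):
    """Degree-j Veronese map of 2-D points: columns x^j, x^(j-1) y, ..., y^j."""
    x, y = X[:, 0], X[:, 1]
    return np.stack([x ** (j - k) * y ** k for k in range(j + 1)], axis=1)

def cluster_lines(X, n_max=4, tol=1e-6):
    # 1. Determine n as the smallest degree with rank(V_j) = M_j - 1 = j.
    for n in range(1, n_max + 1):
        Vn = veronese2(X, n)
        if np.linalg.matrix_rank(Vn, tol=tol) == n:
            break
    # 2. Solve V_n c = 0 for the coefficients of p_n.
    c = np.linalg.svd(Vn)[2][-1]
    # 3. The normal of each point's line is grad p_n evaluated at that point.
    x, y = X[:, 0], X[:, 1]
    dpdx, dpdy = np.zeros_like(x), np.zeros_like(y)
    for i in range(n + 1):
        if n - i > 0:
            dpdx += c[i] * (n - i) * x ** (n - i - 1) * y ** i
        if i > 0:
            dpdy += c[i] * i * x ** (n - i) * y ** (i - 1)
    normals = np.stack([dpdx, dpdy], axis=1)
    normals /= np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-12)
    # Group points whose normals agree up to sign.
    reps, labels = [], np.empty(len(X), dtype=int)
    for i, nrm in enumerate(normals):
        for j, r in enumerate(reps):
            if abs(nrm @ r) > 0.99:
                labels[i] = j
                break
        else:
            labels[i] = len(reps)
            reps.append(nrm)
    return labels, np.array(reps)

# Usage: points on the lines y = 0 and y = x.
X = np.array([[1, 0], [2, 0], [-1, 0], [1, 1], [2, 2], [-3, -3]], dtype=float)
labels, normals = cluster_lines(X)
print(labels)   # [0 0 0 1 1 1]
```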

Outline Robust PCA Generalized PCA Clustering points on a line Clustering lines on a plane Clustering hyperplanes in a space

GPCA Hyperplanes Still assume d_1 = … = d_n = d = D − 1, i.e. S_j = { x : b_j^T x = b_{j1} x_1 + b_{j2} x_2 + … + b_{jD} x_D = 0 }

GPCA Hyperplanes The Veronese embedding of degree n in D variables has M_n = C(D + n − 1, n) = C(D + n − 1, D − 1) monomials.
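A quick sanity check of this count (my own snippet, assuming we enumerate all monomials of exactly degree n in D variables):

```python
from itertools import combinations_with_replacement
from math import comb

D, n = 3, 2
count = sum(1 for _ in combinations_with_replacement(range(D), n))
print(count, comb(D + n - 1, n))   # 6 6
```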

GPCA Hyperplanes

GPCA Hyperplanes Since we know n, we can solve for the coefficients c_k; from c_k we get each b_k via ∇p_n(x). If we knew one point y_j on each S_j, finding b_j would be easy.

GPCA Hyperplanes To get one point y_j on each hyperplane S_j, consider a random line L = { t v + x_0 }. Obtain y_j by intersecting L with S_j: y_j = t_j v + x_0, where the t_j are the roots of p_n(t v + x_0).

GPCA Hyperplanes Summary: find n and solve V_n c = 0 for c; get the normal b_j of each S_j from ∇p_n evaluated at a point of S_j; get that point y_j (and hence the segmentation) by solving p_n(t_j v + x_0) = 0 and setting y_j = t_j v + x_0.
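Putting the last few slides together, here is a minimal sketch of the hyperplane case: enumerate the degree-n monomials, take c from the null space of the Veronese-embedded data, and find one point per hyperplane by rooting p_n(t v + x_0) along a random line. The monomial ordering and all helper names are assumptions for illustration, not the authors' code:

```python
# Hyperplane GPCA sketch: Veronese embedding in D variables plus the
# random-line trick for picking one point y_j on each hyperplane S_j.
from itertools import combinations_with_replacement
import numpy as np

def monomial_exponents(D, n):
    """Exponent vectors of all degree-n monomials in D variables (M_n of them)."""
    exps = []
    for combo in combinations_with_replacement(range(D), n):
        e = np.zeros(D, dtype=int)
        for idx in combo:
            e[idx] += 1
        exps.append(e)
    return np.array(exps)

def veronese(X, n):
    """Degree-n Veronese map of the rows of X (N x D) -> N x M_n."""
    exps = monomial_exponents(X.shape[1], n)
    return np.stack([np.prod(X ** e, axis=1) for e in exps], axis=1)

def points_on_each_hyperplane(c, exps, v, x0):
    """Real roots t_j of q(t) = p_n(t v + x0) give y_j = t_j v + x0, one per hyperplane."""
    q = np.poly1d([0.0])
    for coef, e in zip(c, exps):
        term = np.poly1d([coef])
        for k, ek in enumerate(e):
            term *= np.poly1d([v[k], x0[k]]) ** int(ek)   # (t v_k + x0_k)^e_k
        q += term
    ts = q.roots
    ts = np.real(ts[np.abs(np.imag(ts)) < 1e-8])          # keep the real roots
    return np.array([t * v + x0 for t in ts])

# Usage: two planes in R^3, with n = 2 assumed known.
rng = np.random.default_rng(0)

def sample_plane(b, m=30):
    P = rng.standard_normal((m, 3))
    b = b / np.linalg.norm(b)
    return P - np.outer(P @ b, b)                 # project onto the plane b^T x = 0

X = np.vstack([sample_plane(np.array([1.0, 0.0, 0.0])),
               sample_plane(np.array([0.0, 1.0, -1.0]))])
n = 2
exps = monomial_exponents(3, n)
c = np.linalg.svd(veronese(X, n))[2][-1]          # solve V_n c = 0
ys = points_on_each_hyperplane(c, exps, rng.standard_normal(3), rng.standard_normal(3))
print(ys)                                         # one point on each plane
```

The normals b_j would then come from the gradient of p_n evaluated at each y_j, exactly as in the 2-D sketch earlier.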

One More Thing

One More Thing Previously we assumed d_1 = … = d_n = D − 1. In general we cannot assume that. Please read Sections 4.2 & 4.3 on your own; they discuss how to recursively reduce the dimension.