Face Transfer with Multilinear Models

Face Transfer with Multilinear Models
Daniel Vlasic & Jovan Popovic (CSAIL, MIT); Matthew Brand & Hanspeter Pfister (MERL)

Outline
- Introduction to Multilinear Models
- Multilinear Face Model
- Face Transfer

A = B x1 U(1) x2 U(2) = (U(2) (U(1) B)^T)^T
For example, taking
A = [X1 X2; Y1 Y2],  B = [X1, X1-X2; X1-Y1, (X1-Y1)-(X2-Y2)],  U(1) = U(2) = [1 0; 1 -1]:
Mode 1: U(1) B = [X1, X1-X2; Y1, Y1-Y2]
Mode 2: (U(2) (U(1) B)^T)^T = [X1 X2; Y1 Y2] = A
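The nested-transpose identity above is easy to sanity-check numerically. Below is a minimal NumPy sketch (not from the paper; the shapes are arbitrary) confirming that (U(2) (U(1) B)^T)^T equals the direct matrix form U(1) B U(2)^T in the 2-mode case:

```python
import numpy as np

# Minimal check of the 2-mode identity: B x1 U(1) x2 U(2) = (U(2)(U(1)B)^T)^T.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 4))    # core (a matrix in the 2-mode case)
U1 = rng.standard_normal((5, 3))   # mode-1 transformation
U2 = rng.standard_normal((6, 4))   # mode-2 transformation

A_nested = (U2 @ (U1 @ B).T).T     # nested-transpose form from the slide
A_direct = U1 @ B @ U2.T           # equivalent closed form for matrices
assert np.allclose(A_nested, A_direct)
```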

Linear Model
[Diagram: the I1 x I2 data matrix A factored as A = U(1) B U(2)^T, with U(1) of size I1 x J1, core B of size J1 x J2, and U(2) of size I2 x J2.]

Multilinear Model
Generalization of the linear model:
A = B x1 U(1) x2 U(2) x3 U(3) ... xn U(n) ... xN U(N)
where A is the data tensor, B is the core tensor, and each U(n) is an orthogonal transformation.

How to Multiply?
A = B x1 U(1) x2 U(2) x3 U(3) ... xn U(n) ...
The mode-n product B xn U(n) is computed as the matrix product U(n) * B(n), where B(n) is the mode-n flattening of B.
[Diagram: flattening of an I1 x I2 tensor B with entries X1, X2, Y1, Y2, illustrating A = B x1 U(1) x2 U(2).]

Tensor Flattening

Example
[Slide shows a numeric example: a third-order tensor A and its mode-1 flattening A(1); the matrix entries were not preserved in the transcript.]
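Libraries such as TensorLy implement flattening and mode-n products; here is a hedged, self-contained NumPy sketch of both (the column ordering of the flattening is one common convention, and the helper names are ours, not the paper's):

```python
import numpy as np

def unfold(T, n):
    """Mode-n flattening: move axis n to the front, then reshape to a matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape, then move axis 0 back to position n."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(T, U, n):
    """Compute T x_n U as the matrix product U @ T_(n), folded back to a tensor."""
    new_shape = list(T.shape)
    new_shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, new_shape)
```

For instance, mode_n_product(B, U1, 0) followed by mode_n_product(., U2, 1) reproduces the 2-mode result from the earlier sketch.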

Face in Multilinear Model
[Diagram: the data tensor of face scans, with vertices, identities, and expressions along its modes.]

Mathematically...
In data reduction, we use PCA as Y = e^T X. How does that relate to the SVD of the data tensor's mode space?
SVD: A = U S V^T, so A A^T = U S V^T (U S V^T)^T = U S V^T V S U^T = U S^2 U^T.
PCA: Cov(A) = A A^T / (n-1), and the eigendecomposition A A^T = e D e^T gives U = e.
So the left singular vectors of the SVD are exactly the principal components.
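The slide's equivalence is easy to verify numerically: the left singular vectors of A match the eigenvectors of A A^T up to sign, and the squared singular values match the eigenvalues. A small NumPy check (data shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 40))            # rows: dimensions, columns: samples

U, S, Vt = np.linalg.svd(A, full_matrices=False)
evals, evecs = np.linalg.eigh(A @ A.T)      # eigh returns ascending eigenvalues
evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder to match descending S

assert np.allclose(S**2, evals)             # S^2 equals the spectrum of A A^T
assert np.allclose(np.abs(U), np.abs(evecs))  # same vectors, up to sign
```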

SVD for the Multilinear Model
To find U(n), perform SVD on the mode-n space of the data tensor, i.e., on its mode-n flattening J(n).
This is not optimal, however, so the authors refine it with ALS (Alternating Least Squares).
Many SIAM papers address this topic; it is out of our scope.
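A hedged sketch of this N-mode SVD (HOSVD) initialization, reusing the unfold/mode_n_product helpers from the flattening sketch; the ALS refinement the authors apply on top of it is omitted:

```python
import numpy as np

def hosvd(A, ranks):
    """Truncated N-mode SVD: U(n) from the SVD of each mode-n flattening."""
    Us = []
    for n, r in enumerate(ranks):
        Un, _, _ = np.linalg.svd(unfold(A, n), full_matrices=False)
        Us.append(Un[:, :r])              # keep the top-r left singular vectors
    B = A
    for n, Un in enumerate(Us):
        B = mode_n_product(B, Un.T, n)    # core: project onto each mode's basis
    return B, Us

# Reconstruction: A_hat = B x1 U(1) ... xN U(N), i.e.
#   for n, Un in enumerate(Us): A_hat = mode_n_product(A_hat, Un, n)
```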

Mathematically… Again

Multilinear Face Model
- Bilinear model (3-mode): 30K vertices x 10 expressions x 15 identities
- Trilinear model (4-mode): the above plus 5 visemes
[Diagram: multilinear model of face geometry.]

Arbitrary Interpolation
f = M x2 w(2)
where f is the synthesized data (1 x n), M is the original data (m rows of data), and w(2) is a 1 x m weighting vector.

Interpolation in the Multilinear Model
f = M x2 w(2) x3 w(3) x4 w(4) ... xN w(N)
One weight vector per mode of the multilinear model of face geometry.
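A hedged sketch of this synthesis step using the helpers above; the mode sizes are illustrative stand-ins, not the paper's actual data, and the 1-based mode numbers on the slide map to 0-based NumPy axes:

```python
import numpy as np

n_vertices, n_ident, n_expr = 9000, 15, 10   # illustrative mode sizes
M = np.random.default_rng(2).standard_normal((n_vertices, n_ident, n_expr))

w_ident = np.full(n_ident, 1.0 / n_ident)    # even blend of identities
w_expr = np.zeros(n_expr)
w_expr[3] = 1.0                              # pick a single expression

f = mode_n_product(M, w_ident[None, :], 1)   # x2 w(2): contract identity mode
f = mode_n_product(f, w_expr[None, :], 2)    # x3 w(3): contract expression mode
f = f.reshape(n_vertices)                    # one synthesized face
```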

Missing Data
So far we have dealt with a perfect data set; in practice that is not the case.
Maximum A Posteriori (MAP) estimation failed, so they turn to Probabilistic Principal Component Analysis (PPCA).

Short Review of PPCA
t = Wx + μ + ε
where x ~ N(0, I) and ε is isotropic noise N(0, σ^2 I), so t ~ N(μ, WW^T + σ^2 I).
Given t, we want to estimate W and σ by maximizing the likelihood L = p(t) = Π_i p(t_i | W, σ).

Short Review of PPCA
Taking the log-likelihood, the maximum likelihood estimators (MLE) are:
W_ML = U_q (Λ_q - σ^2 I)^(1/2) R
σ^2_ML = 1/(d-q) Σ_{j=q+1..d} λ_j
where U_q holds the top q eigenvectors of the sample covariance, Λ_q the corresponding eigenvalues, and R is an arbitrary rotation.
(End of review)
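A hedged sketch of these closed-form estimators (taking the rotation R = I, with columns of T as samples):

```python
import numpy as np

def ppca_ml(T, q):
    """Closed-form PPCA: T is a d x n data matrix, q the latent dimension."""
    d, n = T.shape
    mu = T.mean(axis=1, keepdims=True)
    C = (T - mu) @ (T - mu).T / n                   # sample covariance
    evals, evecs = np.linalg.eigh(C)
    evals, evecs = evals[::-1], evecs[:, ::-1]      # descending order
    sigma2 = evals[q:].mean()                       # 1/(d-q) sum_{j>q} lambda_j
    W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)  # U_q (L_q - sigma^2 I)^1/2
    return W, sigma2, mu
```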

Probabilistic Face Model
t = Wx + μ + ε
Likelihood function: p(t | x, W)

Missing Data
T_j denotes the mode-j flattening of T, and J_j the mode-j flattening of
J = M x2 U(2) ... xj-1 U(j-1) xj+1 U(j+1) ... xn U(n),
i.e., the model multiplied along every mode except mode j.
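One way to use this flattening: with every mode but j fixed, the model is linear in the mode-j weights, so they can be recovered by least squares restricted to the observed entries. A hedged sketch with hypothetical shapes (not the paper's exact MAP formulation):

```python
import numpy as np

def solve_mode_weights(J_j, f, observed=None):
    """Solve f ~ J_j^T w_j in the least-squares sense.
    J_j: (K_j, D) mode-j flattening of the model contracted along all other
    modes; f: (D,) face data; observed: optional boolean mask over entries."""
    if observed is not None:
        J_j, f = J_j[:, observed], f[observed]   # drop the missing entries
    w_j, *_ = np.linalg.lstsq(J_j.T, f, rcond=None)
    return w_j
```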

Face Tracking
Kanade-Lucas-Tomasi (KLT) algorithm:
Z d = Z(p - p0) = e
Z(sR f_i + t - p0) = e, for vertex i
Z(sR M_(m,i) w_m + t - p0) = e
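Each of these tracking equations is a linear system Z d = e in the unknown displacement; a hedged one-step sketch of the solve (Z, e, p0 hypothetical):

```python
import numpy as np

def klt_step(Z, e, p0):
    """One KLT-style update: solve Z d = e for d, then move p0 by d."""
    d, *_ = np.linalg.lstsq(Z, e, rcond=None)
    return p0 + d
```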

Comparison

Result