1
Face Transfer with Multilinear Models
Daniel Vlasic & Jovan Popovic (CSAIL, MIT); Matthew Brand & Hanspeter Pfister (MERL)
2
Outline
Introduction to Multilinear Model
Multilinear Face Model
Face Transfer
3
Tensor (mode) multiplication, written two equivalent ways:
A = B x1 U(1) x2 U(2) = (U(2) (U(1) B)^T)^T

2x2 example with U(1) = U(2) = [1 0; 1 -1]:
A = [X1 X2; Y1 Y2]
B = [X1, X1-X2; X1-Y1, (X1-Y1)-(X2-Y2)]

Mode-1 product first: U(1) B = [X1, X1-X2; Y1, Y1-Y2]
Mode-2 product next: (U(1) B) U(2)^T = [X1 X2; Y1 Y2] = A
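A quick numeric check of this example; this is a minimal sketch, and the values of X1..Y2 are arbitrary stand-ins, not from the slides. For matrices, A = B x1 U(1) x2 U(2) is just A = U(1) @ B @ U(2).T.

```python
import numpy as np

X1, X2, Y1, Y2 = 1.0, 2.0, 3.0, 5.0

A = np.array([[X1, X2],
              [Y1, Y2]])
U = np.array([[1.0,  0.0],       # U(1) = U(2) = [1 0; 1 -1]
              [1.0, -1.0]])
B = np.array([[X1,      X1 - X2],
              [X1 - Y1, (X1 - Y1) - (X2 - Y2)]])

# A = B x1 U(1) x2 U(2), rebuilt with ordinary matrix algebra.
assert np.allclose(A, U @ B @ U.T)
```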
4
Linear Model
A (I1 x I2) = U(1) (I1 x J1) · B (J1 x J2) · U(2)^T (J2 x I2)
5
Multilinear Model
Generalization of the linear model:
A = B x1 U(1) x2 U(2) x3 U(3) ... xn U(n) ... xN U(N)
where A is the data tensor, B the core tensor, and each U(n) an orthogonal transformation.
6
How to Multiply?
The mode-n product is defined through the mode-n flattening B(n):
B xn U(n) := U(n) · B(n), with the result folded back into a tensor.
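A minimal NumPy sketch of exactly this definition; the helper names unfold/fold/mode_n_product are mine, not from the paper. Later sketches repeat these helpers so each snippet stays self-contained.

```python
import numpy as np

def unfold(T, n):
    """Mode-n flattening T_(n): mode n indexes the rows."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(mat, n, shape):
    """Inverse of unfold, given the target tensor shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(mat.reshape(full), 0, n)

def mode_n_product(T, U, n):
    """B xn U(n), computed as U(n) @ B_(n) and folded back."""
    shape = list(T.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, shape)

# Example: a 1 x m weight row contracts mode n down to size 1.
T = np.arange(24.0).reshape(2, 3, 4)
w = np.ones((1, 3)) / 3
print(mode_n_product(T, w, 1).shape)   # (2, 1, 4)
```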
7
Tensor Flattening
8
Example: the mode-1 flattening A(1) of a small numeric tensor (figure).
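The slide's numeric figure did not survive, so here is a small stand-in with arbitrary values showing what each flattening looks like; note the column ordering depends on the flattening convention used.

```python
import numpy as np

A = np.arange(8).reshape(2, 2, 2)          # a small 2x2x2 tensor

A1 = np.moveaxis(A, 0, 0).reshape(2, -1)   # mode-1 flattening A_(1)
A2 = np.moveaxis(A, 1, 0).reshape(2, -1)   # mode-2 flattening A_(2)
A3 = np.moveaxis(A, 2, 0).reshape(2, -1)   # mode-3 flattening A_(3)
print(A1, A2, A3, sep="\n")
```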
9
Face in Multilinear Model
Data Tensor
10
Mathematically…
Each mode matrix is the left singular matrix of an SVD on the flattened data tensor. Why does that work?
In data reduction, PCA projects as Y = e^T X.
SVD: A = U S V^T
so AA^T = U S V^T (U S V^T)^T = U S V^T V S U^T = U S² U^T.
PCA: Cov(A) = AA^T / (n − 1); diagonalizing AA^T = e D e^T gives U = e.
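The claim that the eigenvectors of AA^T coincide with the left singular vectors can be checked numerically; this is a sketch on random data (sign ambiguity aside).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))

U, S, Vt = np.linalg.svd(A, full_matrices=False)
evals, e = np.linalg.eigh(A @ A.T)       # ascending eigenvalues
e = e[:, ::-1]                           # reorder descending to match SVD

assert np.allclose(S**2, evals[::-1])    # D = S²
assert np.allclose(np.abs(U), np.abs(e)) # columns match up to sign
```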
11
SVD for Multilinear Model
To find U(n), perform SVD on the mode-n flattening of the data tensor.
This is not optimal, however, and the authors use ALS (Alternating Least Squares) to refine it.
Many SIAM papers address this topic; it is out of our scope.
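A sketch of the straightforward (non-ALS) version just described: one U(n) per mode, taken as the left singular vectors of each flattening. The ALS refinement the authors actually use is not shown here.

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def n_mode_svd_factors(T):
    """U(n) = left singular vectors of T_(n), for every mode n."""
    return [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
            for n in range(T.ndim)]

U1, U2, U3 = n_mode_svd_factors(
    np.random.default_rng(0).standard_normal((3, 4, 5)))
```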
12
Mathematically… Again
13
Multilinear Face Model
Bilinear model (3-mode): 30K vertices x 10 expressions x 15 identities
Trilinear model (4-mode): the same data plus 5 visemes
Together these give a multilinear model of face geometry.
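To make the layout concrete, a toy sketch of assembling the bilinear version, with tiny placeholder sizes in place of 30K x 10 x 15 and random numbers in place of scan data; it assumes the common choice of leaving the vertex mode unreduced.

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(mat, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(mat.reshape(full), 0, n)

def mode_n_product(T, U, n):
    shape = list(T.shape); shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, shape)

# Toy stand-in for the vertices x expressions x identities data tensor.
rng = np.random.default_rng(1)
T = rng.standard_normal((12, 10, 15))

U2 = np.linalg.svd(unfold(T, 1), full_matrices=False)[0]  # expression basis
U3 = np.linalg.svd(unfold(T, 2), full_matrices=False)[0]  # identity basis

# Core kept full in the vertex mode: M = T x2 U2^T x3 U3^T.
M = mode_n_product(mode_n_product(T, U2.T, 1), U3.T, 2)
```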
14
Arbitrary Interpolation
f = M x2 w(2)
where f (1 x n) is the synthesized face, M stacks the m original faces along mode 2, and w(2) (1 x m) weights them.
15
Interpolation in Multilinear Model
The same idea extends across every mode of the multilinear model of face geometry:
f = M x2 w(2) x3 w(3) x4 w(4) ... xN w(N)
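A sketch of synthesizing one face this way; shapes and weights are illustrative, with uniform weights standing in for a real attribute choice.

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(mat, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(mat.reshape(full), 0, n)

def mode_n_product(T, U, n):
    shape = list(T.shape); shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, shape)

M = np.random.default_rng(2).standard_normal((12, 10, 15))  # toy core

w2 = np.full((1, 10), 1 / 10)   # expression weights, 1 x m row
w3 = np.full((1, 15), 1 / 15)   # identity weights

f = mode_n_product(mode_n_product(M, w2, 1), w3, 2).ravel()
print(f.shape)   # (12,) -- one synthesized vertex vector
```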
16
Missing Data
So far, we dealt with a perfect data set; in practice, that is NOT the case.
Maximum a posteriori (MAP) estimation failed, so they turn to Probabilistic Principal Component Analysis (PPCA).
17
Short Review on PPCA
t = W x + μ + ε
x is N(0, I); ε is isotropic noise, N(0, σ² I).
So t is N(μ, W W^T + σ² I).
Given t, we want to estimate W and σ.
Maximize the likelihood L = p(t) = Π_i p(t_i | W, σ²).
18
Short Review on PPCA (cont.)
Taking the log-likelihood, the maximum likelihood estimators (MLE) are:
W_ML = U_q (Λ_q − σ² I)^(1/2) R
σ²_ML = 1/(d − q) Σ_{j = q+1..d} λ_j
where the columns of U_q are the top-q eigenvectors, Λ_q holds the corresponding eigenvalues λ_j, and R is an arbitrary rotation.
End of review.
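The two estimators above in code; a sketch on synthetic data drawn from a PPCA model, taking R = I for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
d, q, n = 6, 2, 500
W_true = rng.standard_normal((d, q))
t = rng.standard_normal((n, q)) @ W_true.T + 0.1 * rng.standard_normal((n, d))

S = np.cov(t, rowvar=False)            # sample covariance (d x d)
lam, U = np.linalg.eigh(S)             # ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]         # reorder descending

sigma2 = lam[q:].mean()                # σ²_ML: mean of discarded eigenvalues
W = U[:, :q] @ np.diag(np.sqrt(lam[:q] - sigma2))   # W_ML with R = I
```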
19
Probabilistic Face Model
t = W x + μ + ε, with likelihood function p(t | x, W)
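Evaluating that Gaussian likelihood for one observation; a sketch with all values synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
d, q = 6, 2
W, mu = rng.standard_normal((d, q)), rng.standard_normal(d)
x, sigma2 = rng.standard_normal(q), 0.1
t = W @ x + mu + np.sqrt(sigma2) * rng.standard_normal(d)

# log p(t | x, W) for t ~ N(W x + mu, sigma^2 I)
resid = t - (W @ x + mu)
log_p = -0.5 * (d * np.log(2 * np.pi * sigma2) + resid @ resid / sigma2)
```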
20
Missing Data
T_j = mode-j flattening of the data tensor T
J_j = mode-j flattening of J = M x2 U(2) ... x(j-1) U(j-1) x(j+1) U(j+1) ... xn U(n)
(every factor except U(j)), so that T_j = U(j) J_j.
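A numeric check of that identity on a small random tensor: multiply in every factor except U(j), flatten, and the remaining factor acts as an ordinary matrix on the left.

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(mat, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(mat.reshape(full), 0, n)

def mode_n_product(T, U, n):
    shape = list(T.shape); shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, shape)

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 4, 5))
U1, U2, U3 = (rng.standard_normal((s, s)) for s in M.shape)

T = mode_n_product(mode_n_product(mode_n_product(M, U1, 0), U2, 1), U3, 2)
J = mode_n_product(mode_n_product(M, U1, 0), U3, 2)   # skip mode j = 2
assert np.allclose(unfold(T, 1), U2 @ unfold(J, 1))   # T_j = U(j) J_j
```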
21
Face Tracking
Kanade-Lucas-Tomasi (KLT) algorithm: solve Z d = Z (p − p0) = e for the displacement d.
For vertex i under a similarity transform (scale s, rotation R, translation t):
Z (s R f_i + t − p0) = e
Expressing f_i through the multilinear model with attribute weights w_m:
Z (s R M_{m,i} w_m + t − p0) = e
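The tracker's core linear solve, shown with placeholder numbers; in practice Z and e come from summed image gradients over the tracked patch.

```python
import numpy as np

Z = np.array([[4.0, 1.0],      # 2x2 gradient matrix (illustrative values)
              [1.0, 3.0]])
e = np.array([0.5, -0.2])      # error vector (illustrative values)

d = np.linalg.solve(Z, e)      # per-feature displacement d = p - p0
# With p = s R f_i + t, the same residual constrains the pose (s, R, t)
# and the model weights w instead of a free displacement.
```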
22
Comparison
23
Result