Separating Style and Content with Bilinear Models

Presentation transcript:

Separating Style and Content with Bilinear Models
Joshua B. Tenenbaum, William T. Freeman
Computer examples: Barun Singh, 25 Feb 2002

PHILOSOPHY & REPRESENTATION
Data contains two components, style and content, and we want to represent them separately.

Symmetric bilinear model:
  y_k^{sc} = \sum_{i=1}^{I} \sum_{j=1}^{J} w_{ijk} a_i^s b_j^c
where y is the observed data, a^s is the style vector, b^c is the content vector, i and j index the components of style and content, and W is the matrix of basis vectors (e.g., "eigenfaces").

Asymmetric bilinear model:
  y^{sc} = A^s b^c
or, in stacked matrix form, Y = A B with Y: (SK) x C, A: (SK) x J, B: J x C, where A^s is a matrix of style-specific basis vectors. This model is more flexible and easier to deal with.
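To make the two forms concrete, here is a minimal NumPy sketch (names and shapes are illustrative, following the slide's notation; this is not the authors' code):

```python
import numpy as np

def symmetric_render(W, a, b):
    """Symmetric model: y_k = sum_ij w_ijk * a_i * b_j.
    W: (I, J, K) basis tensor; a: (I,) style vector; b: (J,) content vector."""
    return np.einsum('ijk,i,j->k', W, a, b)

def asymmetric_render(A, b):
    """Asymmetric model: y = A^s b^c, with a style-specific basis
    A^s of shape (K, J) and a content vector b^c of shape (J,)."""
    return A @ b
```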

PROBLEMS TO BE SOLVED
Given a labeled training set of observations in multiple styles and content classes, extrapolate a new style to unobserved content classes:
1. Fit the asymmetric model (find A and b for the known styles and contents) using the SVD.
2. Find the style matrix that best explains the data observed in the incomplete style, i.e., that minimizes the reconstruction error E = \sum_c ||y^{s'c} - A^{s'} b^c||^2 over the observed content classes.
3. Extrapolate to the unobserved content classes using the estimated style matrix.
OLC is used to control overfitting, governed by a parameter λ: λ = 0 gives the purely asymmetric model, λ = ∞ the purely symmetric model.
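A hedged sketch of steps 1-3 (SVD fit of the asymmetric model, then a least-squares estimate of the new style matrix; the λ-regularized combination is omitted, and all function names are illustrative):

```python
import numpy as np

def fit_asymmetric(Y, J):
    """Fit Y ~= A @ B by truncated SVD. Y stacks the styles vertically,
    shape (S*K, C); returns A of shape (S*K, J) and B of shape (J, C)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :J] * s[:J], Vt[:J, :]

def extrapolate_style(Y_new, B_obs, B_all):
    """Estimate a new style's basis A_new from observations Y_new (K, C_obs)
    of a subset of content classes, minimizing ||Y_new - A_new @ B_obs||^2,
    then render every known content class in the new style."""
    A_new = Y_new @ np.linalg.pinv(B_obs)
    return A_new @ B_all
```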

PROBLEMS TO BE SOLVED
Given a labeled training set of observations in multiple styles and content classes, classify content observed in a new style:
1. Fit the asymmetric model.
2. Use a separable mixture model (SMM), fit with the EM algorithm, to determine the style matrix for the new style. Parameters: model dimensionality J, model variance σ², and maximum number of EM iterations t_max.
3. Select the content class c that maximizes Pr(s', c | y).
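A simplified EM sketch of the SMM step, assuming a Gaussian likelihood with fixed variance σ² and uniform priors (the paper's mixture also carries priors over style and content, omitted here; everything below is illustrative):

```python
import numpy as np

def smm_classify(Y_new, B, sigma2=0.5, t_max=50, seed=0):
    """Y_new: (K, N) unlabeled images in one new style; B: (J, C) known
    content vectors. Returns content labels and the estimated style matrix."""
    K, N = Y_new.shape
    rng = np.random.default_rng(seed)
    A = 0.01 * rng.standard_normal((K, B.shape[0]))   # initial new-style basis
    for _ in range(t_max):
        # E-step: p(c | y) proportional to exp(-||y - A b^c||^2 / (2 sigma2))
        recon = A @ B                                             # (K, C)
        d2 = ((Y_new[:, :, None] - recon[:, None, :]) ** 2).sum(axis=0)
        logp = -d2 / (2.0 * sigma2)
        R = np.exp(logp - logp.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True)                         # (N, C)
        # M-step: weighted least squares for the new style matrix A
        num = Y_new @ R @ B.T                                     # (K, J)
        den = (B * R.sum(axis=0)) @ B.T                           # (J, J)
        A = num @ np.linalg.pinv(den)
    return R.argmax(axis=1), A    # argmax_c Pr(c | y), estimated A^{s'}
```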

PROBLEMS TO BE SOLVED
Given a labeled training set of observations in multiple styles and content classes, translate new content observed only in new styles into the known styles or content classes:
1. Fit the symmetric model (find W, a, and b for the known styles and contents) using an iterated SVD procedure.
2. Given a single image in a new style and content class, iterate to find the style and content vectors for the new image, starting from an initial guess for the new content vector: solve for the style vector with the content vector fixed, then for the content vector with the style vector fixed, until convergence.
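The per-image step can be written as alternating least squares. A minimal sketch, assuming the learned basis W is stored as an (I, J, K) tensor (this is an illustrative update rule, not the paper's exact procedure):

```python
import numpy as np

def translate_new_image(y, W, n_iter=20):
    """Recover the style vector a and content vector b for one image y (K,)
    given the learned basis tensor W (I, J, K), by alternating least squares."""
    I, J, K = W.shape
    a, b = np.ones(I), np.ones(J)          # initial guess, as on the slide
    for _ in range(n_iter):
        M_a = np.einsum('ijk,j->ki', W, b)           # with b fixed: y ~= M_a @ a
        a, *_ = np.linalg.lstsq(M_a, y, rcond=None)
        M_b = np.einsum('ijk,i->kj', W, a)           # with a fixed: y ~= M_b @ b
        b, *_ = np.linalg.lstsq(M_b, y, rcond=None)
    return a, b
```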

TOY EXAMPLE - intro
The image is made of 4 pixels, each of which is either white or red. Style determines whether the top or the bottom row is red; content determines whether the left or the right column is red.
[Figure: SYMMETRIC MODEL — basis images (W), style vectors (a), content vectors (b), output images (y).]
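One plausible construction of this toy set in NumPy (the exact pixel coding is not recoverable from the transcript, so treat the red/white encoding below as an assumption):

```python
import numpy as np

def toy_image(top_row_red, left_col_red):
    """2x2 image, flattened to a length-4 vector (1 = red, 0 = white).
    Style picks which row is red; content picks which column is red."""
    img = np.zeros((2, 2))
    img[0 if top_row_red else 1, :] = 1.0     # style: red row (assumed coding)
    img[:, 0 if left_col_red else 1] = 1.0    # content: red column
    return img.reshape(-1)                    # represented as a vector

# Full 2-style x 2-content training grid, one image per (style, content) pair.
Y = np.column_stack([toy_image(s, c) for s in (True, False)
                                     for c in (True, False)])
```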

TOY EXAMPLE - intro
[Figure: ASYMMETRIC MODEL — content vectors (b), style-specific basis images (A), output images (y). Note: the images are drawn as 2x2 blocks but are represented as vectors, not matrices.]

TOY EXAMPLE - extrapolation
[Figure: the asymmetric model is fit to the training styles (content vectors (b), style-specific basis images (A)); the estimated style matrix is then used to extrapolate to the cells marked "?".]

FONTS EXAMPLE - extrapolation
[Figure: training set with content = letter and style = font; one style is incomplete, with some letters unobserved.]

FONTS EXAMPLE - extrapolation
[Figure: extrapolated letters vs. actual, for the asymmetric and symmetric models at model dimensions 10 through 60, and for the symmetric model with asymmetric prior at dimension 60.]

TOY EXAMPLE - classification
Step 1: Fit the asymmetric model to the training set.
[Figure: content vectors (b) and style-specific basis images (A).]

TOY EXAMPLE - classification
Step 2: Use the separable mixture model with EM to classify.
[Figure: content vectors (b), style-specific basis images (A); actual vs. resulting images for σ² = 0.5, σ² = 0.6, and σ² = 0.35.]

FACES EXAMPLE - translation
Content: faces. Style: lighting.

finito