1
2010 Winter School on Machine Learning and Vision
Sponsored by the Canadian Institute for Advanced Research and Microsoft Research India, with additional support from the Indian Institute of Science, Bangalore, and the University of Toronto, Canada.
2
Outline
1. Approximate inference: mean field and variational methods
2. Learning generative models of images
3. Learning epitomes of images
3
Part A Approximate inference: Mean field and variational methods
4
Line processes for binary images (Geman and Geman 1984)
[Figure: example binary patterns; patterns containing line-like structure are assigned high values by the potential function, other patterns are assigned low values, so under P, lines are probable.]
5
Use tablet to derive variational inference method
6
Denoising images using line process models
7
Part B Learning Generative Models of Images
Brendan Frey, University of Toronto and Canadian Institute for Advanced Research
8
Generative models
Generative models are trained to explain many different aspects of the input image.
– Using an objective function like log P(image), a generative model benefits by accounting for all pixels in the image.
Contrast this with discriminative models trained in a supervised fashion (e.g., object recognition).
– Using an objective function like log P(class|image), a discriminative model benefits by accounting for pixel features that distinguish between classes.
9
Rejecting a common criticism of generative models
Critics of generative models often point to supposedly well-defined tasks (e.g., object recognition) and claim that discriminatively trained classifiers (e.g., SVMs) perform better. My experience is that such critics usually study artificially simplistic tasks that are mostly unrelated to real-world tasks:
– Hand-written digit recognition
– Identifying cows in PASCAL or Caltech256 images
– Classifying gene function using microarray data
10
Accepting a common criticism of generative models
Generative models do well at automatically answering diverse and even unexpected queries, but on specific supervised learning tasks, discriminative methods (e.g., SVMs) usually perform better.
11
What constitutes an image?
– Uniform 2-D array of color pixels
– Uniform 2-D array of grey-scale pixels
– Non-uniform images (e.g., retinal images, compressed sampling images)
– Features extracted from the image (e.g., SIFT features)
– Subsets of image pixels selected by the model (must be careful to represent the universe)
– …
12
What constitutes a generative model?
13
Learning Bayesian Networks: Exact and approximate methods
14
Maximum likelihood learning when all variables are visible (complete data)
Suppose we observe N IID training cases $v^{(1)}, \ldots, v^{(N)}$. Let $\theta$ be the parameters of a model $P(v|\theta)$. The maximum likelihood estimate of $\theta$ is
$\theta_{ML} = \arg\max_\theta \prod_n P(v^{(n)}|\theta) = \arg\max_\theta \log\big(\prod_n P(v^{(n)}|\theta)\big) = \arg\max_\theta \sum_n \log P(v^{(n)}|\theta)$
15
Complete data in Bayes nets
All variables are observed, so $P(v|\theta) = \prod_i P(v_i|\mathrm{pa}_i, \theta_i)$, where $\mathrm{pa}_i$ denotes the parents of $v_i$ and $\theta_i$ parameterizes $P(v_i|\mathrm{pa}_i)$. Since $\arg\max(\cdot) = \arg\max\log(\cdot)$,
$\theta_i^{ML} = \arg\max_{\theta_i} \sum_n \log P(v^{(n)}|\theta) = \arg\max_{\theta_i} \sum_n \sum_j \log P(v_j^{(n)}|\mathrm{pa}_j^{(n)}, \theta_j) = \arg\max_{\theta_i} \sum_n \log P(v_i^{(n)}|\mathrm{pa}_i^{(n)}, \theta_i)$
Each child-parent module can be learned separately.
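To make the decomposition concrete, here is a minimal sketch (a hypothetical two-variable network and toy data, not from the lecture) showing that, with complete data, each child-parent conditional is just a normalized count table, estimated independently of the others.

```python
import numpy as np

# Minimal sketch (hypothetical data): ML learning in a fully observed, discrete
# Bayes net A -> B. Each child-parent conditional is a normalized count table,
# estimated independently of the other conditionals.
data = np.array([[0, 0], [0, 1], [1, 1], [1, 1], [0, 0]])  # columns: A, B

p_a = np.bincount(data[:, 0], minlength=2) / len(data)     # P(A)

counts = np.zeros((2, 2))                                   # counts[a, b]
for a, b in data:
    counts[a, b] += 1
p_b_given_a = counts / counts.sum(axis=1, keepdims=True)    # P(B|A), rows sum to 1

print("P(A) =", p_a)
print("P(B|A) =", p_b_given_a)
```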
16
Example: Learning a Mixture of Gaussians from labeled data
Recall: for cluster k, the probability density of x is $\mathcal{N}(x;\mu_k,\Sigma_k)$, and the probability of cluster k is $p(z_k = 1) = \pi_k$.
Complete data: each training case is a $(z_n, x_n)$ pair; let $N_k$ be the number of cases in class k.
ML estimation: $\pi_k = N_k/N$, $\mu_k = \frac{1}{N_k}\sum_{n: z_{nk}=1} x_n$, $\Sigma_k = \frac{1}{N_k}\sum_{n: z_{nk}=1}(x_n-\mu_k)(x_n-\mu_k)^\top$.
That is, just learn one Gaussian for each class of data.
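A minimal sketch of these estimates on hypothetical labelled 2-D data: one Gaussian is fit per class, and the mixing proportions are the class frequencies.

```python
import numpy as np

# Minimal sketch (hypothetical labelled 2-D data): ML fit of a mixture of Gaussians
# from complete data -- one Gaussian per class, mixing proportions = class frequencies.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 0.5, (70, 2))])
z = np.array([0] * 50 + [1] * 70)          # class labels

K = 2
pi = np.zeros(K)
mu = np.zeros((K, 2))
cov = np.zeros((K, 2, 2))
for k in range(K):
    xk = x[z == k]
    pi[k] = len(xk) / len(x)               # pi_k = N_k / N
    mu[k] = xk.mean(axis=0)                # class mean
    cov[k] = np.cov(xk.T, bias=True)       # class covariance (ML: divide by N_k)

print(pi, mu, sep="\n")
```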
17
Example: Learning from complete data, a continuous child with continuous parents
Estimation becomes a regression-type problem. E.g., a linear Gaussian model: $P(v_i|\mathrm{pa}_i, \theta_i) = \mathcal{N}\big(v_i;\ w_{i0} + \sum_{n: v_n \in \mathrm{pa}_i} w_{in} v_n,\ C_i\big)$, i.e., the mean is a linear function of the parents. Estimation: linear regression.
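A minimal sketch of this reduction, with hypothetical parent and child values: the weights of the linear Gaussian CPD are obtained by ordinary least squares, and the noise variance from the residuals.

```python
import numpy as np

# Minimal sketch (hypothetical data): ML estimation of a linear Gaussian CPD,
# P(v_i | pa_i) = N(v_i; w0 + w^T pa_i, C), reduces to ordinary linear regression.
rng = np.random.default_rng(1)
parents = rng.normal(size=(200, 3))                      # observed parent values
child = 0.5 + parents @ np.array([1.0, -2.0, 0.3]) \
        + rng.normal(scale=0.1, size=200)                # observed child values

X = np.column_stack([np.ones(len(parents)), parents])    # prepend a bias column
w, *_ = np.linalg.lstsq(X, child, rcond=None)            # [w0, w1, w2, w3]
noise_var = np.mean((child - X @ w) ** 2)                # ML noise variance C
print(w, noise_var)
```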
18
Learning fully-observed MRFs
It turns out we can NOT directly estimate each potential using only observations of its variables:
$P(v|\theta) = \prod_i \phi(v_{C_i}|\theta_i) \big/ \big(\sum_v \prod_i \phi(v_{C_i}|\theta_i)\big)$
Problem: the partition function (the denominator) couples all of the potentials together.
19
Learning Bayesian networks when there is missing data
20
Example: Mixture of K unit-variance Gaussians
$P(x) = \sum_k \pi_k\,(2\pi)^{-1/2}\exp\big(-(x-\mu_k)^2/2\big)$
The log-likelihood to be maximized is $\log\big(\sum_k \pi_k\,(2\pi)^{-1/2}\exp(-(x-\mu_k)^2/2)\big)$, summed over the training cases. The parameters $\{\pi_k, \mu_k\}$ that maximize this do not have a simple, closed-form solution.
One approach: use a nonlinear optimizer. This approach is intractable if the number of components is too large.
A different approach…
21
The expectation maximization (EM) algorithm (Dempster, Laird and Rubin 1977)
Learning was more straightforward when the data was complete. Can we use probabilistic inference (compute $P(h|v,\theta)$) to fill in the missing data and then use the learning rules for complete data? YES: this is called the EM algorithm.
22
Expectation maximization (EM) algorithm (ensemble completion)
Initialize $\theta$ (randomly or cleverly).
E-Step: compute $Q^{(n)}(h) = P(h|v^{(n)},\theta)$ for the hidden variables h, given the visible variables v.
M-Step: holding $Q^{(n)}(h)$ constant, maximize $\sum_n \sum_h Q^{(n)}(h)\log P(v^{(n)},h|\theta)$ with respect to $\theta$.
Repeat the E and M steps until convergence. Each iteration increases $\log P(v|\theta) = \sum_n \log\big(\sum_h P(v^{(n)},h|\theta)\big)$.
23
EM in Bayesian networks
Recall $P(v,h|\theta) = \prod_i P(x_i|\mathrm{pa}_i, \theta_i)$, where $x = (v,h)$. Then maximizing $\sum_n \sum_h Q^{(n)}(h)\log P(v^{(n)},h|\theta)$ with respect to $\theta_i$ becomes equivalent to maximizing, for each $x_i$,
$\sum_n \sum_{x_i,\mathrm{pa}_i} Q^{(n)}(x_i,\mathrm{pa}_i)\log P(x_i|\mathrm{pa}_i, \theta_i)$,
where $Q^{(n)}$ places zero mass on any configuration in which an observed variable $x_k$ differs from its observed value $x_k^*$.
GIVEN the Q-distributions, the conditional P-distributions can be updated independently.
24
EM in Bayesian networks
E-Step: compute $Q^{(n)}(x_i,\mathrm{pa}_i) = P(x_i,\mathrm{pa}_i|v^{(n)},\theta)$ for each variable $x_i$.
M-Step: for each $x_i$, maximize $\sum_n \sum_{x_i,\mathrm{pa}_i} Q^{(n)}(x_i,\mathrm{pa}_i)\log P(x_i|\mathrm{pa}_i,\theta_i)$ with respect to $\theta_i$.
25
EM for a mixture of Gaussians
Initialization: pick the $\mu$'s, $\Sigma$'s and $\pi$'s randomly, but validly.
E Step: for each training case we need $q(z) = p(z|x) = p(x|z)p(z)\big/\sum_z p(x|z)p(z)$. Defining the responsibility $\gamma(z_{nk}) = q(z_{nk}=1)$, we need to actually compute $\gamma(z_{nk}) = \pi_k\,\mathcal{N}(x_n;\mu_k,\Sigma_k)\big/\sum_j \pi_j\,\mathcal{N}(x_n;\mu_j,\Sigma_j)$ (do it in the log-domain!).
M Step: re-estimate $\pi_k$, $\mu_k$ and $\Sigma_k$ using the responsibilities in place of the labels. Recall: for labeled data, $\gamma(z_{nk}) = z_{nk}$.
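As a concrete illustration of the two steps (not the lecture's code), here is a minimal EM loop for a mixture of Gaussians on hypothetical 2-D data; responsibilities are normalized in the log-domain for numerical stability, and scipy is used only for the Gaussian log-density.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Minimal EM sketch for a mixture of Gaussians on hypothetical 2-D data.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
N, D = x.shape
K = 2

pi = np.full(K, 1.0 / K)                    # mixing proportions
mu = x[rng.choice(N, K, replace=False)]     # random but valid initialization
cov = np.stack([np.eye(D)] * K)

for _ in range(50):
    # E step: gamma(z_nk) = q(z_nk = 1), via normalized log-probabilities
    log_r = np.stack([np.log(pi[k]) + multivariate_normal.logpdf(x, mu[k], cov[k])
                      for k in range(K)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)

    # M step: weighted versions of the complete-data (labelled) estimates
    Nk = r.sum(axis=0)
    pi = Nk / N
    mu = (r.T @ x) / Nk[:, None]
    for k in range(K):
        d = x - mu[k]
        cov[k] = (r[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(D)

print(pi, mu, sep="\n")
```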
26
EM for mixture of Gaussians: E step c z 1 = 0.5, 2 = 0.5, Images from data set
27
EM for mixture of Gaussians: E step
[Figure: an image z from the data set; the posterior P(c|z) is 0.52 for c=1 and 0.48 for c=2; π1 = 0.5, π2 = 0.5.]
28
EM for mixture of Gaussians: E step
[Figure: another image z from the data set; P(c|z) is 0.51 for c=1 and 0.49 for c=2; π1 = 0.5, π2 = 0.5.]
29
EM for mixture of Gaussians: E step
[Figure: another image z from the data set; P(c|z) is 0.48 for c=1 and 0.52 for c=2; π1 = 0.5, π2 = 0.5.]
30
EM for mixture of Gaussians: E step
[Figure: another image z from the data set; P(c|z) is 0.43 for c=1 and 0.57 for c=2; π1 = 0.5, π2 = 0.5.]
31
EM for mixture of Gaussians: M step
[Figure: set μ1 to the average of z weighted by P(c=1|z), and μ2 to the average of z weighted by P(c=2|z); π1 = 0.5, π2 = 0.5.]
32
EM for mixture of Gaussians: M step (continued)
[Figure: the same mean updates, μ1 and μ2 set to the averages of z weighted by P(c=1|z) and P(c=2|z); π1 = 0.5, π2 = 0.5.]
33
EM for mixture of Gaussians: M step
[Figure: set the class-1 variance map to the average of diag((z-μ1)(z-μ1)ᵀ) weighted by P(c=1|z), and the class-2 variance map to the average of diag((z-μ2)(z-μ2)ᵀ) weighted by P(c=2|z); π1 = 0.5, π2 = 0.5.]
34
EM for mixture of Gaussians: M step (continued)
[Figure: the same variance-map updates, weighted by P(c=1|z) and P(c=2|z); π1 = 0.5, π2 = 0.5.]
35
… after iterating to convergence:
[Figure: the two learned cluster models c, with mixing proportions π1 = 0.6, π2 = 0.4.]
36
Why does EM work?
37
Gibbs free energy
Somehow, we need to move the log() function in the expression $\log\big(\sum_h P(h,v)\big)$ inside the summation to obtain $\log P(h,v)$, which simplifies. We can do this using Jensen's inequality, which yields the free energy bound.
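The bound itself appeared as an image on the slide; a sketch of the standard Jensen's-inequality form it refers to, consistent with the F used on the following slides:

```latex
% Jensen's inequality applied to log P(v), for any distribution Q(h):
\begin{align*}
\log P(v) &= \log \sum_h P(h,v)
           = \log \sum_h Q(h)\,\frac{P(h,v)}{Q(h)}
           \;\ge\; \sum_h Q(h)\,\log\frac{P(h,v)}{Q(h)} \;=\; -F,\\
\text{where}\quad
F &= \sum_h Q(h)\log Q(h) \;-\; \sum_h Q(h)\log P(h,v)
   \quad\text{(the Gibbs free energy).}
\end{align*}
```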
38
Properties of the free energy
$F \ge -\log P(v)$. The minimum of F with respect to Q is attained at $Q(h) = P(h|v)$, which gives $F = -\log P(v)$.
39
Proof that EM maximizes log P(v) (Neal and Hinton 1993)
E-Step: by setting $Q(h) = P(h|v)$, we make the bound tight, so that $F = -\log P(v)$.
M-Step: by maximizing $\sum_h Q(h)\log P(h,v)$ with respect to the parameters of P, we are minimizing F with respect to the parameters of P.
Since $-\log P_{\mathrm{new}}(v) \le F_{\mathrm{new}} \le F_{\mathrm{old}} = -\log P_{\mathrm{old}}(v)$, we have $\log P_{\mathrm{new}}(v) \ge \log P_{\mathrm{old}}(v)$.
40
Generalized EM
M-Step: instead of minimizing F w.r.t. P, just decrease F w.r.t. P.
E-Step: instead of minimizing F w.r.t. Q (i.e., by setting Q(h) = P(h|v)), just decrease F w.r.t. Q.
Approximations:
– Variational techniques (which decrease F w.r.t. Q)
– Loopy belief propagation (note the phrase "loopy")
– Markov chain Monte Carlo (stochastic …)
41
Summary of learning Bayesian networks
Observed variables decouple learning in different conditional PDFs. In contrast, hidden variables couple learning in different conditional PDFs. Learning models with hidden variables entails iteratively filling in the hidden variables using exact or approximate probabilistic inference, and updating every child-parent conditional PDF.
42
Back to… Learning Generative Models of Images
Brendan Frey, University of Toronto and Canadian Institute for Advanced Research
43
What constitutes an image?
– Uniform 2-D array of color pixels
– Uniform 2-D array of grey-scale pixels
– Non-uniform images (e.g., retinal images, compressed sampling images)
– Features extracted from the image (e.g., SIFT features)
– Subsets of image pixels selected by the model (must be careful to represent the universe)
– …
44
Experiment: fitting a mixture of Gaussians to pixel vectors extracted from complicated images
[Figure: learned cluster means for model sizes of 1, 2, 3 and 4 classes.]
45
Why didn't it work?
Is there a bug in the software?
– I don't think so, because the log-likelihood monotonically increases and the software works properly for toy data generated from a mixture of Gaussians.
Is there a mistake in our mathematical derivation?
– The EM algorithm for a mixture of Gaussians has been studied by many people; I think the math is OK.
46
Why didn't it work?
Are we missing some important hidden variables? YES: the location of each object.
47
Transformed mixtures of Gaussians (TMG) (Frey and Jojic, 1999-2001)
[Figure: the TMG Bayes net with class c, latent image z, shift T and observed image x, along with example cluster means, variance maps and the noise map; π1 = 0.6, π2 = 0.4.]
$P(c) = \pi_c$ and $P(x|z,T) = \mathcal{N}(x;\,Tz,\,\Psi)$, where T is a discrete shift with prior P(T) and the noise covariance (written here as $\Psi$) is diagonal.
48
EM for TMG
[Figure: the TMG Bayes net over c, z, T and x.]
E step: compute Q(T) = P(T|x), Q(c) = P(c|x), Q(c,z) = P(z,c|x) and Q(T,z) = P(z,T|x) for each x in the data.
M step: set
– the class probabilities to the average of Q(c)
– the shift probabilities to the average of Q(T)
– the cluster means to the average mean of z under Q(z|c)
– the cluster variances to the average variance of z under Q(z|c)
– the noise variance to the average variance of x - Tz under Q(T,z)
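To make the E step concrete, here is an illustrative sketch (not tmgEM.m) that computes the joint posterior over class and discrete shift by marginalizing z analytically; the 1-D "images" and all names (x, mu, phi, psi, pi) are hypothetical stand-ins for the model quantities above.

```python
import numpy as np

# Illustrative sketch: posterior over class c and discrete shift T in a TMG-style
# model, by brute force over all cyclic shifts of a 1-D "image".
def tmg_posterior(x, mu, phi, psi, pi):
    D, C = len(x), len(mu)
    log_joint = np.zeros((C, D))              # log P(x, c, T) for every class and shift
    for c in range(C):
        for t in range(D):                    # T = cyclic shift of the latent image z
            m = np.roll(mu[c], t)             # shifted cluster mean
            v = np.roll(phi[c], t) + psi      # predictive variance of x given (c, T)
            log_joint[c, t] = (np.log(pi[c]) - np.log(D)   # uniform prior over shifts
                               - 0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v))
    post = np.exp(log_joint - log_joint.max())
    return post / post.sum()                  # Q(c, T) = P(c, T | x)

# Toy usage: a shifted, noisy version of cluster 0's mean
rng = np.random.default_rng(0)
mu = [np.array([5., 0., 0., 0.]), np.array([0., 3., 3., 0.])]
phi = [np.full(4, 0.5), np.full(4, 0.5)]
x = np.roll(mu[0], 2) + rng.normal(scale=0.3, size=4)
print(tmg_posterior(x, mu, phi, psi=0.09, pi=[0.6, 0.4]))
```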
49
Experiment: fitting transformed mixtures of Gaussians to complicated images
[Figure: learned cluster means for model sizes of 1, 2, 3 and 4 classes, from random initialization.]
50
Let's peek into the Bayes net (different movie)
[Video panels: x, E[z|x], P(c|x), argmax_c P(c|x), argmax_T P(T|x), and E[Tz|x].]
51
tmgEM.m is available on the web
52
Accounting for multiple objects in the same image
53
How can we compose an image that includes multiple objects?
In TMG, the foreground and background were distinguished by the noise map. When there are multiple objects, we need a way to assign each pixel to one object.
[Figure: the TMG cluster means, variance maps and noise map; π1 = 0.6, π2 = 0.4.]
54
Layered 2.5-D representations
Adelson and Anandan (1990) described image patches as a composition of 2-D layers:
image = mask × foreground picture + (1 - mask) × background picture
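A minimal sketch of this composition rule on hypothetical arrays: each pixel of the composite is a mask-weighted blend of foreground and background.

```python
import numpy as np

# Layered composition: image = mask * foreground + (1 - mask) * background
h, w = 4, 4
foreground = np.full((h, w), 0.9)   # bright foreground "sprite"
background = np.full((h, w), 0.1)   # dark background
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1.0                # mask (transparency): 1 where the sprite is opaque

image = mask * foreground + (1.0 - mask) * background
print(image)
```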
55
Example
[Figure: a composite image formed as mask × foreground + (1 - mask) × background, shown pictorially.]
56
A generative model for layered vision (Jojic and Frey 2001, Frey, Kannan and Jojic, 2003)
57
Movies
58
A generative model for layered vision (Jojic and Frey 2001, Frey, Kannan and Jojic, 2003)
59
Random variables
– Appearance and transparency of layer l
– Class of layer l
– Contribution that layer l makes to the input
– Transformation of layer l
– Contribution of layer l, including transformations
– Subspace coordinate of layer l
– Image
60
Probability model
[Slide equations (shown as images): the image model, the model of the hidden variables, and the joint pdf.]
61
Efficient probabilistic reasoning and learning
Approximate the posterior using a q-distribution. [Slide equation (shown as an image): the form of the q-distribution.]
62
Optimizing q
Minimize the variational free energy F.
Algorithm:
– Initialize the variational parameters
– Select a variational parameter or a model parameter, and adjust it so as to minimize F
– Repeat until convergence
63
Inference updates
Q(c) update; introduce auxiliary variables; rewriting the image model. [The corresponding equations were shown as images on the slide.]
64
Inference updates
T update; s update; m update. [The corresponding equations were shown as images on the slide.]
65
Learning updates
Image variances; other parameters. [The corresponding equations were shown as images on the slide.]
67
Movies
68
Inferring leg motion
69
Accounting for local image features using epitomes
A good way to model local image features is to factorize them (cf. Bruno's talk). A simpler method is to cluster them (Freeman and Pasztor, 1999). A generative model based on clustered image patches needs a way to account for how image patches are coordinated.
70
Learning the epitome of an image (Jojic, Kannan and Frey, ICCV 2003)
71
Movies
72
An EM-type learning algorithm
73
Gaussian likelihood model
[Slide equation (shown as an image), with labels: patch k, the map from the epitome to the image, the set of pixel indices in patch k, and the epitome (the parameters).]
74
EM for the Gaussian likelihood model
[Slide equations (shown as images): the E step and the M step updates.]
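The exact update equations appeared as images on the slide; the following is a simplified, illustrative epitome-style EM sketch (means only, fixed variance, no wrap-around), not the published algorithm.

```python
import numpy as np

# Simplified epitome EM: each patch is softly assigned to epitome positions (E step),
# then "stitched" into the epitome mean as a responsibility-weighted average (M step).
def learn_epitome(patches, e_size, p_size, iters=20, var=0.1):
    rng = np.random.default_rng(0)
    e_mean = rng.uniform(0.3, 0.7, (e_size, e_size))   # epitome mean map
    positions = [(i, j)
                 for i in range(e_size - p_size + 1)
                 for j in range(e_size - p_size + 1)]
    for _ in range(iters):
        num = np.zeros((e_size, e_size))
        den = np.zeros((e_size, e_size))
        for patch in patches:
            # E step: responsibility of each epitome position for this patch
            logp = np.array([-0.5 * np.sum((patch - e_mean[i:i + p_size, j:j + p_size]) ** 2) / var
                             for (i, j) in positions])
            r = np.exp(logp - logp.max())
            r /= r.sum()
            # M step accumulators
            for w, (i, j) in zip(r, positions):
                num[i:i + p_size, j:j + p_size] += w * patch
                den[i:i + p_size, j:j + p_size] += w
        e_mean = num / np.maximum(den, 1e-8)
    return e_mean

# Toy usage: fifty 6x6 random patches summarized by a 16x16 epitome
patches = [np.random.default_rng(k).uniform(size=(6, 6)) for k in range(50)]
print(learn_epitome(patches, e_size=16, p_size=6).shape)
```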
75
Examples
77
[Figure: the learned epitome mean and variance.]
78
Why epitomes are interesting & useful
– Generative model of multi-sized patches
– Invariant to transformations (e.g., affine)
– Organizes and compresses patch data
– Proximity of patches in the epitome relates to the probability that the patches belong to the same part
– Incomplete observations are stitched into a single model
– Can model patterns at a wide range of scales
– Searching an epitome is much faster than searching an equivalent library of patches
79
Applications
– Data compression
– Data summarization and user interface
– Denoising
– Parts-based image modeling
– Segmentation
– …
80
Application: Image editing
81
Application: Denoising
[Figure: original image; noisy image (SNR = 13 dB); denoising using a mixture of Gaussians (SNR = 18.4 dB); denoising using an epitome (SNR = 19.2 dB); a Wiener filter gives 16.1 dB.]
Comparison: an 80x80 epitome vs a mixture of 1000 Gaussians. Epitome learning is 10 times faster per iteration of EM, and the epitome achieves a higher SNR and a better image.
82
Using epitomes to model images with multiple parts
[Figure: an appearance epitome and a shape epitome generate part appearances S1 and S2 and a mask M, which combine to form the image X.]
X = M*S1 + (1 - M)*S2 + noise
83
Application: Segmenting parts (Anitha Kannan)
[Figure: the model with an appearance epitome and a shape epitome, part appearances S1 and S2, mask M, and image x.]
84
Joint distribution and inference
Exact inference is intractable, so we minimize the free energy using a variational method. [Slide equations (shown as images): the joint distribution and the variational approximation to the posterior.]
85
Parameterizing the approximation to the posterior
[Slide equation (shown as an image): the approximation uses Dirac deltas, so that a single estimated value is shared by all patches containing pixel i.]
Idea: we should fit the relaxed model under the constraint that the generative patches agree in the overlapping regions.
86
The bound simplifies
[Slide equations (shown as images): the posterior and the bound.]
For this q function, the bound simplifies into a sum of quadratic terms that can be efficiently optimized by generalized EM.
87
Variational Inference
Iterate: [the update equations were shown as images on the slide].
88
Application: Segmenting parts
89
Another example
90
A tough synthetic example
A synthetic test case: a gray-level image with the same mean and variance in the two regions.
91
Video epitomes (Cheung, Frey, Jojic ICCV 2005)
92
Examples of video epitomes
– Temporally compressed epitome
– Spatially compressed epitome
93
Current work
– Replace sprites and masks with hierarchical patch-based models of local image and shape features
– Learn models from millions of images, taken from videos
94
Summary
Generative models are trained to explain many different aspects of the input image.
– Using an objective function like log P(image), a generative model benefits by accounting for all pixels in the image.
Contrast this with discriminative models trained in a supervised fashion (e.g., object recognition).
– Using an objective function like log P(class|image), a discriminative model benefits by accounting for pixel features that distinguish between classes.
95
Rejecting a common criticism of generative models
Critics of generative models often point to well-defined tasks (e.g., object recognition) and claim that discriminatively trained classifiers (e.g., SVMs) perform better. I would argue that these tasks are often not representative of real-world problems:
– Hand-written digit recognition
– Identifying cows in PASCAL or Caltech256 images
– Classifying gene function using microarray data
96
Accepting a common criticism of generative models
On specific supervised learning tasks, discriminative methods (e.g., SVMs) usually perform better.
97
Current work
– Replace sprites and masks with hierarchical patch-based models of local image and shape features
– Learn models from millions of images, taken from videos