1
Contrastive Divergence Learning
Geoffrey E. Hinton. A discussion led by Oliver Woodford.
2
Contents
- Maximum Likelihood learning
- Gradient descent based approach
- Markov Chain Monte Carlo sampling
- Contrastive Divergence
- Further topics for discussion:
  - Result biasing of Contrastive Divergence
  - Product of Experts
  - High-dimensional data considerations
3
Maximum Likelihood learning
Given:
- Probability model $p(x;\Theta) = \frac{1}{Z(\Theta)} f(x;\Theta)$
  - $\Theta$ - model parameters
  - $Z(\Theta)$ - the partition function, defined as $Z(\Theta) = \int f(x;\Theta)\,dx$
- Training data $X = \{x_k\}_{k=1}^K$
Aim: Find $\Theta$ that maximizes the likelihood of the training data:
$p(X;\Theta) = \prod_{k=1}^K \frac{1}{Z(\Theta)} f(x_k;\Theta)$
or that minimizes the negative log of the likelihood:
$E(X;\Theta) = \log Z(\Theta) - \frac{1}{K}\sum_{k=1}^K \log f(x_k;\Theta)$
Toy example: $f(x;\sigma) = e^{-x^2/2\sigma^2}$, with the known result $Z(\sigma) = \sqrt{2\pi}\,\sigma$.
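To make the toy example concrete, here is a minimal NumPy sketch of these quantities (the function names and the synthetic data are illustrative assumptions, not from the slides):

import numpy as np

def f(x, sigma):
    # Unnormalised model: f(x; sigma) = exp(-x^2 / (2 sigma^2))
    return np.exp(-x**2 / (2 * sigma**2))

def Z(sigma):
    # Partition function, known analytically for this toy model
    return np.sqrt(2 * np.pi) * sigma

def E(X, sigma):
    # Negative log-likelihood per data point:
    # E(X; sigma) = log Z(sigma) - <log f(x; sigma)>_X
    return np.log(Z(sigma)) - np.mean(np.log(f(X, sigma)))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 2.0, size=10_000)   # data from the true model, sigma = 2
print(E(X, 1.0), E(X, 2.0))             # E is lower at the true sigma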
4
Maximum Likelihood learning
Method: solve $\frac{\partial E(X;\Theta)}{\partial \Theta} = 0$, since the gradient is zero at the minimum. Let's assume that there is no algebraic solution…
$\frac{\partial E(X;\Theta)}{\partial \Theta} = \frac{\partial \log Z(\Theta)}{\partial \Theta} - \frac{1}{K}\sum_{i=1}^K \frac{\partial \log f(x_i;\Theta)}{\partial \Theta} = \frac{\partial \log Z(\Theta)}{\partial \Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_X$
$\langle \cdot \rangle_X$ is the expectation of $\cdot$ given the data distribution $X$.
Toy example: $\frac{\partial E(X;\sigma)}{\partial \sigma} = \frac{\partial \log(\sqrt{2\pi}\,\sigma)}{\partial \sigma} + \frac{\partial}{\partial \sigma}\left\langle \frac{x^2}{2\sigma^2} \right\rangle_X = \frac{1}{\sigma} - \frac{\langle x^2 \rangle_X}{\sigma^3}$
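For the toy model an algebraic solution does in fact exist, and it is worth keeping in mind as the reference answer for the numerical methods that follow:

\[
\frac{\partial E(X;\sigma)}{\partial \sigma} = \frac{1}{\sigma} - \frac{\langle x^2 \rangle_X}{\sigma^3} = 0
\quad\Longrightarrow\quad
\sigma = \sqrt{\langle x^2 \rangle_X},
\]

the familiar ML estimate of the standard deviation of a zero-mean Gaussian.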
5
Gradient descent-based approach
Move a fixed step size, $\eta$, in the direction of steepest gradient (not a line search; see why later). This gives the following parameter update equation:
$\Theta_{t+1} = \Theta_t - \eta \frac{\partial E(X;\Theta)}{\partial \Theta} = \Theta_t - \eta \left( \frac{\partial \log Z(\Theta)}{\partial \Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_X \right)$
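As a sketch of this update on the toy model, where both gradient terms are still available analytically (the step size, starting value, and iteration count are arbitrary choices, not from the slides):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 2.0, size=10_000)   # training data, true sigma = 2

eta = 0.1     # fixed step size
sigma = 0.5   # initial parameter guess
for t in range(500):
    # dE/dsigma = d(log Z)/dsigma - <d(log f)/dsigma>_X
    #           = 1/sigma - <x^2>_X / sigma^3  for the toy model
    grad = 1.0 / sigma - np.mean(X**2) / sigma**3
    sigma -= eta * grad
print(sigma)   # converges towards sqrt(<x^2>_X), roughly 2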
6
Gradient descent-based approach
Recall $Z(\Theta) = \int f(x;\Theta)\,dx$. Sometimes this integral will be algebraically intractable, meaning we can calculate neither $\frac{\partial \log Z(\Theta)}{\partial \Theta}$ nor $E(X;\Theta)$ (hence no line search). However, with some clever substitution:
$\frac{\partial \log Z(\Theta)}{\partial \Theta} = \frac{1}{Z(\Theta)} \int \frac{\partial f(x;\Theta)}{\partial \Theta}\,dx = \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)}$
so
$\Theta_{t+1} = \Theta_t + \eta \left( \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_X - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)} \right)$
where $\left\langle \cdot \right\rangle_{p(x;\Theta)}$ can be estimated numerically.
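Written out in full, the substitution is the standard log-derivative identity:

\[
\frac{\partial \log Z(\Theta)}{\partial \Theta}
= \frac{1}{Z(\Theta)} \int \frac{\partial f(x;\Theta)}{\partial \Theta}\,dx
= \int \frac{f(x;\Theta)}{Z(\Theta)}\,\frac{\partial \log f(x;\Theta)}{\partial \Theta}\,dx
= \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)},
\]

using $\frac{\partial f}{\partial \Theta} = f \frac{\partial \log f}{\partial \Theta}$ and $p(x;\Theta) = f(x;\Theta)/Z(\Theta)$.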
7
Markov Chain Monte Carlo sampling
To estimate $\left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{p(x;\Theta)}$ we must draw samples from $p(x;\Theta)$. Since $Z(\Theta)$ is unknown, we cannot draw samples directly from a cumulative distribution curve. Markov Chain Monte Carlo (MCMC) methods turn random samples into samples from the proposed distribution without knowing $Z(\Theta)$. Metropolis algorithm:
- Perturb the samples, e.g. $x_k' = x_k + \text{randn}(\text{size}(x_k))$
- Reject the move (keep $x_k$) if $\frac{p(x_k';\Theta)}{p(x_k;\Theta)} < \text{rand}(1)$; note that this ratio equals $\frac{f(x_k';\Theta)}{f(x_k;\Theta)}$, so $Z(\Theta)$ is not needed
- Repeat the cycle for all samples until the distribution stabilizes
Stabilization takes many cycles, and there is no accurate criterion for determining when it has occurred.
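A minimal NumPy sketch of this Metropolis scheme for the toy model, following the slide's pseudocode (the target sigma, sample count, and cycle count are illustrative assumptions):

import numpy as np

def f(x, sigma):
    # Unnormalised model; the ratio f(x')/f(x) equals p(x')/p(x)
    return np.exp(-x**2 / (2 * sigma**2))

def metropolis_cycle(x, sigma, rng):
    # Perturb every sample with unit Gaussian noise
    x_new = x + rng.standard_normal(x.shape)
    # Reject a move if p(x_new)/p(x) < rand(1), i.e. keep the old sample
    accept = f(x_new, sigma) / f(x, sigma) >= rng.random(x.shape)
    return np.where(accept, x_new, x)

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, size=10_000)   # arbitrary starting samples
for _ in range(1_000):                  # many cycles until stabilization
    x = metropolis_cycle(x, 3.0, rng)
print(x.std())   # approaches the target sigma = 3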
8
Markov Chain Monte Carlo sampling
Let us use the training data, $X^0$, as the starting point for our MCMC sampling. Notation: $X^0$ - the training data; $X^n$ - the training data after $n$ cycles of MCMC; $X^\infty$ - samples from the proposed distribution with parameters $\Theta$. Our parameter update equation becomes:
$\Theta_{t+1} = \Theta_t + \eta \left( \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^0} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^\infty} \right)$
9
Contrastive divergence
Let us make the number of MCMC cycles per iteration small, say even 1. Our parameter update equation is now:
$\Theta_{t+1} = \Theta_t + \eta \left( \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^0} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^1} \right)$
Intuition: one MCMC cycle is enough to move the data from the target distribution towards the proposed distribution, and so suggests in which direction the proposed distribution should move to better model the training data. A runnable sketch follows below.
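A CD-1 sketch for the toy model, reusing the Metropolis cycle from the earlier sketch as the single MCMC step (the step size, noise scale, and iteration count are illustrative assumptions):

import numpy as np

def f(x, sigma):
    # Unnormalised toy model
    return np.exp(-x**2 / (2 * sigma**2))

def dlogf_dsigma(x, sigma):
    # d(log f)/d(sigma) = x^2 / sigma^3 for the toy model
    return x**2 / sigma**3

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 2.0, size=10_000)   # training data, true sigma = 2

eta, sigma = 0.1, 0.5
for t in range(500):
    # One Metropolis cycle starting from the data: X^0 -> X^1
    x_new = X0 + rng.standard_normal(X0.shape)
    accept = f(x_new, sigma) / f(X0, sigma) >= rng.random(X0.shape)
    X1 = np.where(accept, x_new, X0)
    # CD-1 update: <dlogf/dsigma>_X0 - <dlogf/dsigma>_X1
    sigma += eta * (dlogf_dsigma(X0, sigma).mean() - dlogf_dsigma(X1, sigma).mean())
print(sigma)   # settles near the ML answer sqrt(<x^2>_X0), roughly 2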
10
Contrastive divergence bias
ML learning is equivalent to minimizing $X^0 \| X^\infty$, where $P \| Q = \int p(x) \log \frac{p(x)}{q(x)}\,dx$ (the Kullback-Leibler divergence). CD instead attempts to minimize $X^0 \| X^\infty - X^1 \| X^\infty$:
$\frac{\partial}{\partial \Theta}\left( X^0 \| X^\infty - X^1 \| X^\infty \right) = \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^1} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial \Theta} \right\rangle_{X^0} - \frac{\partial X^1}{\partial \Theta} \frac{\partial \left( X^1 \| X^\infty \right)}{\partial X^1}$
Usually the final term is small and is ignored, giving the CD update above, but neglecting it can sometimes bias the results. See "On Contrastive Divergence Learning", Carreira-Perpiñán & Hinton, AISTATS 2005, for more details.
11
Product of Experts
12
Dimensionality issues