MAD-Bayes: MAP-based Asymptotic Derivations from Bayes

1 MAD-Bayes: MAP-based Asymptotic Derivations from Bayes
Michael I. Jordan (University of California, Berkeley; INRIA), May 11, 2013. Acknowledgments: Brian Kulis, Tamara Broderick.

2-4 Statistical Inference and Big Data
Two major needs: models with open-ended complexity, and scalable algorithms that allow those models to be fit to data.
- In Bayesian inference the focus is on the models: the burgeoning literature on Bayesian nonparametrics provides stochastic processes for representing flexible data structures, but the algorithmic choices are limited.
- So Big Data research hasn't made much use of Bayes and is instead optimization-based, but there the model choices tend to be limited.

5 Bayesian Nonparametric Modeling
Examples of stochastic processes used in Bayesian nonparametrics include distributions on:
- directed trees of unbounded depth and unbounded fan-out
- partitions
- grammars
- Markov processes with unbounded state spaces
- infinite-dimensional matrices
- functions (smooth and non-smooth)
- copulae
- distributions
Power laws arise naturally in these distributions. Hierarchical modeling uses these stochastic processes as building blocks.

6-10 The Optimization Perspective
- Write down a loss function and a regularizer.
- Find scalable algorithms that minimize the sum of these terms.
- Prove something about these algorithms.
- The connection to model-based inference is often in the analysis, but is sometimes in the design: e.g., Bayesian ideas are sometimes used to inspire the design of the regularizer.
- Where does the loss function come from? Gauss, Huber, Fisher, …
- It's all very parametric, and the transition to nonparametrics is a separate step.

11-13 This Talk
- Bayesian nonparametrics meets optimization: a flexible, scalable modeling framework that gives rise to new loss functions and regularizers that are naturally nonparametric, with no recourse to MCMC, SMC, etc.
- Inspiration: the venerable, scalable K-means algorithm can be derived as the limit of an EM algorithm for fitting a mixture model.
- We do something similar in spirit, taking limits of various Bayesian nonparametric models: Dirichlet process mixtures, hierarchical Dirichlet process mixtures, beta processes and hierarchical beta processes.

14 K-means Clustering
- Represent the data set in terms of K clusters, each of which is summarized by a prototype $\mu_k$.
- Each data point is assigned to one of the K clusters, represented by allocation variables $z_{ik} \in \{0, 1\}$ such that for all data indices $i$ we have $\sum_{k=1}^{K} z_{ik} = 1$.
- Example: 4 data points and 3 clusters (an allocation matrix with 4 rows and 3 columns, with a single 1 in each row).

15 K-means Clustering
- Cost function: the sum-of-squared distances from each data point to its assigned prototype:
  $$J \;=\; \sum_{i=1}^{N} \sum_{k=1}^{K} z_{ik} \, \| x_i - \mu_k \|^2.$$
- The K-means algorithm is coordinate descent on this cost function.

16 Coordinate Descent
- Step 1: Fix values for $\{\mu_k\}$ and minimize $J$ w.r.t. $\{z_{ik}\}$: assign each data point to the nearest prototype.
- Step 2: Fix values for $\{z_{ik}\}$ and minimize $J$ w.r.t. $\{\mu_k\}$; this gives $\mu_k = \frac{\sum_i z_{ik} x_i}{\sum_i z_{ik}}$.
- Iterate these two steps.
- Convergence is guaranteed since there are a finite number of possible settings for the allocations.
- It can only find local minima, so we should start the algorithm with many different initial settings.
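
For concreteness, a minimal NumPy sketch of these two coordinate-descent steps (the function name, initialization, and stopping rule are illustrative choices, not from the talk):

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Plain K-means: alternate nearest-prototype assignment and mean updates."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]   # initialize prototypes
    for _ in range(n_iters):
        # Step 1: assign each point to its nearest prototype.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, K) squared distances
        z = d2.argmin(axis=1)
        # Step 2: recompute each prototype as the mean of its assigned points.
        new_mu = np.array([X[z == k].mean(axis=0) if np.any(z == k) else mu[k]
                           for k in range(K)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return mu, z
```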

17-25 (figure-only slides; no transcript text)

26 From Gaussian Mixtures to K-means
- A Gaussian mixture model: $p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \sigma^2 I)$.
- Set the mixing proportions to be equal: $\pi_k = 1/K$.
- Write down the EM algorithm for fitting this model.
- Take $\sigma^2$ to zero and recover the K-means algorithm: the E step of EM is Step 1 of K-means; the M step of EM is Step 2 of K-means.
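
To make the limiting argument explicit, here is the standard calculation for a Gaussian mixture with equal mixing proportions and shared isotropic covariance (a reconstruction of the usual argument, not text from the slides). The E-step responsibilities harden into nearest-prototype assignments,

$$r_{ik} \;=\; \frac{\exp\!\big(-\|x_i - \mu_k\|^2 / 2\sigma^2\big)}{\sum_{j=1}^{K} \exp\!\big(-\|x_i - \mu_j\|^2 / 2\sigma^2\big)} \;\xrightarrow{\;\sigma^2 \to 0\;}\; \begin{cases} 1 & \text{if } k = \arg\min_j \|x_i - \mu_j\|^2, \\ 0 & \text{otherwise,} \end{cases}$$

and the M-step update $\mu_k = \sum_i r_{ik} x_i \big/ \sum_i r_{ik}$, evaluated at these hard responsibilities, is exactly the K-means prototype update.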

27 The K in K-means
- What if K is not known? A challenging model selection problem; the algorithm itself is silent on the problem.
- The Gaussian mixture model perspective brings the tools of Bayesian model selection to bear in principle, but not in the limit.
- How about starting with Dirichlet process mixtures and Chinese restaurant processes instead of finite mixture models?

28 Chinese Restaurant Process (CRP)
- A random process in which customers sit down in a Chinese restaurant with an infinite number of tables.
- The first customer sits at the first table.
- Each subsequent customer (the $(n+1)$st, say) sits at a table drawn from the following distribution:
  $$P(\text{occupied table } k \mid \mathcal{F}_n) \propto n_k, \qquad P(\text{next unoccupied table} \mid \mathcal{F}_n) \propto \alpha,$$
  where $n_k$ is the number of customers currently at table $k$ and where $\mathcal{F}_n$ denotes the state of the restaurant after $n$ customers have been seated.
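
A small simulation of this seating process (an illustrative helper; alpha is the CRP concentration parameter):

```python
import numpy as np

def sample_crp(n_customers, alpha, seed=0):
    """Return a list of table assignments generated by the CRP."""
    rng = np.random.default_rng(seed)
    tables = []          # tables[i] = index of the table chosen by customer i
    counts = []          # counts[k] = number of customers at table k
    for n in range(n_customers):
        # Join table k w.p. counts[k]/(alpha+n); open a new table w.p. alpha/(alpha+n).
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)     # open a new table
        else:
            counts[k] += 1
        tables.append(k)
    return tables

# Example: larger alpha tends to produce more occupied tables.
print(len(set(sample_crp(1000, alpha=1.0))), len(set(sample_crp(1000, alpha=10.0))))
```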

29 The CRP and Clustering
- Data points are customers; tables are mixture components: the CRP defines a prior distribution on the partitioning of the data and on the number of tables.
- This prior can be completed with:
  - a likelihood, e.g., associate a parameterized probability distribution with each table;
  - a prior for the parameters: the first customer to sit at table $k$ chooses the parameter vector $\theta_k$ for that table from a prior $G_0$.
- We want to write out all of these probabilities and then take a scale parameter to zero.

30 CRP Prior, Gaussian Likelihood, Conjugate Prior

31 The CRP Prior
- Let $\pi$ denote a partition of the integers 1 through $N$, and let $K = |\pi|$ denote its number of blocks, with block sizes $n_1, \ldots, n_K$.
- Then, under the CRP, we have:
  $$P(\pi) \;=\; \frac{\alpha^{K} \prod_{k=1}^{K} (n_k - 1)!}{\prod_{i=1}^{N} (\alpha + i - 1)}.$$
- This function (the EPPF, the exchangeable partition probability function) is a function only of the cardinalities of the partition; this implies exchangeability.
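
The formula is easy to check in code; an illustrative helper computing the log EPPF from the block sizes:

```python
import math

def crp_log_eppf(block_sizes, alpha):
    """Log probability under the CRP of a partition with the given block sizes."""
    n = sum(block_sizes)
    k = len(block_sizes)
    log_num = k * math.log(alpha) + sum(math.lgamma(nk) for nk in block_sizes)  # (n_k-1)! = Gamma(n_k)
    log_den = sum(math.log(alpha + i) for i in range(n))                        # prod_{i=1}^{N} (alpha+i-1)
    return log_num - log_den

# Example: the two-block partition {1,2,3},{4} of N=4 with alpha=1.
print(crp_log_eppf([3, 1], alpha=1.0))
```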

32 The Joint Probability
- Encode the partition with allocation variables $z = (z_1, \ldots, z_N)$, where $z_i$ indexes the cluster of data point $i$.
- The joint probability of the allocations and the data is the product of the EPPF and the usual mixture model likelihood, where $K$ is now random; i.e., we obtain a joint probability
  $$p(x, z) \;=\; P(\pi(z)) \, p(x \mid z).$$
- And we can find the MAP estimate of a clustering via:
  $$\hat{z} \;=\; \arg\max_{z} \, p(z \mid x).$$

33 Small Variance Asymptotics
- Now let the likelihood be Gaussian, $x_i \mid z_i \sim \mathcal{N}(\mu_{z_i}, \sigma^2 I)$, and take the variance to zero.
- We do this analytically by picking a rate constant $\lambda^2$ and reparameterizing the concentration parameter:
  $$\alpha \;=\; \exp\!\big(-\lambda^2 / 2\sigma^2\big).$$
- And letting $\sigma^2$ go to zero we get:
  $$\arg\min_{z, \{\mu_k\}} \; \sum_{k=1}^{K} \sum_{i : z_i = k} \| x_i - \mu_k \|^2 \;+\; \lambda^2 K,$$
  i.e., a penalized form of the K-means objective.
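
To spell out the limit (a sketch under the assumptions on this slide; constants differ slightly across conventions, e.g. a penalty of $\lambda^2(K-1)$ rather than $\lambda^2 K$, which does not change the minimizer): multiply the negative log joint probability by $2\sigma^2$,

$$-2\sigma^2 \log p(x, z, \{\mu_k\}) \;=\; \sum_{i=1}^{N} \|x_i - \mu_{z_i}\|^2 \;-\; 2\sigma^2 K \log \alpha \;+\; O(\sigma^2 \log \sigma^2).$$

With $\alpha = \exp(-\lambda^2/2\sigma^2)$ the middle term equals $\lambda^2 K$, and the remaining terms (Gaussian normalizers and the factorial terms of the EPPF) vanish as $\sigma^2 \to 0$, leaving the penalized K-means objective above.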

34 Coordinate Descent: DP Means
- Reassign a point to the cluster corresponding to the closest mean, unless the closest cluster has squared Euclidean distance greater than $\lambda^2$; in this case, start a new cluster.
- Given the cluster assignments, perform Gibbs moves on all the means, which amounts to sampling from the posterior based on the prior and all observations in a cluster (in the small-variance limit this reduces to setting each mean to the average of the observations in its cluster).
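
A minimal sketch of a DP-means-style pass implementing the two moves above in the small-variance limit (NumPy; the function name, the single-cluster initialization, and the convergence check are illustrative choices, not from the talk):

```python
import numpy as np

def dp_means(X, lam2, n_iters=100):
    """DP-means-style coordinate descent: a new cluster opens when no mean is within lam2."""
    mu = [X.mean(axis=0)]                 # start with a single cluster
    z = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        changed = False
        # Assignment step: nearest mean, or a new cluster if all means are farther than lam2.
        for i, x in enumerate(X):
            d2 = np.array([np.sum((x - m) ** 2) for m in mu])
            k = int(d2.argmin())
            if d2[k] > lam2:
                mu.append(x.copy())       # open a new cluster centered at this point
                k = len(mu) - 1
            if z[i] != k:
                z[i], changed = k, True
        # Mean step: each non-empty cluster mean becomes the average of its points.
        mu = [X[z == k].mean(axis=0) if np.any(z == k) else m
              for k, m in enumerate(mu)]
        if not changed:
            break
    return np.array(mu), z
```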

35 The CRP and Exchangeability
- The CRP is a distribution on partitions; it is an exchangeable distribution on partitions.
- By De Finetti's theorem, there must exist an underlying random measure such that the CRP is obtained by integrating out that random measure.
- That random measure turns out to be the Dirichlet process (e.g., Blackwell & MacQueen, 1973).

36 The De Finetti Theorem
- An infinite sequence of random variables $(x_1, x_2, \ldots)$ is called infinitely exchangeable if the distribution of any finite subsequence is invariant to permutation.
- Theorem: infinite exchangeability holds if and only if
  $$p(x_1, \ldots, x_N) \;=\; \int \prod_{i=1}^{N} G(x_i) \, P(dG) \qquad \text{for all } N,$$
  for some random measure $G$ (with mixing distribution $P$).

37 Random Measures and Their Marginals
The De Finetti random measure is known for a number of interesting combinatorial stochastic processes:
- Dirichlet process => Chinese restaurant process (Polya urn)
- Beta process => Indian buffet process
- Hierarchical Dirichlet process => Chinese restaurant franchise
- HDP-HMM => infinite HMM
- Nested Dirichlet process => nested Chinese restaurant process

38 Completely Random Measures (Kingman, 1967)
- Completely random measures are measures on a set $\Omega$ that assign independent mass to nonintersecting subsets of $\Omega$: e.g., Poisson processes, gamma processes, beta processes, compound Poisson processes, and limits thereof.
- The Dirichlet process is not a completely random measure, but it is a normalized gamma process.
- Completely random measures are discrete w.p. 1 (up to a possible deterministic continuous component).
- Completely random measures are random measures, not necessarily random probability measures.

39 Completely Random Measures (Kingman, 1967)
(Figure: a draw from a completely random measure, which assigns independent mass to nonintersecting subsets of $\Omega$.)

40 Completely Random Measures (Kingman, 1967)
- Consider a Poisson random measure on $\Omega \otimes \mathbb{R}^{+}$ with rate function specified as a product measure.
- Sample from this Poisson process and connect the samples vertically to their coordinates in $\Omega$.

41 Gamma Process
- The gamma process is a CRM for which the rate function is given as follows (on $\Omega \otimes \mathbb{R}^{+}$):
  $$\nu(d\omega, dg) \;=\; c \, g^{-1} e^{-c g} \, dg \, G_0(d\omega).$$
- Draw a sample $\{(\omega_k, g_k)\}_{k=1}^{\infty}$ from a Poisson random measure with this rate measure.
- And the resulting random measure can be written simply as:
  $$G \;=\; \sum_{k=1}^{\infty} g_k \, \delta_{\omega_k}.$$

42-43 Dirichlet Process
The Dirichlet process is a normalized gamma process. (Figure-only illustrations on these two slides.)

44 Dirichlet Process Marginals
- Consider the following hierarchy:
  $$G \sim \mathrm{DP}(\alpha, G_0), \qquad \theta_i \mid G \;\overset{\text{iid}}{\sim}\; G, \quad i = 1, \ldots, N.$$
- The variables $\theta_1, \ldots, \theta_N$ are clearly exchangeable.
- The partition structure that they induce (ties among the $\theta_i$) is exactly the Chinese restaurant process.
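
One way to see this numerically is to draw $G$ by truncated stick-breaking and sample from it; the tie pattern among the draws behaves like CRP seating. A sketch, with the truncation level as an explicit approximation:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Approximate DP weights via truncated stick-breaking."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

def dp_partition(n, alpha, truncation=1000, seed=0):
    """Sample n draws from a (truncated) DP and return which atom each draw hit."""
    rng = np.random.default_rng(seed)
    w = stick_breaking_weights(alpha, truncation, rng)
    w = w / w.sum()                       # renormalize the truncated weights
    return rng.choice(truncation, size=n, p=w)

labels = dp_partition(n=500, alpha=2.0)
print("number of distinct clusters:", len(np.unique(labels)))
```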

45 Dirichlet Process Mixture Models

46 Multiple Estimation Problems
- We often face multiple, related estimation problems.
- E.g., multiple Gaussian means: $x_{ij} \sim \mathcal{N}(\theta_i, \sigma^2)$ for groups $i = 1, \ldots, m$ and observations $j = 1, \ldots, n_i$.
- Maximum likelihood: $\hat{\theta}_i = \bar{x}_i = \frac{1}{n_i} \sum_j x_{ij}$.
- Maximum likelihood often doesn't work very well; we want to "share statistical strength."

47 Hierarchical Bayesian Approach
- The Bayesian or empirical Bayesian solution is to view the parameters $\theta_i$ as random variables, related via an underlying shared variable (a hyperparameter).
- Given this overall model, posterior inference yields shrinkage: the posterior mean for each $\theta_i$ combines data from all of the groups.

48 Hierarchical Modeling
(Figure: the plate notation for the hierarchical model and the equivalent unrolled graphical model.)

49 Hierarchical Dirichlet Process Mixtures

50 Application: Protein Modeling
- A protein is a folded chain of amino acids.
- The backbone of the chain has two degrees of freedom per amino acid (phi and psi angles).
- Empirical plots of phi and psi angles are called Ramachandran diagrams.

51 Application: Protein Modeling
- We want to model the density in the Ramachandran diagram to provide an energy term for protein folding algorithms.
- We actually have a linked set of Ramachandran diagrams, one for each amino acid neighborhood.
- We thus have a linked set of density estimation problems.

52 Protein Folding (cont.)
We have a linked set of Ramachandran diagrams, one for each amino acid neighborhood.

53 Protein Folding (cont.)

54 Chinese Restaurant Franchise (CRF)
(Figure: a global menu of dishes shared across Restaurants 1, 2, and 3, each restaurant with its own tables.)

55 Small Variance Asymptotics
- Define group-specific allocation variables $z_{ji}$ for groups $j = 1, \ldots, J$.
- As before, let the variance in the likelihood go to zero; the resulting posterior is asymptotically a K-means-style objective in which the sum of squared distances is penalized both for the number of local clusters and for the number of global clusters.
- Leads to a simple coordinate descent algorithm with two levels of clustering decisions.

56 Dirichlet Processes vs. Beta Processes
- The Dirichlet process yields a classification of each data point into a single class (a "cluster").
- The beta process allows each data point to belong to multiple classes (a "feature vector").
- Essentially, the Dirichlet process yields a single coin with an infinite number of sides.
- Essentially, the beta process yields an infinite collection of coins with mostly small probabilities, and the Bernoulli process tosses those coins to yield a binary feature vector.

57 Beta Processes
- For the beta process, the rate function is given as follows (on the space $\Omega \otimes [0, 1]$):
  $$\nu(d\omega, dp) \;=\; c \, p^{-1} (1 - p)^{c - 1} \, dp \, B_0(d\omega),$$
  where the $p$ factor is a degenerate Beta(0, c) distribution and $B_0$ is the base measure.
- And the resulting random measure can be written simply as:
  $$B \;=\; \sum_{k=1}^{\infty} p_k \, \delta_{\omega_k}.$$

58 Beta Processes

59 Beta Process and Bernoulli Process

60 Indian Buffet Process (IBP) (Griffiths & Ghahramani, 2006)
- Indian restaurant with infinitely many dishes in a buffet line.
- Customers 1 through $N$ enter the restaurant:
  - the first customer samples $\mathrm{Poisson}(\alpha)$ dishes;
  - the $n$th customer samples a previously sampled dish $k$ with probability $m_k / n$ (where $m_k$ is the number of previous customers who sampled dish $k$), then samples $\mathrm{Poisson}(\alpha / n)$ new dishes.
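
A short simulation of this buffet process (an illustrative helper; returns the binary customer-by-dish matrix):

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    """Simulate the Indian buffet process; returns a binary (customers x dishes) matrix."""
    rng = np.random.default_rng(seed)
    dish_counts = []                     # m_k: how many customers have sampled dish k
    rows = []
    for n in range(1, n_customers + 1):
        row = []
        for m_k in dish_counts:
            row.append(rng.random() < m_k / n)      # revisit dish k w.p. m_k / n
        n_new = rng.poisson(alpha / n)              # then sample Poisson(alpha/n) new dishes
        row.extend([True] * n_new)
        dish_counts = [m + took for m, took in zip(dish_counts, row)] + [1] * n_new
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(20, alpha=3.0)
print(Z.shape, Z.sum(axis=0))            # dish popularity counts
```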

61 Beta Process Marginals
(Thibaux & Jordan, 2007) Theorem: The beta process is the De Finetti mixing measure underlying the Indian buffet process (IBP)

62 Hierarchical Beta Processes
A hierarchical beta process is a beta process whose base measure is itself random and drawn from a beta process

63 Towards the BP-Means Algorithm
- Let $Z$ denote the Indian buffet matrix (a binary feature-allocation matrix).
- We again need to compute the joint probability $p(X, Z)$.
- And perform small-variance asymptotics on $-\log p(X, Z)$.
- To obtain a cost function to which we can apply coordinate descent.

64 The Exchangeable Feature Probability Function (EFPF)
- How do we compute $p(Z)$?
- Broderick, Jordan and Pitman (2013) develop a general approach to computing such exchangeable feature probability functions (EFPFs).
- Applied to the Indian buffet process, it yields:
  $$P(Z) \;=\; \frac{\alpha^{K_+}}{\prod_{h} K_h!} \exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N - m_k)! \, (m_k - 1)!}{N!},$$
  where $K_+$ is the number of sampled dishes, $m_k$ is the number of customers who sampled dish $k$, $H_N = \sum_{i=1}^{N} 1/i$, and $K_h$ counts the features with identical binary history $h$.

65 The Likelihood
- What about $p(X \mid Z)$? Many possible choices; a popular one is the linear-Gaussian model:
  $$X = Z A + E, \qquad E_{ij} \sim \mathcal{N}(0, \sigma^2),$$
  i.e., each observation is a sum of the features it possesses, plus Gaussian noise.

66 Small-Variance Asymptotics
- Now take $\sigma^2$ to zero; this yields the objective
  $$\min_{Z, A} \;\; \mathrm{tr}\!\big[(X - Z A)^{\top} (X - Z A)\big] \;+\; \lambda^2 K_+.$$
- This yields a coordinate descent algorithm that we refer to as "BP-Means."
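
A hedged sketch of a BP-means-style coordinate descent on this objective, assuming the linear-Gaussian likelihood of slide 65. The new-feature move used here (open a feature holding a point's residual whenever that beats the $\lambda^2$ penalty) is one natural greedy choice, not a transcription of the authors' algorithm:

```python
import numpy as np

def bp_means(X, lam2, n_iters=50):
    """Coordinate descent on ||X - Z A||_F^2 + lam2 * K for binary Z."""
    N, d = X.shape
    Z = np.ones((N, 1), dtype=int)                  # start with one shared feature
    A = np.linalg.lstsq(Z, X, rcond=None)[0]        # feature values, shape (K, d)
    for _ in range(n_iters):
        for i in range(N):
            # Update Z[i, k] one entry at a time, keeping whichever value lowers the residual.
            for k in range(Z.shape[1]):
                best_val, best_cost = Z[i, k], np.inf
                for v in (0, 1):
                    Z[i, k] = v
                    cost = np.sum((X[i] - Z[i] @ A) ** 2)
                    if cost < best_cost:
                        best_val, best_cost = v, cost
                Z[i, k] = best_val
            # Consider opening a new feature that absorbs this point's residual.
            resid = X[i] - Z[i] @ A
            if np.sum(resid ** 2) > lam2:
                Z = np.hstack([Z, np.zeros((N, 1), dtype=int)])
                Z[i, -1] = 1
                A = np.vstack([A, resid])
        # Update A by least squares given Z.
        A = np.linalg.lstsq(Z, X, rcond=None)[0]
    return Z, A
```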

