Latent Dirichlet Analysis


CS246: Latent Dirichlet Analysis

LSI: LSI uses SVD to find the best rank-K approximation. The result is difficult to interpret, especially with negative numbers. Q: Can we develop a more interpretable method?
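As a concrete illustration (my addition, not part of the original slides), a minimal numpy sketch of the rank-K approximation LSI computes, on a small made-up term-document matrix:

    import numpy as np

    # Hypothetical 5-term x 4-document count matrix (rows = terms, columns = documents).
    X = np.array([[2., 0., 1., 0.],
                  [1., 0., 0., 0.],
                  [0., 3., 0., 2.],
                  [0., 1., 0., 1.],
                  [1., 0., 2., 0.]])

    K = 2                                          # target rank
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_k = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]    # best rank-K approximation in Frobenius norm

    print(np.round(X_k, 2))                        # entries can be negative, hence hard to interpret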

Theory of LDA (Model-based Approach): Develop a simplified model of how users write a document based on topics. Fit the model to the existing corpus and "reverse engineer" the topics used in each document. Q: How do we write a document? A: (1) Pick the topic(s). (2) Start writing on the topic(s) with related terms.

Two Probability Vectors: For every document d, we assume that the user first picks the topics to write about. P(z|d): the probability of picking topic z when the user writes each word in document d (the document-topic vector of d). We also assume that every topic is associated with each term with a certain probability. P(w|z): the probability of picking the term w when the user writes on topic z (the topic-term vector of z).

Probabilistic Topic Model: There exist T topics. The topic-term vector for each topic is set before any document is written, i.e., P(wj|zi) is set for every zi and wj. Then for every document d, the user decides the topics to write on, i.e., P(zi|d). For each word in d, the user selects a topic zi with probability P(zi|d), then selects a word wj with probability P(wj|zi).

Probabilistic Document Model [figure: topic-term vectors P(w|z) for Topic 1 (bank, loan, money) and Topic 2 (river, stream, bank), and document-topic weights P(z|d) for three documents: DOC 1 is generated purely from Topic 1 (money bank loan bank money ...), DOC 2 is a 0.5/0.5 mixture (money river bank stream bank ...), and DOC 3 is generated purely from Topic 2 (river stream river bank stream ...); superscripts in the original figure mark each word's generating topic]

Example: Calculating Probability. z1 = {w1: 0.8, w2: 0.1, w3: 0.1}; z2 = {w1: 0.1, w2: 0.2, w3: 0.7}. d's topics are {z1: 0.9, z2: 0.1}. d has three terms {w3 (drawn from z2), w1 (drawn from z1), w2 (drawn from z1)}. Q: What is the probability that a user will write such a document?
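A worked answer (my addition) under the reading above, where each word's generating topic is given: multiply P(topic | d) by P(word | topic) for each word,
\[ P = \underbrace{(0.1 \cdot 0.7)}_{w_3 \text{ from } z_2} \cdot \underbrace{(0.9 \cdot 0.8)}_{w_1 \text{ from } z_1} \cdot \underbrace{(0.9 \cdot 0.1)}_{w_2 \text{ from } z_1} \approx 0.0045. \]
If the topic of each word is instead unknown, each factor becomes a sum over topics, e.g. the first factor is \(0.9 \cdot 0.1 + 0.1 \cdot 0.7 = 0.16\) for \(w_3\).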

Corpus Generation Probability. T: # topics; D: # documents; M: # words per document. Probability of generating the corpus C:
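The formula itself did not survive the transcript; a standard way to write the corpus probability under this model (an assumption about what the slide showed) is
\[ P(C) = \prod_{i=1}^{D} \prod_{j=1}^{M} \sum_{k=1}^{T} P(z_k \mid d_i)\, P(w_{ij} \mid z_k), \]
where \(w_{ij}\) is the j-th word of document \(d_i\). If each word's topic assignment \(z_{ij}\) is treated as given, the inner sum reduces to the single factor \(P(z_{ij} \mid d_i)\, P(w_{ij} \mid z_{ij})\).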

Generative Model vs Inference (1) [figure: the same diagram as the Probabilistic Document Model slide, read in the generative direction: known P(w|z) and P(z|d) produce the words of DOC 1, DOC 2, and DOC 3]

Generative Model vs Inference (2) [figure: the same diagram read in the inference direction: only the words of DOC 1 (money bank loan bank money ...), DOC 2 (money river bank stream bank ...), and DOC 3 (river stream river bank stream ...) are observed; the topic-term vectors, document-topic weights, and per-word topic assignments are all unknown, shown as "?"]

Probabilistic Latent Semantic Indexing (pLSI). Basic idea: we pick the P(zj|di), P(wk|zj), and zij values that maximize the corpus generation probability, i.e., maximum-likelihood estimation (MLE). More discussion later on how to compute the P(zj|di), P(wk|zj), and zij values that maximize this probability.

Problem of pLSI. Q: 1M documents, 1,000 topics, 1M distinct words, 1,000 words/doc. How much input data? How many variables do we have to estimate? (A worked count follows below.) Q: Too much freedom. How can we avoid the overfitting problem? A: Add constraints to reduce the degrees of freedom.
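One way to count (my addition, not spelled out on the slide): the input is roughly 1M documents × 1,000 words/document = 10^9 observed word occurrences. The parameters to estimate are P(z|d), 1M documents × 1,000 topics = 10^9 values; P(w|z), 1,000 topics × 1M words = 10^9 values; plus a topic assignment zij for each of the 10^9 word occurrences. The number of free parameters is thus on the order of the data itself, which is why unconstrained maximum-likelihood fitting overfits.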

Latent Dirichlet Analysis (LDA). When term probabilities are selected for each topic, the topic-term probability vector (P(w1|zj), …, P(wW|zj)) is sampled randomly from a Dirichlet distribution. When a user selects topics for a document, the document-topic probability vector (P(z1|d), …, P(zT|d)) is sampled randomly from a Dirichlet distribution.

What is the Dirichlet Distribution? Multinomial distribution: given the probability pi of each event ei, what is the probability that each event ei occurs ⍺i times after n trials? We assume the pi's; the distribution assigns a probability to the ⍺i's. Dirichlet distribution: the "inverse" of the multinomial distribution. We assume the ⍺i's; the distribution assigns a probability to the pi's.
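For reference (these formulas are not in the transcript; the multinomial one is standard, and the Dirichlet form matches the slides' own usage as noted in the "Minor Correction" slide below): the multinomial probability of observing counts \(\alpha_1,\dots,\alpha_k\) in n trials is
\[ P(\alpha_1,\dots,\alpha_k \mid p_1,\dots,p_k, n) = \frac{n!}{\alpha_1!\cdots\alpha_k!}\, p_1^{\alpha_1}\cdots p_k^{\alpha_k}, \]
and the Dirichlet density over \((p_1,\dots,p_k)\), in the form used at this point in the slides, mirrors it: it is proportional to \(p_1^{\alpha_1}\cdots p_k^{\alpha_k}\).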

Dirichlet Distribution. Q: Given ⍺1, ⍺2, …, ⍺k, what are the most likely p1, p2, …, pk values?
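A possible answer (my addition): maximizing a density proportional to \(\prod_i p_i^{\alpha_i}\) subject to \(\sum_i p_i = 1\) gives \(p_i = \alpha_i / \sum_j \alpha_j\); with the standard Dirichlet density \(\propto \prod_i p_i^{\alpha_i - 1}\), the mode is \(p_i = (\alpha_i - 1) / (\sum_j \alpha_j - k)\) when every \(\alpha_i > 1\).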

Normalized Probability Vector and Simplex. Remember that pi ≥ 0 and p1 + … + pn = 1. When (p1, …, pn) satisfies p1 + … + pn = 1, the points lie on a "simplex plane". [figure: the vector (p1, p2, p3) and its 2-simplex plane]

Effect of ⍺ values [figures: Dirichlet densities/samples plotted on the 2-simplex with axes p1, p2, p3, shown for several different ⍺ settings]

Minor Correction: the formula used so far is not the "standard" Dirichlet distribution. The "standard" Dirichlet distribution formula is given below. The non-standard form was used to make the connection to the multinomial distribution clear. From now on, we use the standard formula.
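The standard density itself (not preserved in the transcript, but a standard formula) is
\[ \mathrm{Dir}(p_1,\dots,p_k \mid \alpha_1,\dots,\alpha_k) = \frac{\Gamma\!\big(\sum_{i=1}^{k}\alpha_i\big)}{\prod_{i=1}^{k}\Gamma(\alpha_i)} \prod_{i=1}^{k} p_i^{\alpha_i - 1}, \]
i.e., the exponents are \(\alpha_i - 1\) rather than \(\alpha_i\).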

Back to the LDA Document Generation Model. For each topic z, pick the word probability vector P(wj|z) by taking a random sample from Dir(β1, …, βW). For every document d, the user decides its topic vector P(zi|d) by taking a random sample from Dir(⍺1, …, ⍺T); then, for each word in d, the user selects a topic z with probability P(z|d) and a word w with probability P(w|z). Once all is said and done, we have: P(wj|z), the topic-term vector for each topic; P(zi|d), the document-topic vector for each document; and a topic assignment for every word in each document. (A small code sketch of this process follows.)
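A minimal numpy sketch of this generative process (my illustration, not the slides' code; the corpus sizes and hyperparameter values are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    T, W, D, M = 3, 8, 5, 20          # topics, vocabulary size, documents, words per document
    alpha, beta = 0.5, 0.1            # symmetric Dirichlet hyperparameters (assumed values)

    # For each topic z, sample the topic-term vector P(w|z) from Dir(beta, ..., beta).
    phi = rng.dirichlet(np.full(W, beta), size=T)      # shape (T, W)

    corpus = []
    for d in range(D):
        # For each document, sample the document-topic vector P(z|d) from Dir(alpha, ..., alpha).
        theta = rng.dirichlet(np.full(T, alpha))        # shape (T,)
        doc = []
        for _ in range(M):
            z = rng.choice(T, p=theta)                  # pick a topic with probability P(z|d)
            w = rng.choice(W, p=phi[z])                 # pick a word with probability P(w|z)
            doc.append((w, z))                          # keep the word and its topic assignment
        corpus.append(doc)

    print(corpus[0])                                    # first generated document: (word id, topic id) pairs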

Symmetric Dirichlet Distribution. In principle, we need to assume two vectors, (⍺1, …, ⍺T) and (β1, …, βW), as input parameters. In practice, we often assume all ⍺i's are equal to ⍺ and all βi's are equal to β, i.e., we use two scalar values ⍺ and β, not two vectors. This is the symmetric Dirichlet distribution. Q: What is the implication of this assumption?

Effect of the ⍺ value on the Symmetric Dirichlet. Q: What does it mean? How will the sampled document-topic vectors change as ⍺ grows? Common choice: ⍺ = 50/T, β = 200/W. [figure: sampled points on the 2-simplex (axes p1, p2, p3) for different ⍺ values]

Plate Notation [figure: the LDA plate diagram; hyperparameter ⍺ generates the per-document topic vector P(z|d), which generates each word's topic z; the per-topic term vector P(w|z), generated from hyperparameter β, then generates the observed word w; plates of size N (words per document), M (documents), and T (topics)]

LDA as Topic Inference. Given a corpus d1: w11, w12, …, w1m; …; dN: wN1, wN2, …, wNm, find the P(z|d), P(w|z), and zij that are most "consistent" with the given corpus. Q: How can we compute such P(z|d), P(w|z), and zij? The best method so far is to use a Monte Carlo method together with Gibbs sampling.

Monte Carlo Method (1). A class of methods that compute a number through repeated random sampling of certain event(s). Q: How can we compute π? (See the sketch below.)
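A minimal sketch of the classic answer (my addition, not from the slides): sample points uniformly in the unit square and use the fraction that falls inside the quarter circle.

    import random

    def estimate_pi(n_samples=1_000_000):
        inside = 0
        for _ in range(n_samples):
            x, y = random.random(), random.random()   # uniform point in the unit square
            if x * x + y * y <= 1.0:                  # inside the quarter circle of radius 1
                inside += 1
        return 4.0 * inside / n_samples               # the area ratio is pi/4

    print(estimate_pi())   # roughly 3.14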

Monte Carlo Method (2). (1) Define the domain of possible events. (2) Generate events randomly from the domain using a certain probability distribution. (3) Perform a deterministic computation using the events. (4) Aggregate the results of the individual computations into the final result. Q: How can we take random samples from a particular distribution?

Gibbs Sampling. Q: How can we take a random sample x from the distribution f(x)? Q: How can we take a random sample (x, y) from the distribution f(x, y)? Gibbs sampling: given the current sample (x1, …, xn), pick an axis xi and take a random sample of the xi value conditioned on the current values of all the other coordinates. In practice, we iterate over the xi's sequentially. (A small sketch follows.)
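A minimal sketch of Gibbs sampling (my illustration, not the slides'): sampling from a standard bivariate Gaussian with correlation rho, where each full conditional x | y and y | x is a univariate Gaussian N(rho * other, 1 - rho^2).

    import random

    def gibbs_bivariate_gaussian(n_samples=10_000, rho=0.8):
        """Gibbs sampling from a standard bivariate Gaussian with correlation rho."""
        x, y = 0.0, 0.0                            # arbitrary starting point
        cond_std = (1.0 - rho * rho) ** 0.5        # std. dev. of each full conditional
        samples = []
        for _ in range(n_samples):
            x = random.gauss(rho * y, cond_std)    # sample x | y  ~  N(rho*y, 1 - rho^2)
            y = random.gauss(rho * x, cond_std)    # sample y | x  ~  N(rho*x, 1 - rho^2)
            samples.append((x, y))
        return samples

    print(gibbs_bivariate_gaussian(5))             # a few dependent samples from the chain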

Markov-Chain Monte Carlo Method (MCMC). Gibbs sampling is in the class of Markov-chain sampling: the next sample depends only on the current sample. Markov-Chain Monte Carlo method: generate random events using Markov-chain sampling and apply the Monte Carlo method to compute the result.

Applying MCMC to LDA. Let us apply the Monte Carlo method to estimate the LDA parameters. Q: How can we map the LDA inference problem to random events? We first focus on identifying the topic zij for each word wij. Event: assignment of the topics {zij} to the wij's. The assignment should be done according to P({zij}|C). Q: How do we sample according to P({zij}|C)? Q: Can we use Gibbs sampling? How will it work? Q: What is P(zij|{z-ij}, C)?

nwt: how many times the word w has been assigned to the topic t. ndt: how many words in the document d have been assigned to the topic t. Q: What is the meaning of each term? (The sampling formula these counts appear in is reconstructed below.)
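The equation did not survive the transcript; the standard collapsed Gibbs sampling update that uses the counts nwt and ndt (which I believe is what the slide showed) is
\[ P(z_{ij}=t \mid \{z_{-ij}\}, C)\;\propto\; \frac{n_{w_{ij},t} + \beta}{\sum_{w} n_{w,t} + W\beta}\;\cdot\;\frac{n_{d_i,t} + \alpha}{\sum_{t'} n_{d_i,t'} + T\alpha}, \]
where the counts exclude the current assignment of \(w_{ij}\). The first factor measures how strongly the word \(w_{ij}\) is associated with topic t, and the second how strongly the document \(d_i\) is associated with topic t.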

LDA with Gibbs Sampling. For each word wij: assign it to a topic t with probability given by the sampling formula above; for the prior topic of wij, decrease nwt and ndt by 1, and for the new topic t, increase nwt and ndt by 1. Repeat the process many times (at least hundreds of iterations). Once the process is over, we have zij for every wij, together with nwt and ndt. (A code sketch follows.)
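A compact sketch of this sampler (my illustration under the symmetric-Dirichlet assumption, not the course's reference code). Here docs is a list of documents, each a list of word ids in 0..W-1:

    import numpy as np

    def lda_gibbs(docs, T, W, alpha=0.1, beta=0.01, n_iters=200, seed=0):
        """Collapsed Gibbs sampling for LDA: returns per-word topic assignments and the count tables."""
        rng = np.random.default_rng(seed)
        D = len(docs)
        nwt = np.zeros((W, T))                 # nwt[w, t]: times word w is assigned to topic t
        ndt = np.zeros((D, T))                 # ndt[d, t]: words in document d assigned to topic t
        nt = np.zeros(T)                       # total words assigned to each topic
        z = [[0] * len(doc) for doc in docs]   # topic assignment for every word

        # Random initialization of the topic assignments.
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = rng.integers(T)
                z[d][i] = t
                nwt[w, t] += 1; ndt[d, t] += 1; nt[t] += 1

        for _ in range(n_iters):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    t = z[d][i]
                    # Remove the current assignment from the counts.
                    nwt[w, t] -= 1; ndt[d, t] -= 1; nt[t] -= 1
                    # Collapsed Gibbs update (document normalizer is constant in t and dropped).
                    p = (nwt[w] + beta) / (nt + W * beta) * (ndt[d] + alpha)
                    p /= p.sum()
                    t = rng.choice(T, p=p)
                    # Record the new assignment and restore the counts.
                    z[d][i] = t
                    nwt[w, t] += 1; ndt[d, t] += 1; nt[t] += 1

        return z, nwt, ndt

    # Tiny usage example with a made-up 5-word vocabulary and two topics.
    docs = [[0, 1, 2, 0, 1], [3, 4, 2, 3, 4], [0, 1, 0, 3, 4]]
    z, nwt, ndt = lda_gibbs(docs, T=2, W=5)
    print(ndt)   # how many words of each document ended up in each topic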

Result of LDA (Latent Dirichlet Analysis). TASA corpus: 37,000 text passages from educational materials collected by Touchstone Applied Science Associates. Set T = 300 (300 topics).

Inferred Topics

Word Topic Assignments

LDA Algorithm: Simulation. Two topics: River, Money. Five words: "river", "stream", "bank", "money", "loan". Generate 16 documents by randomly mixing the two topics and using the LDA model. True topic-term matrix: River puts 1/3 each on river, stream, bank; Money puts 1/3 each on bank, money, loan.

Generated Documents and Initial Topic Assignment before Inference. The first 6 and the last 3 documents are purely from one topic; the others are mixtures. [figure: per-word topic assignments; white dot = "River", black dot = "Money"]

Topic Assignment After LDA Inference. The first 6 and the last 3 documents are purely from one topic; the others are mixtures. [figure: per-word topic assignments after 64 iterations]

Inferred Topic-Term Matrix. Model parameter vs. estimated parameter: not perfect, but very close, especially given the small data size.
Model parameter:
  River: river 0.33, stream 0.33, bank 0.33
  Money: bank 0.33, money 0.33, loan 0.33
Estimated parameter:
  River: river 0.25, stream 0.40, bank 0.35
  Money: bank 0.32, money 0.29, loan 0.39

SVD vs LDA. Both perform the same decomposition: the doc × term matrix X is factored into a doc × topic matrix times a topic × term matrix. SVD views this as matrix approximation; LDA views this as probabilistic inference based on a generative model, where each entry corresponds to a "probability", giving better interpretability.

LDA as Soft Classification. Soft vs. hard clustering/classification: after LDA, every document is assigned to a small number of topics with some weights; documents are not assigned exclusively to a single topic, so this is soft clustering.

Summary. Probabilistic topic model: generative model of documents; pLSI and overfitting; LDA, MCMC, and the probabilistic interpretation. Statistical parameter estimation: multinomial distribution and Dirichlet distribution; Monte Carlo method; Gibbs sampling; the Markov-chain class of sampling.