Topic Models
Nam Khanh Tran (ntran@L3S.de), L3S Research Center
27 May 2014
Acknowledgements
The slides are in part based on the following slides:
- "Probabilistic Topic Models", David M. Blei, 2012
- "Topic Models", Claudia Wagner, 2010
...and the papers:
- David M. Blei, Andrew Y. Ng, Michael I. Jordan: Latent Dirichlet Allocation. Journal of Machine Learning Research, 2003
- Steyvers and Griffiths: Probabilistic Topic Models, 2006
- David M. Blei, John D. Lafferty: Dynamic Topic Models. Proceedings of the 23rd International Conference on Machine Learning, 2006
Outline
- Introduction
- Latent Dirichlet Allocation
  - Overview
  - The posterior distribution for LDA
  - Gibbs sampling
- Beyond Latent Dirichlet Allocation
- Demo
The problem with information
- As more information becomes available, it becomes more difficult to find and discover what we need
- We need new tools to help us organize, search, and understand these vast amounts of information
Topic modeling
Topic modeling provides methods for automatically organizing, understanding, searching, and summarizing large electronic archives:
1) Discover the hidden themes that pervade the collection
2) Annotate the documents according to those themes
3) Use the annotations to organize, summarize, search, and form predictions
Discover topics from a corpus
Model the evolution of topics over time
Model connections between topics
Image annotation
Latent Dirichlet Allocation
Latent Dirichlet Allocation
- Introduction to LDA
- The posterior distribution for LDA
- Gibbs sampling
Probabilistic modeling
- Treat data as observations that arise from a generative probabilistic process that includes hidden variables. For documents, the hidden variables reflect the thematic structure of the collection.
- Infer the hidden structure using posterior inference: what are the topics that describe this collection?
- Situate new data into the estimated model: how does a query or new document fit into the estimated topic structure?
Intuition behind LDA
Generative model
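This slide's figure illustrates the generative process of LDA: for each document draw a topic distribution, then for each word token draw a topic and then a word from that topic. A minimal sketch of that process in Python/NumPy (illustrative only; the number of topics, vocabulary size, and prior values are made-up):

```python
import numpy as np

rng = np.random.default_rng(0)
T, W, alpha, beta = 2, 5, 0.5, 0.01               # topics, vocab size, priors (assumed)

phi = rng.dirichlet(np.full(W, beta), size=T)     # word distribution per topic (T x W)
theta = rng.dirichlet(np.full(T, alpha))          # topic distribution for one document

doc = []
for _ in range(8):                                # generate 8 word tokens
    z = rng.choice(T, p=theta)                    # 1. choose a topic from theta
    w = rng.choice(W, p=phi[z])                   # 2. choose a word from that topic
    doc.append(w)
print(doc)                                        # a list of 8 word ids
```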
The posterior distribution
Topic models
Three latent variables (Steyvers, 2006):
- Word distribution per topic (word-topic matrix)
- Topic distribution per document (topic-document matrix)
- Topic-word assignments
Topic models
- Observed variables: the word distribution per document
- Three latent variables:
  - Topic distribution per document: P(z) = θ(d)
  - Word distribution per topic: P(w|z) = φ(z)
  - Word-topic assignment: P(z|w)
- Training: learn the latent variables on a training collection of documents
- Test: predict the topic distribution θ(d) of an unseen document d
Latent Dirichlet Allocation (LDA)
- Advantage: once we have learned the topic distributions of a corpus, we can predict the topic distribution of an unseen document from this corpus by observing its words
- The hyper-parameters α and β are corpus-level parameters and are only sampled once
(Plate diagram: words are generated from P(w | z, φ(z)) and topics φ(z) from P(φ(z) | β); the plates replicate over the number of documents and the number of words.)
Matrix representation of LDA
(Figure: the observed document-word counts are explained by the latent topic-document matrix θ(d) and the latent word-topic matrix φ(z).)
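In matrix form, P(w|d) = Σ_z P(w|z) P(z|d): the latent word-topic matrix multiplied by the latent topic-document matrix reconstructs the observed word-document probabilities. A tiny NumPy sketch with made-up numbers:

```python
import numpy as np

phi = np.array([[0.5, 0.1],        # P(w|z): rows = words, columns = topics
                [0.3, 0.2],
                [0.2, 0.7]])
theta = np.array([[0.9, 0.2],      # P(z|d): rows = topics, columns = documents
                  [0.1, 0.8]])

P_wd = phi @ theta                 # P(w|d): rows = words, columns = documents
print(P_wd)                        # each column sums to 1
```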
Statistical inference and parameter estimation
- Key problem: compute the posterior distribution of the hidden variables given a document
- The posterior distribution is intractable for exact inference (Blei, 2003)
(Figure: Bayes' rule for the posterior, with the latent variables conditioned on the observed variables and the priors.)
Statistical inference and parameter estimation
- How can we estimate the posterior distribution of the hidden variables given a corpus of training documents?
  - Directly (e.g., via expectation maximization, variational inference, or expectation propagation algorithms)
  - Indirectly, i.e., estimate the posterior distribution over z (i.e., P(z))
- Gibbs sampling, a form of Markov chain Monte Carlo, is often used to estimate the posterior probability over a high-dimensional random variable z
Markov chain example
- The random variable X refers to the weather; X_t is the value of X at time point t
- State space of X = {sunny, rain}
- Transition probability matrix:
  P(sunny|sunny) = 0.9, P(rain|sunny) = 0.1
  P(sunny|rain) = 0.5, P(rain|rain) = 0.5
- Today is sunny. What will the weather be tomorrow? The day after tomorrow?
(source: http://en.wikipedia.org/wiki/Examples_of_Markov_chains)
Markov chain example
- With an increasing number of days n, the predictions for the weather tend towards a "steady-state vector" q
- q is independent of the initial conditions, so it must be unchanged when transformed by the transition matrix P
- This makes q an eigenvector of P (with eigenvalue 1), which means it can be derived from P
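A minimal sketch of this computation in Python/NumPy, using the transition matrix from the previous slide (not part of the original slides): the steady state can be found by repeatedly applying P, or as the left eigenvector of P with eigenvalue 1.

```python
import numpy as np

# Transition matrix: rows = today's state, columns = tomorrow's state
# States: 0 = sunny, 1 = rain
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Power iteration: start from "today is sunny" and apply P repeatedly
x = np.array([1.0, 0.0])
for _ in range(50):
    x = x @ P                      # distribution for the next day
print(x)                           # approx. [0.8333, 0.1667]

# Equivalent: left eigenvector of P with eigenvalue 1, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(P.T)
q = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
q = q / q.sum()
print(q)                           # steady-state vector q = [5/6, 1/6]
```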
Gibbs sampling
- Generates a sequence of samples from the joint probability distribution of two or more random variables
- Aim: compute the posterior distribution over the latent variable z
- Prerequisite: we must know the conditional probability of z:
  P(z_i = j | z_-i, w_i, d_i, ·)
Gibbs sampling for LDA
- Random start
- Iterative: for each word we compute
  - How dominant is topic z in doc d? (How often was topic z already used in doc d?)
  - How likely is the word for topic z? (How often was the word w already assigned to topic z?)
Run Gibbs sampling: Example (1)
1. Random topic assignments: each word token in the three example documents is randomly labelled with topic 1 or topic 2
2. Two count matrices:

   C^WT (words per topic):
              topic 1   topic 2
   money         3         2
   bank          3         6
   loan          2         1
   river         2         2
   stream        2         1

   C^DT (topics per document):
              doc 1   doc 2   doc 3
   topic 1       4       4       4
   topic 2       4       4       4
Gibbs sampling for LDA
Probability that topic j is chosen for word w_i, conditioned on all other topic assignments to words in this doc and all other observed variables:

  P(z_i = j | z_-i, w_i, d_i, ·) ∝ (C^WT_{w_i j} + β) / (Σ_w C^WT_{w j} + W β) · (C^DT_{d_i j} + α) / (Σ_t C^DT_{d_i t} + T α)

- C^WT_{w_i j}: number of times word token w_i was assigned to topic j across all docs
- C^DT_{d_i j}: number of times topic j was already assigned to some word token in doc d_i
- The right-hand side is unnormalized: divide by the sum over all T topics to obtain the probability of assigning topic j to word w_i
Run Gibbs sampling
- Start: assign each word token to a random topic
  - C^WT = number of times each word token w_i was assigned to each topic j
  - C^DT = number of times each topic j was assigned to some word token in each doc d_i
- First iteration:
  - For each word token, the count matrices C^WT and C^DT are first decremented by one for the entries that correspond to the current topic assignment
  - Then a new topic is sampled from the current topic distribution of the doc, and the count matrices C^WT and C^DT are incremented for the new topic assignment
- Each Gibbs sample consists of the set of topic assignments to all N word tokens in the corpus, obtained by a single pass through all documents
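A minimal collapsed Gibbs sampler for LDA in Python/NumPy that follows this procedure (an illustrative sketch, not code from the slides; all names are made up):

```python
import numpy as np

def gibbs_lda(docs, W, T, alpha, beta, n_iter=100, seed=0):
    """docs: list of lists of word ids; W: vocabulary size; T: number of topics."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    CWT = np.zeros((W, T))                                 # word-topic counts
    CDT = np.zeros((D, T))                                 # document-topic counts
    z = [rng.integers(T, size=len(doc)) for doc in docs]   # random start

    for d, doc in enumerate(docs):                         # fill the count matrices
        for i, w in enumerate(doc):
            CWT[w, z[d][i]] += 1
            CDT[d, z[d][i]] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                CWT[w, t] -= 1                             # decrement current assignment
                CDT[d, t] -= 1
                # "how likely is w for each topic" x "how dominant is each topic in d"
                p = (CWT[w, :] + beta) / (CWT.sum(axis=0) + W * beta) \
                    * (CDT[d, :] + alpha) / (CDT[d, :].sum() + T * alpha)
                t = rng.choice(T, p=p / p.sum())           # sample a new topic
                z[d][i] = t
                CWT[w, t] += 1                             # increment new assignment
                CDT[d, t] += 1
    return CWT, CDT, z
```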
Run Gibbs sampling: Example (2)
First iteration, for the token under consideration (here a "money" token in doc 1, currently assigned to topic 1):
- Decrement C^DT and C^WT for the current topic
- Sample a new topic from the current topic distribution of the doc
Run Gibbs sampling: Example (2), after the update
The "money" token in doc 1 has been reassigned from topic 1 to topic 2, and C^WT and C^DT have been incremented accordingly:

   C^WT (words per topic):
              topic 1   topic 2
   money         2         3
   bank          3         6
   loan          2         1
   river         2         2
   stream        2         1

   C^DT (topics per document):
              doc 1   doc 2   doc 3
   topic 1       3       4       4
   topic 2       5       4       4
Run Gibbs sampling: Example (3)
- Hyper-parameters: α = 50/T = 25 (for T = 2 topics) and β = 0.01
- Plugging the counts into the conditional distribution above, the "bank" token is assigned to topic 2
(Figure annotations: how often was topic j used in doc d_i; how often were all other topics used in doc d_i.)
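A worked instance of the update (an illustration only: it assumes the token in question is a "bank" token in doc 1 that is currently assigned to topic 1, and it reuses the counts from Example (1) with W = 5 words and T = 2 topics):

```python
# Counts after decrementing the current assignment (bank / doc 1 / topic 1):
# C^WT[bank, 1] = 2; column totals over all words: topic 1 -> 11, topic 2 -> 12
# C^DT[doc1, 1] = 3, C^DT[doc1, 2] = 4; row total for doc 1 -> 7
W, T, alpha, beta = 5, 2, 25.0, 0.01

p1 = (2 + beta) / (11 + W * beta) * (3 + alpha) / (7 + T * alpha)   # approx. 0.089
p2 = (6 + beta) / (12 + W * beta) * (4 + alpha) / (7 + T * alpha)   # approx. 0.254
print(p1 / (p1 + p2), p2 / (p1 + p2))   # approx. 0.26 vs. 0.74: "bank" goes to topic 2
```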
Gibbs sampling: parameter estimation
Gibbs sampling estimates the posterior distribution over z, but we also need the word distribution φ of each topic and the topic distribution θ of each document. Both can be estimated from the count matrices:

  φ_{i,j} = (C^WT_{i j} + β) / (Σ_k C^WT_{k j} + W β)
  θ_{d,j} = (C^DT_{d j} + α) / (Σ_t C^DT_{d t} + T α)

The numerator of φ counts how often word i was assigned to topic j, and its denominator how often all words were assigned to topic j; the numerator of θ counts how often topic j was assigned in doc d, and its denominator how often all topics were assigned in doc d. φ_{i,j} is the predictive distribution of sampling a new token of word i from topic j, and θ_{d,j} is the predictive distribution of sampling a new token in document d from topic j.
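Continuing the illustrative sketch above, the two estimates follow directly from the count matrices returned by gibbs_lda:

```python
import numpy as np

def estimate_phi_theta(CWT, CDT, alpha, beta):
    W, T = CWT.shape
    phi = (CWT + beta) / (CWT.sum(axis=0, keepdims=True) + W * beta)      # P(w|z), W x T
    theta = (CDT + alpha) / (CDT.sum(axis=1, keepdims=True) + T * alpha)  # P(z|d), D x T
    return phi, theta
```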
Example inference
Topics vs. words
Visualizing a document
Use the posterior topic probabilities of each document and the posterior topic assignments to each word
Document similarity
Two documents are similar if they assign similar probabilities to topics
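One common way to operationalize this (an illustrative sketch, not from the slides) is to compare the θ vectors of two documents, for example with the Jensen-Shannon divergence:

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Symmetric divergence between two topic distributions (0 = identical)."""
    p, q = p + eps, q + eps                       # avoid log(0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # Kullback-Leibler divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

theta_doc1 = np.array([0.7, 0.2, 0.1])            # hypothetical topic distributions
theta_doc2 = np.array([0.6, 0.3, 0.1])
print(jensen_shannon(theta_doc1, theta_doc2))     # small value -> similar documents
```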
Beyond Latent Dirichlet Allocation
Extending LDA
- LDA is a simple topic model
- It can be used to find topics that describe a corpus
- Each document exhibits multiple topics
- How can we build on this simple model of text?
Extending LDA
- LDA can be embedded in more complicated models, embodying further intuitions about the structure of the texts (e.g., accounting for syntax, authorship, dynamics, correlation, and other structure)
- The data-generating distribution can be changed: we can apply mixed-membership assumptions to many kinds of data (e.g., models of images, social networks, music, computer code, and other types)
- The posterior can be used in many ways (e.g., for inferences in IR, recommendation, similarity, visualization, and other applications)
Dynamic topic models
(Four figure slides.)
Long tail of data
Topic cropping
Pipeline components:
- Corpus collection via search
- Term selection: finding characteristic terms
- Topic modeling using LDA
- Topic inference based on the learned model
Example topics: Topic 1: team, colleagues, …; Topic 2: process, planning, …; Topic 3: shift, rework, …; Topic 4: qualification, learning. Inference assigns, e.g., Topic 2 and Topic 4 to a new document.
Implementations of LDA
There are many available implementations of topic modeling:
- LDA-C: a C implementation of LDA
- Online LDA: a Python package for LDA on massive data
- LDA in R: a package in R for many topic models
- Mallet: a Java toolkit for statistical NLP
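As an illustration of how such a toolkit is typically used (gensim is not one of the implementations listed above; this is only a sketch of the usual Python workflow):

```python
from gensim import corpora, models

docs = [["money", "bank", "loan"],
        ["river", "bank", "stream"],
        ["loan", "money", "money", "bank"]]          # toy tokenized corpus

dictionary = corpora.Dictionary(docs)                # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]   # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      alpha="auto", passes=20)       # train the topic model

print(lda.print_topics())                            # word distribution per topic
print(lda.get_document_topics(corpus[0]))            # topic distribution of doc 1
```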
Demo
Discussion