
1 Hidden Topic Markov Models. Amit Gruber, Michal Rosen-Zvi and Yair Weiss, AISTATS 2007. Discussion led by Chunping Wang, ECE, Duke University, March 2, 2009.

2 Outline: Motivations; Related Topic Models; Hidden Topic Markov Models; Inference; Experiments; Conclusions.

3 Motivations. Feature reduction: map extremely large text corpora to a small number of variables. Topical segmentation: segment a document according to hidden topics. Word sense disambiguation: distinguish between different instances of the same word according to their context.

4 Related Topic Models: LDA (JMLR 2003). Generative process: 1. For each topic k = 1, ..., K, draw a topic-word distribution β_k ~ Dirichlet(η). 2. For each document d = 1, ..., D, (a) draw topic proportions θ_d ~ Dirichlet(α); (b) for each word position n = 1, ..., N_d, draw a topic z_n ~ Multinomial(θ_d); (c) for each n, draw the word w_n ~ Multinomial(β_{z_n}). Words in a document are exchangeable; documents are also exchangeable.
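A minimal sketch of this generative process in Python/NumPy. The toy sizes, document lengths and the hyperparameter values alpha and eta below are illustrative assumptions, not the slide's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, V = 4, 3, 50          # topics, documents, vocabulary size (toy values)
alpha, eta = 0.5, 0.1       # Dirichlet hyperparameters (assumed)
doc_lengths = [20, 15, 30]

# 1. For k = 1..K, draw the topic-word distribution beta_k ~ Dirichlet(eta)
beta = rng.dirichlet(np.full(V, eta), size=K)        # K x V

corpus = []
for d in range(D):
    # 2(a) Draw the document's topic proportions theta_d ~ Dirichlet(alpha)
    theta = rng.dirichlet(np.full(K, alpha))
    words = []
    for _ in range(doc_lengths[d]):
        # 2(b) Draw a topic z_n ~ Multinomial(theta_d)
        z = rng.choice(K, p=theta)
        # 2(c) Draw the word w_n ~ Multinomial(beta_{z_n})
        words.append(rng.choice(V, p=beta[z]))
    corpus.append(words)

print(corpus[0][:10])   # first 10 word ids of document 0
```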

5 Related Topic Models Dynamic Topic Models (ICML 2006) Words in a document are exchangeable; documents are not exchangeable.

6 Related Topic Models Topic Modeling: Beyond Bag of Words (ICML 2006) Words in a document are not exchangeable; documents are exchangeable.

7 Related Topic Models: Integrating Topics and Syntax (NIPS 2005). Words in a document are not exchangeable; documents are exchangeable. (Diagram: the LDA component models semantic words; the HMM component models non-semantic, syntactic words.)

8 Hidden Topic Markov Models. No topic transition is allowed within a sentence. Whenever a new sentence starts, either the previous topic is kept or a new topic is drawn according to the document's topic distribution θ_d.
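A sketch of this generative process for a single document, following the paper's notation: ε is the probability of switching topics at a sentence boundary, θ_d the per-document topic distribution, and β the topic-word distributions. The toy sizes and values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

K, V = 4, 50                      # topics, vocabulary size (toy values)
eps = 0.3                         # topic-switch probability at sentence starts (assumed)
beta = rng.dirichlet(np.full(V, 0.1), size=K)   # topic-word distributions
theta = rng.dirichlet(np.full(K, 0.5))          # document's topic proportions

sentence_lengths = [5, 7, 4, 6]   # one toy document split into sentences
words, topics = [], []
z = rng.choice(K, p=theta)        # topic of the first sentence
for s, length in enumerate(sentence_lengths):
    if s > 0 and rng.random() < eps:
        z = rng.choice(K, p=theta)   # new sentence: redraw the topic with prob. eps
    # no topic transition is allowed within the sentence
    for _ in range(length):
        words.append(rng.choice(V, p=beta[z]))
        topics.append(z)

print(topics)   # constant within each sentence
```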

9 Hidden Topic Markov Models, viewed as an HMM over topics. Transition matrices: within a sentence, or when no transition occurs between two sentences (with probability 1 - ε), the transition matrix is the identity (the topic is kept); when a transition occurs between two sentences (with probability ε), a new topic is drawn from θ_d, so every row of the transition matrix equals θ_d. Emission matrix: the topic-word distributions β. Initial state distribution: θ_d.

10 Inference. EM algorithm. E-step: compute the posterior distributions over the hidden topic and transition variables, p(z_n, ψ_n | w_1, ..., w_N), using the forward-backward algorithm. M-step: re-estimate θ_d, β and ε from the resulting expected counts.
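A sketch of the E-step as a standard forward-backward pass over topics for one document. The transition kernel is the identity inside sentences and mixes "keep topic" with "redraw from θ_d" at sentence boundaries; the M-step updates are only summarized in comments. All variable names and toy values are assumptions, not the authors' code:

```python
import numpy as np

def htmm_estep(words, sent_starts, theta, beta, eps):
    """Posterior topic marginals p(z_n | w_1..N) for one document.

    words       : list of word ids
    sent_starts : boolean list, True where a new sentence begins
    theta       : (K,) document topic proportions
    beta        : (K, V) topic-word distributions
    eps         : probability of redrawing the topic at a sentence start
    """
    K, N = len(theta), len(words)
    lik = beta[:, words].T                      # (N, K) emission likelihoods

    # transition matrices: identity within a sentence,
    # (1-eps)*I + eps*1*theta^T at a sentence boundary
    T_keep = np.eye(K)
    T_switch = (1 - eps) * np.eye(K) + eps * np.outer(np.ones(K), theta)

    fwd = np.zeros((N, K))                      # forward messages (normalized)
    fwd[0] = theta * lik[0]
    fwd[0] /= fwd[0].sum()
    for n in range(1, N):
        T = T_switch if sent_starts[n] else T_keep
        fwd[n] = (fwd[n - 1] @ T) * lik[n]
        fwd[n] /= fwd[n].sum()

    bwd = np.ones((N, K))                       # backward messages (normalized)
    for n in range(N - 2, -1, -1):
        T = T_switch if sent_starts[n + 1] else T_keep
        bwd[n] = T @ (lik[n + 1] * bwd[n + 1])
        bwd[n] /= bwd[n].sum()

    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# M-step (sketch): accumulate these posteriors to re-estimate
#   beta[k, w]  from expected counts of word w under topic k,
#   theta_d     from expected topic draws at sentence starts,
#   eps         from the expected fraction of sentence boundaries that switch topic.

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, V = 3, 20
    beta = rng.dirichlet(np.full(V, 0.1), size=K)
    theta = rng.dirichlet(np.full(K, 0.5))
    words = list(rng.integers(0, V, size=12))
    sent_starts = [i % 4 == 0 for i in range(12)]   # a new sentence every 4 words
    print(htmm_estep(words, sent_starts, theta, beta, 0.3).round(2))
```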

11 Experiments. NIPS dataset (1740 documents: 1557 for training, 183 for testing). Data preprocessing: extract words in the vocabulary (J = 12113, no stop words); divide the text into sentences according to the delimiters ".?!;". Compare LDA, HTMM and VHTMM1 in terms of perplexity. VHTMM1: a variant of HTMM with ε = 1, i.e. a "bag of sentences". N_test: the total length of the test document; N: the first N words of the document are observed. Average N_test = 1300.
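A small sketch of the sentence-splitting step described above; the regular expression and the example text are illustrative, not the authors' exact preprocessing:

```python
import re

def split_sentences(text):
    """Split raw text into sentences on '.', '?', '!' and ';' (as on the slide)."""
    parts = re.split(r"[.?!;]+", text)
    return [s.split() for s in parts if s.strip()]

doc = "Topic models reduce dimensionality. They assign topics to words; segmentation follows!"
print(split_sentences(doc))
# [['Topic', 'models', 'reduce', 'dimensionality'],
#  ['They', 'assign', 'topics', 'to', 'words'],
#  ['segmentation', 'follows']]
```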

12 Experiments. Perplexity comparison (figures for K = 100 and for N = 10). The lower the perplexity, the better the model predicts unseen words.
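For reference, a sketch of the per-word perplexity being compared here, computed for one test document. The exact evaluation protocol (conditioning on the first N observed words) follows the paper; the function below is a generic placeholder that assumes per-word predictive probabilities are already available:

```python
import numpy as np

def perplexity(pred_probs):
    """Perplexity over held-out words: exp(- mean of log predictive probabilities)."""
    return float(np.exp(-np.mean(np.log(pred_probs))))

# e.g. predictive probabilities p(w_n | first N observed words) for the unseen words
print(perplexity([0.01, 0.02, 0.005]))   # higher probabilities -> lower perplexity
```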

13 Experiments. Topical segmentation (figure comparing the segmentations induced by HTMM and LDA on a test document).

14 Experiments. Top words of topics (figure comparing HTMM and LDA topics, e.g. math, acknowledgments and reference topics).

15 Experiments As more topics are available, the topics become more specific and topic transitions are more frequent.

16 Experiments. Two toy datasets, generated using HTMM and LDA respectively. Goal: to rule out the possibility that the perplexity of HTMM is lower than that of LDA only because HTMM has fewer degrees of freedom. With toy datasets, other criteria beyond perplexity can be used for comparison.

17 Conclusions. HTMM is another extension of LDA; it relaxes the "bag-of-words" assumption by modeling topic dynamics with a Markov chain. This extension leads to a significant improvement in perplexity and makes additional inferences possible, such as topical segmentation and word sense disambiguation. It requires more storage, since the entire document has to be the input of the algorithm. It applies only to structured data, where sentences are well defined.

