1 CS774. Markov Random Field : Theory and Application Lecture 19 Kyomin Jung KAIST Nov 12 2009

2 Sequence Labeling Problem Many NLP problems can be viewed as sequence labeling: each token in a sequence is assigned a label. Labels of tokens are dependent on the labels of other tokens in the sequence, particularly their neighbors (not i.i.d.).

3 Part Of Speech (POS) Tagging Annotate each word in a sentence with a part-of-speech tag. Lowest level of syntactic analysis; useful for subsequent syntactic parsing and word sense disambiguation. Example: John/PN saw/V the/Det saw/N and/Con decided/V to/Part take/V it/Pro to/Prep the/Det table/N
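For concreteness, the tagged sentence above can be represented as (token, tag) pairs; a minimal Python sketch (the pairing follows the slide, the list-of-tuples encoding is just one convenient choice):

```python
# The POS-tagged sentence from the slide, as (token, tag) pairs.
tagged = [("John", "PN"), ("saw", "V"), ("the", "Det"), ("saw", "N"),
          ("and", "Con"), ("decided", "V"), ("to", "Part"), ("take", "V"),
          ("it", "Pro"), ("to", "Prep"), ("the", "Det"), ("table", "N")]
```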

4 Bioinformatics Sequence labeling is also valuable for labeling genetic sequences in genome analysis, e.g., marking exon and intron regions in a DNA sequence such as AGCTAACGTTCGATACGGATTACAGCCT.

5 Sequence Labeling as Classification Classify each token independently, but use information about the surrounding tokens as input features (sliding window). Example: for "John saw the saw and decided to take it to the table.", the classifier labels "John" as PN.

6 Sequence Labeling as Classification Classify each token independently, but use information about the surrounding tokens as input features (sliding window). Sliding the window one position further, the classifier labels "saw" as V.
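A minimal sketch of the sliding-window idea, assuming a window of one token on each side and a hypothetical, already-trained classifier object `clf` with a `predict(features)` method (both the window size and the classifier interface are illustrative assumptions, not from the slides):

```python
def window_features(tokens, i, size=1):
    """Features for token i: the token itself plus its neighbors in a +/- size window."""
    feats = {"token": tokens[i].lower()}
    for d in range(1, size + 1):
        feats[f"prev_{d}"] = tokens[i - d].lower() if i - d >= 0 else "<s>"
        feats[f"next_{d}"] = tokens[i + d].lower() if i + d < len(tokens) else "</s>"
    return feats

def label_independently(tokens, clf):
    """Label each token on its own; neighboring *labels* are never consulted."""
    return [clf.predict(window_features(tokens, i)) for i in range(len(tokens))]
```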

7 Probabilistic Sequence Models Probabilistic sequence models allow integrating uncertainty over multiple, interdependent classifications and collectively determining the most likely global assignment. Two standard models: the Hidden Markov Model (HMM) and the Conditional Random Field (CRF).

8 Hidden Markov Model Probabilistic generative model for sequences. A finite state machine with probabilistic transitions and probabilistic generation of outputs from states. Assume an underlying set of states in which the model can be. Assume probabilistic transitions between states over time. Assume a probabilistic generation of tokens from states.
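As a concrete sketch of this parameterization, an HMM is fully specified by an initial distribution, a transition matrix, and an emission matrix; the state names, vocabulary, and numbers below are illustrative, not taken from the lecture:

```python
import numpy as np

states = ["Det", "Noun", "Verb"]      # hidden states (illustrative)
vocab  = ["the", "dog", "barked"]     # observable tokens (illustrative)

pi = np.array([0.6, 0.3, 0.1])        # pi[s]    = P(first state is s)
A  = np.array([[0.1, 0.8, 0.1],       # A[s, s'] = P(next state s' | current state s)
               [0.2, 0.2, 0.6],
               [0.5, 0.4, 0.1]])
B  = np.array([[0.9, 0.05, 0.05],     # B[s, o]  = P(emit token o | state s)
               [0.05, 0.9, 0.05],
               [0.05, 0.05, 0.9]])
```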

9 Sample HMM for POS [State-transition diagram: hidden states PropNoun, Noun, Det, Verb, and stop, with sample emissions (PropNoun: John, Mary, Alice, Jerry, Tom; Noun: cat, dog, car, pen, bed, apple; Det: a, the, that; Verb: bit, ate, saw, played, hit, gave) and transition probabilities including 0.95, 0.9, 0.8, 0.5, 0.25, 0.1, and 0.05.]

10–18 Sample HMM Generation [The same state diagram is stepped through one transition and emission at a time: the model moves PropNoun → Verb → Det → Noun and generates the sentence "John bit the apple" word by word.]
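A minimal sketch of the generation process these slides step through, reusing the `pi`, `A`, `B`, `states`, and `vocab` arrays from the earlier sketch; for brevity it samples a fixed number of tokens instead of modeling the diagram's explicit stop state:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(pi, A, B, states, vocab, length=4):
    """Sample a (state sequence, token sequence) pair from the HMM."""
    seq_states, seq_tokens = [], []
    s = rng.choice(len(states), p=pi)                              # draw the initial state
    for _ in range(length):
        seq_states.append(states[s])
        seq_tokens.append(vocab[rng.choice(len(vocab), p=B[s])])   # emit a token from state s
        s = rng.choice(len(states), p=A[s])                        # transition to the next state
    return seq_states, seq_tokens
```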

19 Three Useful HMM Tasks Observation likelihood: To classify sequences. Most likely state sequence: To tag each token in a sequence with a label. Maximum likelihood training: To train models to fit empirical training data.

20 Observation Likelihood Given a sequence of observations, O, and a model with a set of parameters, λ, what is the probability that this observation was generated by this model: P(O|λ)? Allows an HMM to be used as a language model: a formal probabilistic model of a language that assigns a probability to each string, saying how likely that string was to have been generated by the language. Useful for two tasks: Sequence Classification and Most Likely Sequence.

21 Sequence Classification Assume an HMM is available for each category (i.e., language). What is the most likely category for a given observation sequence, i.e., which category's HMM is most likely to have generated it? Used in speech recognition to find the most likely word model to have generated a given sound or phoneme sequence, e.g., deciding between the word models Austin and Boston for the phone sequence "ah s t e n" by asking whether P(O | Austin) > P(O | Boston).

22 Most Likely State Sequence Given an observation sequence, O, and a model, λ, what is the most likely state sequence, Q = Q_1, Q_2, …, Q_T, that generated this sequence from this model? Used for sequence labeling. Example: John gave the dog an apple.
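The standard dynamic-programming solution to this task is the Viterbi algorithm; a minimal sketch in log space, using the `pi`, `A`, `B` parameterization from above and a list `obs` of observation indices (this is the textbook algorithm, not code from the lecture):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence for observation indices `obs`, computed in log space."""
    N, T = len(pi), len(obs)
    logpi, logA, logB = np.log(pi), np.log(A), np.log(B)
    delta = np.full((T, N), -np.inf)   # delta[t, s] = best log-prob of a path ending in state s at time t
    back  = np.zeros((T, N), dtype=int)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA       # scores[s, s'] = delta[t-1, s] + log A[s, s']
        back[t]  = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):                   # follow back-pointers to recover the path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```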

23 Observation Likelihood: Efficient Solution Markov assumption: the probability of the current state depends only on the immediately previous state, not on any earlier history (via the transition probability distribution, A). Forward algorithm (the forward pass of Forward-Backward): uses dynamic programming to exploit this fact and compute the observation likelihood in O(N²T) time (N: number of states, T: length of the observation sequence).
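A minimal sketch of the forward recursion that computes P(O|λ) in O(N²T) with the same `pi`, `A`, `B` arrays (textbook algorithm, not code from the lecture; a real implementation would rescale or work in log space to avoid underflow):

```python
import numpy as np

def observation_likelihood(pi, A, B, obs):
    """P(O | lambda) via the forward recursion: alpha[s] = P(o_1..o_t, state_t = s)."""
    alpha = pi * B[:, obs[0]]              # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # O(N^2) per step: sum over previous states, then emit
    return alpha.sum()                     # sum over possible final states
```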

24 Maximum Likelihood Training Given an observation sequence, O, what set of parameters, λ, for a given model maximizes the probability that this data was generated from this model, P(O|λ)? Only requires an unannotated observation sequence (or set of sequences) generated from the model; in this sense, it is unsupervised.

25 Supervised HMM Training If training sequences are labeled (tagged) with the underlying state sequences that generated them, then the parameters λ = {A, B, π} can all be estimated directly from counts accumulated over the labeled sequences (with appropriate smoothing). Example training sequences: "John ate the apple", "A dog bit Mary", "Mary hit the dog", "John gave Mary the cat", …, each annotated with the states Det, Noun, PropNoun, and Verb.
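A minimal sketch of this count-based estimation, assuming the labeled data is a list of [(token, state), ...] sequences and using add-k smoothing as one possible smoothing choice (the data format and smoothing scheme are illustrative assumptions, not specified in the slides):

```python
from collections import Counter

def estimate_hmm(tagged_seqs, states, vocab, k=1.0):
    """Estimate pi, A, B by relative frequency over labeled sequences, with add-k smoothing."""
    start, trans, emit = Counter(), Counter(), Counter()
    for seq in tagged_seqs:
        start[seq[0][1]] += 1                          # count which state the sequence starts in
        for tok, s in seq:
            emit[(s, tok)] += 1                        # count emissions
        for (_, s1), (_, s2) in zip(seq, seq[1:]):
            trans[(s1, s2)] += 1                       # count transitions
    n = len(tagged_seqs)
    pi = {s: (start[s] + k) / (n + k * len(states)) for s in states}
    A = {s1: {s2: (trans[(s1, s2)] + k) /
                  (sum(trans[(s1, x)] for x in states) + k * len(states))
              for s2 in states} for s1 in states}
    B = {s: {w: (emit[(s, w)] + k) /
                (sum(emit[(s, x)] for x in vocab) + k * len(vocab))
             for w in vocab} for s in states}
    return pi, A, B
```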

26 Generative vs. Discriminative Sequence Labeling Models HMMs are generative models and are not directly designed to maximize the performance of sequence labeling. They model the joint distribution P(O, Q). Conditional Random Fields (CRFs) are specifically designed and trained to maximize the performance of sequence labeling. They model the conditional distribution P(Q | O).
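To make the contrast concrete, the HMM joint distribution factorizes according to the Markov and emission assumptions above (the standard textbook factorization, not a formula from the slides):

```latex
P(O, Q) \;=\; P(Q_1)\,P(O_1 \mid Q_1)\,\prod_{t=2}^{T} P(Q_t \mid Q_{t-1})\,P(O_t \mid Q_t)
```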

27 Definition of CRF
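The body of this slide is an image in the transcript; for reference, the standard general definition of a CRF over a graph with clique set C is (a reconstruction of the usual textbook definition, not necessarily the slide's exact notation):

```latex
P(Y \mid X) \;=\; \frac{1}{Z(X)} \prod_{c \in C} \psi_c(Y_c, X),
\qquad
Z(X) \;=\; \sum_{Y'} \prod_{c \in C} \psi_c(Y'_c, X)
```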

28 An Example of CRF

29 Sequence Labeling [Graphical models for sequence labeling over labels Y_1, Y_2, …, Y_T and observations X_1, X_2, …, X_T: an HMM (generative), with directed edges from each label Y_t to the next label Y_{t+1} and to its observation X_t, versus a linear-chain CRF (discriminative), with undirected edges between neighboring labels and between each label and the observations.]

30 Conditional Distribution for Linear Chain CRF A typical form of CRF for sequence labeling is
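The formula on this slide is an image in the transcript; the standard linear-chain form, with feature functions f_k and weights λ_k (not to be confused with the HMM parameter set λ above), is a reasonable reconstruction:

```latex
P(Y \mid X) \;=\; \frac{1}{Z(X)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(Y_{t-1}, Y_t, X, t) \Big),
\qquad
Z(X) \;=\; \sum_{Y'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(Y'_{t-1}, Y'_t, X, t) \Big)
```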

31 CRF Experimental Results Experimental results verify that CRFs have superior accuracy on various sequence labeling tasks: part-of-speech tagging, noun phrase chunking, named entity recognition, and others. However, CRFs are much slower to train and do not scale as well to large amounts of training data. Training for POS on the full Penn Treebank (~1M words) currently takes "over a week."

32 CRF Summary CRFs are a discriminative approach to sequence labeling, whereas HMMs are generative. Discriminative methods are usually more accurate since they are trained for a specific performance task. CRFs also easily allow adding additional token features without making additional independence assumptions. Training time is increased, since a complex optimization procedure is needed to fit the supervised training data.

