Hidden Markov Models IP notice: slides from Dan Jurafsky.


Hidden Markov Models IP notice: slides from Dan Jurafsky

Outline Markov Chains Hidden Markov Models Three Algorithms for HMMs The Forward Algorithm The Viterbi Algorithm The Baum-Welch (EM Algorithm) Applications: The Ice Cream Task Part of Speech Tagging

Definitions A weighted finite-state automaton (WFSA): an FSA with probabilities on the arcs; the probabilities on the arcs leaving any state must sum to one. A Markov chain (or observable Markov model): a special case of a weighted FSA in which the input sequence uniquely determines which states the automaton will go through. Markov chains can’t represent inherently ambiguous problems; they are useful for assigning probabilities to unambiguous sequences.

Weighted Finite-State Transducer An FST is an FSA whose state transitions are labeled with both input and output symbols. A weighted transducer puts weights on transitions in addition to the input and output symbols. Weights may encode probabilities, durations, penalties, ... Used in speech recognition. Tutorial at http://www.cs.nyu.edu/~mohri/pub/hbka.pdf

Markov chain for weather The probabilities on all arcs leaving a state must sum to 1. Example: a21 + a22 + a23 + a24 = 1

Markov chain for words

Markov chain = “First-order observable Markov Model” A set of states Q = q1, q2 … qN; the state at time t is qt. Transition probabilities: a set of probabilities A = a01, a02, …, an1, …, ann. Each aij represents the probability of transitioning from state i to state j. The set of these is the transition probability matrix A. Distinguished start and end states.

Markov chain = “First-order observable Markov Model” Markov Assumption: Current state only depends on previous state P(qi | q1 … qi-1) = P(qi | qi-1)

Another representation for the start state Instead of a distinguished start state, use a special initial probability vector π: an initial probability distribution over states. Constraint: the πi must sum to 1 (Σi πi = 1).

The weather model using π

The weather model: specific example

Markov chain for weather What is the probability of 4 consecutive warm days? The sequence is warm-warm-warm-warm, i.e., the state sequence is 3-3-3-3. P(3, 3, 3, 3) = π3 · a33 · a33 · a33 = 0.2 × (0.6)^3 = 0.0432
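A minimal Python sketch of this computation. Only π3 = 0.2 and a33 = 0.6 come from the slide; the other entries below are placeholders, since the weather figure is not reproduced here.

```python
# Sketch: probability of a state sequence under a Markov chain.
# Only init_prob[3] = 0.2 and trans[3][3] = 0.6 come from the slide's weather
# model; the remaining numbers are placeholders (the figure is not shown here).

init_prob = {1: 0.5, 2: 0.3, 3: 0.2}          # initial state distribution
trans = {3: {1: 0.1, 2: 0.3, 3: 0.6}}         # transitions out of state 3

def sequence_prob(states, init_prob, trans):
    """P(q1, q2, ..., qT) = init_prob[q1] * product of trans[q_{t-1}][q_t]."""
    p = init_prob[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= trans[prev][cur]
    return p

print(sequence_prob([3, 3, 3, 3], init_prob, trans))   # 0.2 * 0.6**3 = 0.0432
```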

How about? Hot hot hot hot Cold hot cold hot What does the difference in these probabilities tell you about the real-world weather information encoded in the figure? Hot-hot-hot-hot = 0.5 × 0.5 × 0.5 × 0.5 = 0.0625 Cold-hot-cold-hot = 0.5 × 0.2 × 0.2 × 0.2 = 0.004

So far the states are visible. To model more complex problems we need to use hidden Markov models, where the states are hidden and we only make observations.

HMM for Ice Cream You are a climatologist in the year 2799, studying global warming. You can’t find any records of the weather in Baltimore, MD for the summer of 2008, but you find Jason Eisner’s diary, which lists how many ice creams Jason ate every day that summer. Our job: figure out how hot it was.

Hidden Markov Model For Markov chains, the output symbols are the same as the states: see hot weather, we’re in state hot. But in named-entity or part-of-speech tagging (and speech recognition and other tasks), the output symbols are words, while the hidden states are something else: part-of-speech tags or named-entity tags. So we need an extension! A Hidden Markov Model is an extension of a Markov chain in which the output symbols are not the same as the states. This means we don’t know which state we are in.
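As a concrete illustration, an HMM can be written down as four pieces: the hidden states, an initial distribution π, a transition matrix A, and an emission matrix B. The numbers below are placeholders in the spirit of the ice-cream example, not values read from the figure; the forward and Viterbi sketches later in these notes reuse these names.

```python
# Sketch of an HMM as explicit parameters (states, pi, A, B).
# All numbers are illustrative placeholders, not taken from the slide's figure.

states = ["HOT", "COLD"]

pi = {"HOT": 0.8, "COLD": 0.2}                   # initial distribution

A = {"HOT":  {"HOT": 0.6, "COLD": 0.4},          # transition probabilities
     "COLD": {"HOT": 0.5, "COLD": 0.5}}

B = {"HOT":  {1: 0.2, 2: 0.4, 3: 0.4},           # emission probabilities:
     "COLD": {1: 0.5, 2: 0.4, 3: 0.1}}           # P(num ice creams | weather)
```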

Hidden Markov Models

Assumptions Markov assumption: the current state depends only on the previous state; this represents the memory of the model. P(qi | q1 … qi-1) = P(qi | qi-1) In other words, the state transition depends only on the origin and destination states. Output-independence assumption: the output observation at time t depends only on the current state; it is independent of previous observations and states. In other words, each observation depends only on the state that generated it, not on neighbouring observations.

The Jason Eisner task (cont.) Given a sequence of observations O, each observation an integer = number of ice creams eaten Figure out correct hidden sequence Q of weather states (H or C) which caused Jason to eat the ice cream In other words: Given: Ice Cream Observation Sequence: 1,2,3,2,2,2,3… Produce: Weather Sequence: H,C,H,H,H,C…

An HMM for the Jason Eisner task Relating numbers of ice creams eaten by Jason (the observations) to the weather (the hidden variables)

Different types of HMM structure Left-right (or Bakis) model: no transitions are allowed to states whose indices are lower than the current state: aij = 0 for all j < i. Left-right models are best suited to modelling signals whose properties change over time, such as speech. When using left-right models, additional constraints are commonly placed, such as preventing large jumps: aij = 0 for all j > i + Δ. Ergodic model: a fully connected model in which each state can be reached in one step from every other state; this is the most general type of HMM. (Bakis = left-to-right; ergodic = fully connected.)
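A small sketch of how the Bakis constraints shape a transition matrix: start from a fully connected matrix and zero out the forbidden entries. The matrix size and Δ below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: apply left-right (Bakis) constraints to a transition matrix.
# Zero a_ij for j < i (no backward transitions) and j > i + delta (no large
# jumps), then renormalize each row so it still sums to 1.

def bakis_mask(A, delta=2):
    N = A.shape[0]
    A = A.copy()
    for i in range(N):
        for j in range(N):
            if j < i or j > i + delta:
                A[i, j] = 0.0
    return A / A.sum(axis=1, keepdims=True)

A = np.full((5, 5), 1 / 5)     # fully connected (ergodic) starting point
print(bakis_mask(A, delta=2))
```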

The Three Basic Problems for HMMs Three fundamental problems by Jack Ferguson at IDA in the 1960s Given a specific HMM, determine likelihood of observation sequence. Given an observation sequence and an HMM, discover the most probable hidden state sequence Given only an observation sequence, learn the HMM parameters (A, B matrix)

The Three Basic Problems for HMMs: more formally Problem 1 (Evaluation): Given the observation sequence O = (o1 o2 … oT) and an HMM model λ = (A, B), how do we efficiently compute P(O | λ), the probability of the observation sequence given the model? Problem 2 (Decoding): Given the observation sequence O = (o1 o2 … oT) and an HMM model λ = (A, B), how do we choose a corresponding state sequence Q = (q1 q2 … qT) that is optimal in some sense (i.e., best explains the observations)? Problem 3 (Learning): How do we adjust the model parameters λ = (A, B) to maximize P(O | λ)? Restated: Evaluation problem: given the HMM M = (A, B, π) and the observation sequence O = o1 o2 … oK, calculate the probability that model M has generated sequence O. Decoding problem: given the HMM M = (A, B, π) and the observation sequence O = o1 o2 … oK, calculate the most likely sequence of hidden states si that produced this observation sequence O. Learning problem: given some training observation sequences O = o1 o2 … oK and the general structure of the HMM (numbers of hidden and visible states), adjust M = (A, B, π) to maximize the probability. Here O = o1 … oK denotes a sequence of observations with ok ∈ {v1, …, vM}.

Problem 1: computing the observation likelihood Given the following HMM: How likely is the sequence 3 1 3?

How to compute likelihood For a Markov chain, we just follow the states 3 1 3 and multiply the probabilities But for an HMM, we don’t know what the states are! So let’s start with a simpler situation. Computing the observation likelihood for a given hidden state sequence Suppose we knew the weather and wanted to predict how much ice cream Jason would eat. i.e. P( 3 1 3 | H H C)

Computing likelihood of 3 1 3 given hidden state sequence

Computing the joint probability of the observation and state sequence
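A sketch of this joint computation, reusing the placeholder parameters (states, pi, A, B) from the earlier ice-cream sketch; the numbers are illustrative, not the figure's.

```python
# Sketch: joint probability of an observation sequence and a given hidden
# state sequence,
# P(O, Q) = pi[q1] * B[q1][o1] * prod_t A[q_{t-1}][q_t] * B[q_t][o_t].
# Reuses the placeholder pi, A, B defined in the earlier sketch.

def joint_prob(obs, hidden, pi, A, B):
    p = pi[hidden[0]] * B[hidden[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= A[hidden[t - 1]][hidden[t]] * B[hidden[t]][obs[t]]
    return p

print(joint_prob([3, 1, 3], ["HOT", "HOT", "COLD"], pi, A, B))
```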

Computing total likelihood of 3 1 3 We would need to sum over Hot hot cold, Hot hot hot, Hot cold hot, … How many possible hidden state sequences are there for this sequence? How about in general, for an HMM with N hidden states and a sequence of T observations? N^T. So we can’t just do a separate computation for each hidden state sequence.

Instead: the Forward algorithm A kind of dynamic programming algorithm, just like minimum edit distance: it uses a table to store intermediate values. Idea: compute the likelihood of the observation sequence by summing over all possible hidden state sequences, but do so efficiently by folding all the sequences into a single trellis. In mathematics, computer science, economics, and bioinformatics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting overlapping subproblems and optimal substructure. When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search). The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memoized"; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.
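A toy illustration of the memoization idea (not HMM-specific): overlapping subproblems are solved once and cached instead of being recomputed exponentially many times.

```python
from functools import lru_cache

# Toy example of memoization: each fib(k) is computed only once and then
# looked up, so the exponential blow-up of naive recursion disappears.

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))   # fast, because every subproblem is cached
```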

The forward algorithm The goal of the forward algorithm is to compute P(o1, o2 … oT, qT = qF | λ). We’ll do this by recursion.

The forward algorithm Each cell of the forward algorithm trellis, αt(j), represents the probability of being in state j after seeing the first t observations, given the automaton. Each cell thus expresses the following probability: αt(j) = P(o1, o2 … ot, qt = j | λ)

The Forward Recursion αt(j) = Σi αt-1(i) · aij · bj(ot): sum over all states i that could lead into state j at time t.

The Forward Trellis

We update each cell

The Forward Algorithm
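A compact Python sketch of the forward algorithm, reusing the placeholder parameters (states, pi, A, B) from the earlier ice-cream sketch. For simplicity it omits a distinguished final state and just sums the last column of the trellis.

```python
# Sketch of the forward algorithm: alpha[t][j] = P(o1..ot, q_t = j | lambda).
# Reuses the placeholder states, pi, A, B from the earlier ice-cream sketch.

def forward(obs, states, pi, A, B):
    alpha = [{s: pi[s] * B[s][obs[0]] for s in states}]           # initialization
    for t in range(1, len(obs)):
        alpha.append({
            j: sum(alpha[t - 1][i] * A[i][j] for i in states) * B[j][obs[t]]
            for j in states                                        # recursion step
        })
    return sum(alpha[-1][s] for s in states)                       # termination

print(forward([3, 1, 3], states, pi, A, B))   # P(3 1 3 | lambda)
```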

Decoding Given an observation sequence (e.g. 3 1 3) and an HMM, the task of the decoder is to find the best hidden state sequence. More formally: given the observation sequence O = (o1 o2 … oT) and an HMM model λ = (A, B), how do we choose a corresponding state sequence Q = (q1 q2 … qT) that is optimal in some sense (i.e., best explains the observations)?

Decoding One possibility: for each hidden state sequence Q (HHH, HHC, HCH, …), compute P(O|Q) and pick the highest one. Why not? Because there are N^T such sequences. Instead: the Viterbi algorithm. It is again a dynamic programming algorithm, and uses a trellis similar to the Forward algorithm’s.

Viterbi intuition We want to compute the joint probability of the observation sequence together with the best state sequence

Viterbi Recursion

The Viterbi trellis

Viterbi intuition Process the observation sequence left to right, filling out the trellis. Each cell: vt(j) = maxi vt-1(i) · aij · bj(ot), i.e., the probability of the best path into state j at time t.

Viterbi Algorithm
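A Python sketch of Viterbi with backpointers, again reusing the placeholder ice-cream parameters defined earlier.

```python
# Sketch of the Viterbi algorithm with backtrace.
# v[t][j] = probability of the best path ending in state j after t observations.
# Reuses the placeholder states, pi, A, B from the earlier ice-cream sketch.

def viterbi(obs, states, pi, A, B):
    v = [{s: pi[s] * B[s][obs[0]] for s in states}]
    backptr = [{}]
    for t in range(1, len(obs)):
        v.append({})
        backptr.append({})
        for j in states:
            best_i = max(states, key=lambda i: v[t - 1][i] * A[i][j])
            v[t][j] = v[t - 1][best_i] * A[best_i][j] * B[j][obs[t]]
            backptr[t][j] = best_i
    # termination: best final state, then follow backpointers to recover the path
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(backptr[t][path[-1]])
    return list(reversed(path)), v[-1][last]   # best state sequence, its probability

print(viterbi([3, 1, 3], states, pi, A, B))
```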

Viterbi backtrace

Hidden Markov Models for Part of Speech Tagging

Part of speech tagging 8 (ish) traditional parts of speech: noun, verb, adjective, preposition, adverb, article, interjection, pronoun, conjunction, etc. This idea has been around for over 2000 years (Dionysius Thrax of Alexandria, c. 100 B.C.). Called: parts of speech, lexical categories, word classes, morphological classes, lexical tags, POS. We’ll use POS most frequently. I’ll assume that you all know what these are.

POS examples
N    noun         chair, bandwidth, pacing
V    verb         study, debate, munch
ADJ  adjective    purple, tall, ridiculous
ADV  adverb       unfortunately, slowly
P    preposition  of, by, to
PRO  pronoun      I, me, mine
DET  determiner   the, a, that, those

POS Tagging example
WORD   tag
the    DET
koala  N
put    V
the    DET
keys   N
on     P
table  N

POS Tagging Words often have more than one POS: back
The back door = JJ (adjective)
On my back = NN (noun)
Win the voters back = RB (adverb)
Promised to back the bill = VB (verb)
The POS tagging problem is to determine the POS tag for a particular instance of a word. (These examples are from Dekang Lin.)
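To see this ambiguity in practice, one can run an off-the-shelf tagger over the "back" examples. The sketch below uses NLTK's default tagger, which is a perceptron tagger rather than the HMM built in these slides, so its tags may differ slightly from the slide's; it assumes NLTK is installed and its tokenizer/tagger data has been downloaded (e.g. nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')).

```python
import nltk

# Illustration only: NLTK's default tagger, not the HMM tagger of these slides.
for sent in ["The back door", "On my back", "Promised to back the bill"]:
    print(nltk.pos_tag(nltk.word_tokenize(sent)))
```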

POS tagging as a sequence classification task We are given a sentence (an “observation” or “sequence of observations”) Secretariat is expected to race tomorrow She promised to back the bill What is the best sequence of tags which corresponds to this sequence of observations? Probabilistic view: Consider all possible sequences of tags Out of this universe of sequences, choose the tag sequence which is most probable given the observation sequence of n words w1…wn.

Getting to HMM We want, out of all sequences of n tags t1…tn, the single tag sequence such that P(t1…tn | w1…wn) is highest: t̂1…t̂n = argmax over t1…tn of P(t1…tn | w1…wn). The hat (^) means “our estimate of the best one”; argmaxx f(x) means “the x such that f(x) is maximized”.

Getting to HMM This equation is guaranteed to give us the best tag sequence But how to make it operational? How to compute this value? Intuition of Bayesian classification: Use Bayes rule to transform into a set of other probabilities that are easier to compute

Using Bayes Rule By Bayes’ rule, P(t1…tn | w1…wn) = P(w1…wn | t1…tn) P(t1…tn) / P(w1…wn). Since the word sequence is the same for every candidate tag sequence, we can drop the denominator and maximize P(w1…wn | t1…tn) P(t1…tn).

Likelihood and prior With the output-independence assumption, the likelihood simplifies to P(w1…wn | t1…tn) ≈ ∏i P(wi | ti). With the bigram (Markov) assumption, the prior simplifies to P(t1…tn) ≈ ∏i P(ti | ti-1). So the best tag sequence is the one maximizing ∏i P(wi | ti) P(ti | ti-1).

Two kinds of probabilities (1) Tag transition probabilities P(ti | ti-1) Determiners are likely to precede adjectives and nouns: That/DT flight/NN, The/DT yellow/JJ hat/NN. So we expect P(NN|DT) and P(JJ|DT) to be high, but P(DT|JJ) to be low. Compute P(NN|DT) by counting in a labeled corpus: P(NN|DT) = C(DT, NN) / C(DT)

Two kinds of probabilities (2) Word likelihood probabilities P(wi | ti) VBZ (3sg pres verb) is likely to be “is”. Compute P(is|VBZ) by counting in a labeled corpus: P(is|VBZ) = C(VBZ, is) / C(VBZ)
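A sketch of these maximum-likelihood estimates by counting. The tiny tagged corpus below is made up for illustration; a real estimate would count over a corpus such as the Penn Treebank.

```python
from collections import Counter

# Sketch: MLE estimates of transition and emission probabilities by counting
# in a tiny, made-up tagged corpus ("<s>" marks sentence boundaries).

tagged = [("<s>", "<s>"), ("That", "DT"), ("flight", "NN"),
          ("<s>", "<s>"), ("The", "DT"), ("yellow", "JJ"), ("hat", "NN")]

tag_bigrams = Counter((t1, t2) for (_, t1), (_, t2) in zip(tagged, tagged[1:]))
tag_counts = Counter(tag for _, tag in tagged)
word_tag = Counter((w.lower(), t) for w, t in tagged)

def p_trans(t2, t1):             # P(t2 | t1) = C(t1, t2) / C(t1)
    return tag_bigrams[(t1, t2)] / tag_counts[t1]

def p_emit(w, t):                # P(w | t) = C(t, w) / C(t)
    return word_tag[(w.lower(), t)] / tag_counts[t]

print(p_trans("NN", "DT"))       # 0.5 in this toy corpus
print(p_emit("flight", "NN"))    # 0.5 in this toy corpus
```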

POS tagging: likelihood and prior

An Example: the verb “race” Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NR People/NNS continue/VB to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN How do we pick the right tag?

Disambiguating “race”

P(NN|TO) = .00047
P(VB|TO) = .83
P(race|NN) = .00057
P(race|VB) = .00012
P(NR|VB) = .0027
P(NR|NN) = .0012
P(VB|TO) × P(NR|VB) × P(race|VB) = .00000027
P(NN|TO) × P(NR|NN) × P(race|NN) = .00000000032
So we (correctly) choose the verb reading.
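For completeness, the two candidate scores can be computed directly from the probabilities given on the slide:

```python
# Direct computation of the two candidate scores from the slide's numbers.
p_vb = 0.83 * 0.0027 * 0.00012        # P(VB|TO) * P(NR|VB) * P(race|VB)
p_nn = 0.00047 * 0.0012 * 0.00057     # P(NN|TO) * P(NR|NN) * P(race|NN)
print(p_vb, p_nn)                     # ~2.7e-7 vs ~3.2e-10: the verb reading wins
```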

Transitions between the hidden states of HMM, showing A probs

B observation likelihoods for POS HMM

The A matrix for the POS HMM

The B matrix for the POS HMM

Viterbi intuition: we are looking for the best ‘path’ Slide from Dekang Lin

Viterbi example