Hidden Markov Models Bonnie Dorr Christof Monz CMSC 723: Introduction to Computational Linguistics Lecture 5 October 6, 2004

Hidden Markov Model (HMM)
HMMs allow you to estimate the probabilities of unobserved (hidden) events: given the observed surface data, which underlying parameters generated it?
E.g., in speech recognition, the observed data is the acoustic signal and the words are the hidden parameters.

HMMs and their Usage
HMMs are very common in Computational Linguistics:
Speech recognition (observed: acoustic signal, hidden: words)
Handwriting recognition (observed: image, hidden: words)
Part-of-speech tagging (observed: words, hidden: part-of-speech tags)
Machine translation (observed: foreign words, hidden: words in the target language)

Noisy Channel Model
In speech recognition you observe an acoustic signal (A = a_1,…,a_n) and you want to determine the most likely sequence of words (W = w_1,…,w_n): P(W | A)
Problem: A and W are too specific for reliable counts on observed data, and are very unlikely to occur in unseen data.

Noisy Channel Model
Assume that the acoustic signal (A) is already segmented with respect to word boundaries. P(W | A) could then be computed word by word, as a product of per-word probabilities P(w_i | a_i).
Problem: Finding the most likely word corresponding to an acoustic representation depends on the context.
E.g., /'pre-z&ns/ could mean "presents" or "presence" depending on the context.

Noisy Channel Model
Given a candidate sequence W we need to compute P(W) and combine it with P(W | A).
Applying Bayes' rule: P(W | A) = P(A | W) · P(W) / P(A)
The denominator P(A) can be dropped, because it is constant for all W.
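
As a toy illustration of dropping the denominator (all numbers below are invented, not from the lecture), candidate words for the /'pre-z&ns/ example can be compared by the product P(A | W) · P(W) alone:

```python
# Toy noisy-channel comparison; likelihoods and priors are invented for illustration.
candidates = {
    "presents": {"likelihood": 0.30, "prior": 0.010},   # P(A | W), P(W)
    "presence": {"likelihood": 0.25, "prior": 0.015},
}

def score(word):
    # P(A) is constant across candidates, so it can be ignored in the comparison.
    return candidates[word]["likelihood"] * candidates[word]["prior"]

best = max(candidates, key=score)
print(best, score(best))
```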

Noisy Channel in a Picture

Decoding
The decoder combines evidence from two sources:
The likelihood P(A | W), which can be approximated as a product of per-word probabilities: P(A | W) ≈ ∏_i P(a_i | w_i)
The prior P(W), which can be approximated with a bigram model: P(W) ≈ ∏_i P(w_i | w_{i-1})

Search Space
Given a word-segmented acoustic sequence, list all candidates and compute the most likely path:

  /'bot/     /ik-'spen-siv/   /'pre-z&ns/
  boat       excessive        presidents
  bald       expensive        presence
  bold       expressive       presents
  bought     inactive         press

Markov Assumption
The Markov assumption states that the probability of the occurrence of word w_i at time t depends only on the occurrence of word w_{i-1} at time t-1.
Chain rule: P(w_1,…,w_n) = ∏_i P(w_i | w_1,…,w_{i-1})
Markov assumption: P(w_1,…,w_n) ≈ ∏_i P(w_i | w_{i-1})
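
A small illustration of the bigram approximation (the probabilities are invented for the example): the joint probability of a word sequence reduces to a product of P(w_i | w_{i-1}) terms:

```python
# Toy bigram model; probabilities are invented for illustration only.
bigram_prob = {
    ("<s>", "the"): 0.20,
    ("the", "president"): 0.05,
    ("president", "speaks"): 0.10,
}

def sentence_prob(words):
    """P(w_1..w_n) under the Markov (bigram) assumption."""
    prob = 1.0
    prev = "<s>"  # sentence-start symbol
    for w in words:
        prob *= bigram_prob.get((prev, w), 1e-6)  # tiny floor for unseen bigrams
        prev = w
    return prob

print(sentence_prob(["the", "president", "speaks"]))  # ~0.001
```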

The Trellis

Parameters of an HMM
States: a set of states S = s_1,…,s_n
Transition probabilities: A = a_{1,1}, a_{1,2},…,a_{n,n}. Each a_{i,j} represents the probability of transitioning from state s_i to state s_j.
Emission probabilities: a set B of functions of the form b_i(o_t), which is the probability of observation o_t being emitted by s_i.
Initial state distribution: π_i is the probability that s_i is a start state.
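
As a concrete sketch, the parameters λ = (A, B, π) can be stored as NumPy arrays. The two-state, three-symbol model below uses invented numbers purely for illustration and is reused in the later sketches:

```python
import numpy as np

# Hypothetical 2-state, 3-symbol HMM; all numbers are invented for illustration.
states = ["Rainy", "Sunny"]          # S = s_1, ..., s_n
vocab = ["walk", "shop", "clean"]    # possible observation symbols

A = np.array([[0.7, 0.3],            # A[i, j] = a_{i,j} = P(s_j at t+1 | s_i at t)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],       # B[i, k] = b_i(v_k) = P(observing v_k | s_i)
              [0.6, 0.3, 0.1]])
pi = np.array([0.6, 0.4])            # pi[i] = P(start in s_i)

# Each row of A and B, and the vector pi, is a probability distribution.
assert np.allclose(A.sum(axis=1), 1) and np.allclose(B.sum(axis=1), 1) and np.isclose(pi.sum(), 1)
```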

The Three Basic HMM Problems
Problem 1 (Evaluation): Given the observation sequence O = o_1,…,o_T and an HMM model λ = (A, B, π), how do we compute the probability of O given the model?
Problem 2 (Decoding): Given the observation sequence O = o_1,…,o_T and an HMM model λ = (A, B, π), how do we find the state sequence that best explains the observations?

The Three Basic HMM Problems
Problem 3 (Learning): How do we adjust the model parameters λ = (A, B, π) to maximize P(O | λ)?

Problem 1: Probability of an Observation Sequence
What is P(O | λ)?
The probability of an observation sequence is the sum of the probabilities of all possible state sequences in the HMM.
Naïve computation is very expensive: given T observations and N states, there are N^T possible state sequences. Even small HMMs, e.g. T=10 and N=10, contain 10 billion different paths.
The solution to this and to Problem 2 is to use dynamic programming.

Forward Probabilities
What is the probability that, given an HMM λ, at time t the state is i and the partial observation o_1 … o_t has been generated?
α_t(i) = P(o_1 … o_t, q_t = s_i | λ)

Forward Probabilities

Forward Algorithm
Initialization: α_1(i) = π_i b_i(o_1), for 1 ≤ i ≤ N
Induction: α_t(j) = [ Σ_{i=1..N} α_{t-1}(i) a_{i,j} ] b_j(o_t), for 2 ≤ t ≤ T, 1 ≤ j ≤ N
Termination: P(O | λ) = Σ_{i=1..N} α_T(i)
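
A minimal sketch of the forward algorithm (not the lecture's own code), using NumPy arrays A, B, pi laid out as in the parameter sketch above; the toy numbers are invented:

```python
import numpy as np

def forward(obs, A, B, pi):
    """Return the forward trellis alpha, where alpha[t, i] = P(o_1..o_t, q_t = s_i | model)."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # initialization
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]  # induction: sum over incoming states
    return alpha

# Illustrative toy model (invented numbers, same layout as the earlier sketch).
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 2]                                       # indices into the observation vocabulary

alpha = forward(obs, A, B, pi)
print("P(O | model) =", alpha[-1].sum())              # termination: sum over the final alphas
```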

Forward Algorithm Complexity
The naïve approach to solving Problem 1 takes on the order of 2T·N^T computations.
The forward algorithm takes on the order of N^2·T computations.

Backward Probabilities
Analogous to the forward probability, just in the other direction.
What is the probability that, given an HMM λ and given that the state at time t is i, the partial observation o_{t+1} … o_T is generated?
β_t(i) = P(o_{t+1} … o_T | q_t = s_i, λ)

Backward Probabilities

Backward Algorithm
Initialization: β_T(i) = 1, for 1 ≤ i ≤ N
Induction: β_t(i) = Σ_{j=1..N} a_{i,j} b_j(o_{t+1}) β_{t+1}(j), for t = T-1,…,1, 1 ≤ i ≤ N
Termination: P(O | λ) = Σ_{i=1..N} π_i b_i(o_1) β_1(i)
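
Analogously, a minimal sketch of the backward recursion under the same array conventions (an illustration, not the lecture's implementation):

```python
import numpy as np

def backward(obs, A, B):
    """Return the backward trellis beta, where beta[t, i] = P(o_{t+1}..o_T | q_t = s_i, model)."""
    N, T = A.shape[0], len(obs)
    beta = np.zeros((T, N))
    beta[T - 1] = 1.0                                    # initialization
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])   # induction: sum over outgoing states
    return beta

# Reusing the invented toy model from the forward sketch.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
pi = np.array([0.6, 0.4])
obs = [0, 1, 2]

beta = backward(obs, A, B)
print("P(O | model) =", (pi * B[:, obs[0]] * beta[0]).sum())  # termination; matches the forward result
```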

Problem 2: Decoding
The solution to Problem 1 (Evaluation) gives us the sum over all paths through an HMM efficiently.
For Problem 2, we want to find the single path with the highest probability.
We want to find the state sequence Q = q_1 … q_T such that Q* = argmax_Q P(Q | O, λ).

Viterbi Algorithm
Similar to computing the forward probabilities, but instead of summing over transitions from incoming states, we compute the maximum.
Forward: α_t(j) = [ Σ_{i=1..N} α_{t-1}(i) a_{i,j} ] b_j(o_t)
Viterbi recursion: δ_t(j) = [ max_{i=1..N} δ_{t-1}(i) a_{i,j} ] b_j(o_t)

Viterbi Algorithm
Initialization: δ_1(i) = π_i b_i(o_1); ψ_1(i) = 0
Induction: δ_t(j) = [ max_{i=1..N} δ_{t-1}(i) a_{i,j} ] b_j(o_t); ψ_t(j) = argmax_{i=1..N} δ_{t-1}(i) a_{i,j}
Termination: P* = max_{i=1..N} δ_T(i); q*_T = argmax_{i=1..N} δ_T(i)
Read out path: q*_t = ψ_{t+1}(q*_{t+1}), for t = T-1,…,1
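
A minimal Viterbi sketch under the same array conventions (again an illustration, not the lecture's implementation): delta plays the role of the forward probability with max in place of sum, and psi records backpointers for reading out the path:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Return the most likely state sequence and its probability."""
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))           # delta[t, i]: best score of any path ending in state i at time t
    psi = np.zeros((T, N), dtype=int)  # psi[t, i]: best predecessor of state i at time t
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # scores[i, j] = delta[t-1, i] * a_{i,j}
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Termination and path read-out (backtracking through psi).
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return list(reversed(path)), delta[-1].max()

# Toy model reused from the earlier sketches (invented numbers).
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
pi = np.array([0.6, 0.4])
print(viterbi([0, 1, 2], A, B, pi))
```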

Problem 3: Learning
Up to now we've assumed that we know the underlying model λ = (A, B, π).
Often these parameters are estimated on annotated training data, which has two drawbacks:
Annotation is difficult and/or expensive.
Training data is different from the current data.
We want to maximize the parameters with respect to the current data, i.e., we're looking for a model λ' such that λ' = argmax_λ P(O | λ).

Problem 3: Learning
Unfortunately, there is no known way to analytically find a global maximum, i.e., a model λ' such that λ' = argmax_λ P(O | λ).
But it is possible to find a local maximum: given an initial model λ, we can always find a model λ' such that P(O | λ') ≥ P(O | λ).

Parameter Re-estimation
Use the forward-backward (or Baum-Welch) algorithm, which is a hill-climbing algorithm.
Starting from an initial parameter instantiation, the forward-backward algorithm iteratively re-estimates the parameters, improving the probability that the given observations are generated by the new parameters.

Parameter Re-estimation
Three parameters need to be re-estimated:
Initial state distribution: π_i
Transition probabilities: a_{i,j}
Emission probabilities: b_i(o_t)

Re-estimating Transition Probabilities What’s the probability of being in state s i at time t and going to state s j, given the current model and parameters?

Re-estimating Transition Probabilities
ξ_t(i,j) = α_t(i) a_{i,j} b_j(o_{t+1}) β_{t+1}(j) / P(O | λ)

The intuition behind the re-estimation equation for transition probabilities is
a'_{i,j} = expected number of transitions from state s_i to state s_j / expected number of transitions from state s_i
Formally: a'_{i,j} = Σ_{t=1..T-1} ξ_t(i,j) / Σ_{t=1..T-1} Σ_{j'=1..N} ξ_t(i,j')

Re-estimating Transition Probabilities
Defining γ_t(i) = Σ_{j=1..N} ξ_t(i,j) as the probability of being in state s_i at time t, given the complete observation O,
we can say: a'_{i,j} = Σ_{t=1..T-1} ξ_t(i,j) / Σ_{t=1..T-1} γ_t(i)

Review of Probabilities
Forward probability α_t(i): the probability of being in state s_i, given the partial observation o_1,…,o_t
Backward probability β_t(i): the probability of being in state s_i, given the partial observation o_{t+1},…,o_T
Transition probability ξ_t(i,j): the probability of going from state s_i to state s_j, given the complete observation o_1,…,o_T
State probability γ_t(i): the probability of being in state s_i, given the complete observation o_1,…,o_T

Re-estimating Initial State Probabilities
Initial state distribution: π_i is the probability that s_i is a start state.
Re-estimation is easy: π'_i = expected frequency of being in state s_i at time 1.
Formally: π'_i = γ_1(i)

Re-estimation of Emission Probabilities
Emission probabilities are re-estimated as
b'_i(k) = expected number of times in state s_i observing symbol v_k / expected number of times in state s_i
Formally: b'_i(k) = Σ_{t=1..T} δ(o_t, v_k) γ_t(i) / Σ_{t=1..T} γ_t(i),
where δ(o_t, v_k) = 1 if o_t = v_k and 0 otherwise.
Note that δ here is the Kronecker delta function and is not related to the δ in the discussion of the Viterbi algorithm!

The Updated Model
Coming from λ = (A, B, π) we get to λ' = (A', B', π') by the following update rules:
a'_{i,j} = Σ_{t=1..T-1} ξ_t(i,j) / Σ_{t=1..T-1} γ_t(i)
b'_i(k) = Σ_{t=1..T} δ(o_t, v_k) γ_t(i) / Σ_{t=1..T} γ_t(i)
π'_i = γ_1(i)
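
A minimal sketch of one Baum-Welch re-estimation step under the same conventions (illustrative only; the function name and layout are my own, and a practical implementation would use scaling or log-space arithmetic to avoid underflow):

```python
import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One forward-backward re-estimation step; returns updated (A, B, pi)."""
    N, T = A.shape[0], len(obs)
    # Forward and backward trellises (same recursions as in the earlier sketches).
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    prob_O = alpha[-1].sum()                 # P(O | current model)

    # xi[t, i, j] = P(q_t = s_i, q_{t+1} = s_j | O, model)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / prob_O
    # gamma[t, i] = P(q_t = s_i | O, model)
    gamma = alpha * beta / prob_O

    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        mask = (np.array(obs) == k)          # Kronecker delta over the observations
        new_B[:, k] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi
```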

Expectation Maximization
The forward-backward algorithm is an instance of the more general EM algorithm.
The E step: compute the forward and backward probabilities for a given model.
The M step: re-estimate the model parameters.
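
Putting the pieces together, training alternates E and M steps until P(O | λ) stops improving. This sketch assumes the hypothetical forward() and baum_welch_step() functions from the earlier sketches are in scope:

```python
import numpy as np

# Assumes forward() and baum_welch_step() from the sketches above are defined.
def train(obs, A, B, pi, max_iters=50, tol=1e-6):
    prev_ll = -np.inf
    for _ in range(max_iters):
        A, B, pi = baum_welch_step(obs, A, B, pi)       # E and M step in one call
        ll = np.log(forward(obs, A, B, pi)[-1].sum())   # log P(O | new model)
        if ll - prev_ll < tol:                          # local maximum reached
            break
        prev_ll = ll
    return A, B, pi
```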