Data-Intensive Computing with MapReduce. Jimmy Lin, University of Maryland. Thursday, March 14, 2013. Session 8: Sequence Labeling. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Source: Wikipedia (The Scream)

Today’s Agenda: sequence labeling, the midterm, and final projects.

Sequences Everywhere! Examples of sequences: waveforms in utterances, letters in words, words in sentences, bases in DNA, amino acids in proteins, actions in video, closing prices in the stock market. All share the property of having structural dependencies.

Sequence Labeling: given a sequence of observations, assign a label to each observation. Observations can be discrete or continuous, scalar or vector; assume the labels are drawn from a discrete, finite set. Sample applications: speech recognition, POS tagging, handwriting recognition, video analysis, protein secondary structure detection.

Approaches: treat it as a classification problem? That means making decisions about each observation independently. Can we do better? As before, there is a lot of background material to cover before we get to MapReduce!

MapReduce Implementation: Mapper

MapReduce Implementation: Reducer

Finite State Machines. Q: a finite set of N states, Q = {q0, q1, q2, q3, …}. The start state: q0. The set of final states: qF. Σ: a finite input alphabet of symbols. δ(q, i): the transition function; given state q and input symbol i, transition to a new state q'.

(Diagram: an FSM has a finite number of states, transitions between them, an input alphabet, a start state, and one or more final states.)

What can we do with FSMs? Generate valid sequences, or accept valid sequences.
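As a concrete illustration (a minimal sketch, not from the slides; the states, alphabet, and transition table are invented), here is how acceptance by a deterministic FSM might look in Python:

```python
# Minimal FSM sketch: a transition table, a start state, and a set of final states.
# All state names and transitions below are illustrative examples.
TRANSITIONS = {
    ("q0", "a"): "q1",
    ("q1", "b"): "q2",
    ("q2", "c"): "q2",
}
START = "q0"
FINAL = {"q2"}

def accepts(sequence):
    """Return True if the FSM ends in a final state after consuming the whole sequence."""
    state = START
    for symbol in sequence:
        if (state, symbol) not in TRANSITIONS:
            return False          # no valid transition: reject
        state = TRANSITIONS[(state, symbol)]
    return state in FINAL

print(accepts("abcc"))  # True
print(accepts("ba"))    # False
```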

Weighted FSMs. FSMs treat all transitions as equally likely. What if we know more about state transitions? For example, ‘a’ is twice as likely to be seen in state 1 as ‘b’ or ‘c’, and ‘c’ is three times as likely to be seen in state 2 as ‘a’. What do we get out of it? Multiplying weights along a path gives a score, e.g. score(‘ab’) = 2 and score(‘bc’) = 3.

Probabilistic FSMs. Let’s replace the weights with probabilities. Again, ‘a’ is twice as likely to be seen in state 1 as ‘b’ or ‘c’, and ‘c’ is three times as likely to be seen in state 2 as ‘a’. What do we get out of it? The score of a path becomes a probability, e.g. P(‘ab’) = 0.5 × …, P(‘bc’) = … An FSM whose weights are probabilities is exactly a Markov chain.

Specifying Markov Chains. Q: a finite set of N states, Q = {q0, q1, q2, q3, …}. The start state: either an explicit start state q0, or a probability distribution over start states {π1, π2, π3, …} with Σ πi = 1. The set of final states: qF. An N × N transition probability matrix A = [aij], where aij = P(qj | qi) and Σj aij = 1 for every i.
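For concreteness, here is what such a matrix might look like for three states; the numbers are invented for illustration (every row must sum to 1):

```latex
A = \begin{pmatrix}
0.6 & 0.2 & 0.2 \\
0.5 & 0.3 & 0.2 \\
0.4 & 0.1 & 0.5
\end{pmatrix},
\qquad a_{ij} = P(q_j \mid q_i), \qquad \sum_{j} a_{ij} = 1 \ \text{for every } i.
```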

Let’s model the stock market… What’s special about this FSM? The present state only depends on the previous state: the (first-order) Markov assumption. What’s missing? Add “priors”. Each state corresponds to a physical state in the world.

Are states always observable? The underlying sequence of states, e.g. Bull, Bear, S, Bull, S (Bull: bull market, Bear: bear market, S: static market), is not observable! Here’s what you actually observe, day by day: ↑ ↓ ↔ ↑ ↓ ↔ (↑: market is up, ↓: market is down, ↔: market hasn’t changed).

Hidden Markov Models. Markov chains aren’t enough! What if you can’t directly observe the states? We need to model problems where observations don’t directly correspond to states… Solution: the Hidden Markov Model (HMM). Assume two probabilistic processes: the underlying process (state transitions) is hidden, and a second process generates the sequence of observed events.

Specifying HMMs. An HMM is characterized by: N states and an N × N transition probability matrix; V observation symbols and an N × |V| emission probability matrix; and a vector of prior probabilities over the start states.
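A minimal sketch of how these parameters could be packaged in code for the stock-market example that follows. The priors match the values given a few slides later; the transition and emission probabilities are illustrative placeholders, not necessarily the ones on the original slides:

```python
import numpy as np

STATES = ["Bull", "Bear", "Static"]      # N = 3 hidden states
VOCAB = ["up", "down", "unchanged"]      # V = 3 observation symbols (↑, ↓, ↔)

PI = np.array([0.5, 0.2, 0.3])           # prior probabilities over start states (sums to 1)

A = np.array([                           # N x N transition matrix; each row sums to 1
    [0.6, 0.2, 0.2],                     # from Bull
    [0.5, 0.3, 0.2],                     # from Bear
    [0.4, 0.1, 0.5],                     # from Static
])

B = np.array([                           # N x |V| emission matrix; each row sums to 1
    [0.7, 0.1, 0.2],                     # Bull emits up / down / unchanged
    [0.1, 0.6, 0.3],                     # Bear
    [0.3, 0.3, 0.4],                     # Static
])
```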

Stock Market HMM: states ✓, transitions ✓, vocabulary ✓, emissions ✓, priors ✓ (π1 = 0.5, π2 = 0.2, π3 = 0.3).

Properties of HMMs. The Markov assumption: the next state depends only on the current state. Output independence: each observation depends only on the state that emitted it (both written out below). Remember, these are distinct quantities: the number of states (N), the number of distinct observation symbols (V), and the length of the sequence (T).
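In the usual notation, the two assumptions are:

```latex
\text{Markov assumption: } \quad P(q_t \mid q_1, \dots, q_{t-1}) = P(q_t \mid q_{t-1})
\qquad
\text{Output independence: } \quad P(o_t \mid q_1, \dots, q_t,\, o_1, \dots, o_{t-1}) = P(o_t \mid q_t)
```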

“Running” an HMM: 1. Set t = 1 and select an initial state based on the prior distribution π. 2. Select an observation to emit based on the current state’s emission distribution (a row of B). 3. If t = T, then done. 4. Select a transition into a new state based on the current state’s transition distribution (a row of A). 5. Set t = t + 1 and go to step 2.
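A runnable sketch of this generative procedure, using the illustrative parameters from the earlier sketch (placeholders, not necessarily the slide’s values):

```python
import numpy as np

def run_hmm(pi, A, B, T, seed=0):
    """Sample a hidden state sequence and an observation sequence of length T."""
    rng = np.random.default_rng(seed)
    states, observations = [], []
    state = rng.choice(len(pi), p=pi)              # step 1: initial state from the priors
    for _ in range(T):
        obs = rng.choice(B.shape[1], p=B[state])   # step 2: emit based on the current state
        states.append(state)
        observations.append(obs)
        state = rng.choice(len(A), p=A[state])     # step 4: transition based on A
    return states, observations                    # steps 3/5 handled by the loop bound

# Illustrative parameters (placeholders):
pi = np.array([0.5, 0.2, 0.3])
A = np.array([[0.6, 0.2, 0.2], [0.5, 0.3, 0.2], [0.4, 0.1, 0.5]])
B = np.array([[0.7, 0.1, 0.2], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]])
print(run_hmm(pi, A, B, T=6))
```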

HMMs: Three Problems Likelihood: Given an HMM λ and a sequence of observed events O, find the likelihood of observing the sequence Decoding: Given an HMM λ and an observation sequence O, find the most likely (hidden) state sequence Learning: Given a set of observation sequences, learn the parameters A and B for λ Okay, but where did the structure of the HMM come from?

HMM Problem #1: Likelihood

Computing Likelihood. Given this model of the stock market (with priors π1 = 0.5, π2 = 0.2, π3 = 0.3), how likely are we to observe the following sequence of outputs? t = 1, 2, 3, 4, 5, 6 and O = ↑ ↓ ↔ ↑ ↓ ↔.

Computing Likelihood. Easy, right? Just sum over all possible ways in which we could generate O from λ. What’s the problem? Right idea, wrong algorithm: it takes O(N^T) time to compute!

Computing Likelihood What are we doing wrong? State sequences may have a lot of overlap We’re recomputing the shared subsequences every time Let’s store intermediate results and reuse them Sounds like a job for dynamic programming!

Forward Algorithm. The forward probability αt(j) is the probability of seeing the first t observations and being in state j at time t. Build an N × T trellis. Intuition: the forward probability for a cell depends only on the previous time step’s states and the current observation.

Forward Algorithm: initialization, recursion, and termination (standard forms below).
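In standard notation, the three steps are:

```latex
\text{Initialization: } \alpha_1(j) = \pi_j \, b_j(o_1), \qquad 1 \le j \le N
\text{Recursion: } \alpha_t(j) = \Big[ \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij} \Big]\, b_j(o_t), \qquad 2 \le t \le T
\text{Termination: } P(O \mid \lambda) = \sum_{j=1}^{N} \alpha_T(j)
```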

Forward Algorithm. Given O = (↑, ↓, ↑), find P(O | λ).

Forward Algorithm: the trellis has states {Bull, Bear, Static} as rows and time steps t = 1, 2, 3 as columns, one column per observation in ↑ ↓ ↑.

Forward Algorithm: Initialization. Fill the first column with α1(j) = πj · bj(↑) for each state j; for example, α1(Static) = 0.3 × 0.3 = 0.09, and α1(Bull) and α1(Bear) are computed the same way from their priors and emission probabilities for ↑.

Forward Algorithm: Recursion. Each cell at t = 2 sums one term per predecessor state; for example, the Bull cell includes α1(Bull) · aBull,Bull · bBull(↓) plus the corresponding terms from Bear and Static, and so on.

Forward Algorithm: Recursion (the trellis filled in for t = 2 and t = 3).

Forward Algorithm: Termination. P(O) is the sum of the forward probabilities in the final column: P(O) = Σj α3(j).
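A compact sketch of the same computation in code, matching the recurrences above. The parameters are the illustrative placeholders used earlier, and the observations ↑ ↓ ↑ are encoded as vocabulary indices:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: returns P(O | lambda) and the full N x T trellis of alphas."""
    N, T = len(pi), len(obs)
    alpha = np.zeros((N, T))
    alpha[:, 0] = pi * B[:, obs[0]]                          # initialization
    for t in range(1, T):
        alpha[:, t] = (alpha[:, t - 1] @ A) * B[:, obs[t]]   # recursion
    return alpha[:, -1].sum(), alpha                         # termination

# Illustrative parameters (placeholders) and the observation sequence up, down, up:
pi = np.array([0.5, 0.2, 0.3])
A = np.array([[0.6, 0.2, 0.2], [0.5, 0.3, 0.2], [0.4, 0.1, 0.5]])
B = np.array([[0.7, 0.1, 0.2], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]])
likelihood, trellis = forward(pi, A, B, obs=[0, 1, 0])
print(likelihood)
```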

HMM Problem #2: Decoding

Decoding. Given this model of the stock market (with priors π1 = 0.5, π2 = 0.2, π3 = 0.3), what is the most likely sequence of states the market went through to produce O? t = 1, 2, 3, 4, 5, 6 and O = ↑ ↓ ↔ ↑ ↓ ↔.

Decoding: it is called “decoding” because the states are hidden. First try: compute P(O) for every possible state sequence, then choose the sequence with the highest probability. What’s the problem here? Second try: for each possible hidden state sequence, compute P(O) using the forward algorithm. What’s the problem here?

Viterbi Algorithm “Decoding” = computing most likely state sequence Another dynamic programming algorithm Efficient: polynomial vs. exponential Same idea as the forward algorithm Store intermediate computation results in a trellis Build new cells from existing cells

Viterbi Algorithm. The Viterbi probability vt(j) is the probability of being in state j after seeing the first t observations and passing through the most likely state sequence so far. Build an N × T trellis. Intuition: the Viterbi probability for a cell depends only on the previous time step’s states and the current observation.

Viterbi vs. Forward: maximization instead of summation over previous paths. But this algorithm is still missing something! In the forward algorithm, we only care about the probabilities. What’s different here? We also need to store the most likely path (transition): use “backpointers” to keep track of the most likely incoming transition at each cell, and at the end follow the chain of backpointers to recover the most likely state sequence.

Viterbi Algorithm: Formal Definition. Initialization, recursion, and termination (standard forms below). Why is there no b(·) term in the termination step?
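In standard notation, with backpointers bp for recovering the path:

```latex
\text{Initialization: } v_1(j) = \pi_j \, b_j(o_1), \qquad bp_1(j) = 0
\text{Recursion: } v_t(j) = \max_{i}\, v_{t-1}(i)\, a_{ij}\, b_j(o_t), \qquad
bp_t(j) = \operatorname*{arg\,max}_{i}\, v_{t-1}(i)\, a_{ij}\, b_j(o_t)
\text{Termination: } P^{*} = \max_{j}\, v_T(j), \qquad q_T^{*} = \operatorname*{arg\,max}_{j}\, v_T(j)
```

The termination step has no b(·) term because the last observation was already accounted for when the final column of the trellis was filled in.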

Viterbi Algorithm. Given O = (↑, ↓, ↑), find the most likely state sequence.

Viterbi Algorithm: the trellis again has states {Bull, Bear, Static} as rows and time steps t = 1, 2, 3 as columns for the observations ↑ ↓ ↑.

Viterbi Algorithm: Initialization. Fill the first column with v1(j) = πj · bj(↑) for each state j; for example, v1(Static) = 0.3 × 0.3 = 0.09, exactly the same values as the forward initialization.

Viterbi Algorithm: Recursion. Each cell takes the maximum over incoming paths; for example, the Bull cell at t = 2 is the max over terms such as v1(Bull) · aBull,Bull · bBull(↓) and the corresponding terms from Bear and Static.

Viterbi Algorithm: Recursion. Store a backpointer to the state that achieved the maximum, and so on for the remaining cells.

Viterbi Algorithm: Recursion (the trellis filled in for t = 2 and t = 3).

Viterbi Algorithm: Termination. Take the highest-probability cell in the final column.

Viterbi Algorithm: Termination. Follow the backpointers from that cell to recover the most likely state sequence: [Bull, Bear, Bull], with probability P = …

HMM Problem #3: Learning

The Problem. Given training data, learn the parameters of the HMM: A (transition probabilities) and B (emission probabilities). Two cases: supervised training, where the sequences of observations are paired with states, and unsupervised training, where we have sequences of observations only.

Supervised Training. Easy! Just like relative frequency counting: compute maximum likelihood estimates of the transition probabilities and the emission probabilities directly from the data (formulas below).
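In symbols, the maximum likelihood estimates are normalized counts over the labeled training data:

```latex
\hat{a}_{ij} = \frac{\text{count}(i \to j)}{\sum_{j'} \text{count}(i \to j')}
\qquad
\hat{b}_{j}(v_k) = \frac{\text{count}(\text{state } j \text{ emits } v_k)}{\sum_{k'} \text{count}(\text{state } j \text{ emits } v_{k'})}
```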

Supervised Training. Easy! Just like relative frequency counting. Remember the usual MapReduce tricks: pairs vs. stripes for keeping track of joint counts; combiners and in-mapper combining apply.
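A hedged sketch (framework-agnostic Python pseudocode, not the course’s actual implementation) of how the counting could be expressed as map and reduce functions, assuming each training record is a sequence of (state, observation) pairs:

```python
def mapper(labeled_sequence):
    """One training sequence as a list of (state, observation) pairs.
    Emits pair-style counts for transitions and for emissions."""
    for (prev_state, _), (state, _) in zip(labeled_sequence, labeled_sequence[1:]):
        yield ("A", prev_state, state), 1      # transition count
    for state, obs in labeled_sequence:
        yield ("B", state, obs), 1             # emission count

def reducer(key, values):
    """Sums the counts for one (matrix, row, column) key; a final pass over each
    row's totals turns the summed counts into probabilities."""
    yield key, sum(values)
```

Switching to stripes (one associative array per source state) and adding in-mapper combining would cut down the intermediate key-value traffic, exactly as the slide suggests.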

Unsupervised Training Hard! Actually, it’s not that bad… Pseudo-counts instead of counts More complex bookkeeping

Unsupervised Training. What’s the fundamental problem? It’s a chicken-and-egg problem: if we knew what the hidden variables were, then computing the model parameters would be easy; if we knew the model parameters, we’d be able to compute the hidden variables (but of course, that’s the whole point). Where have we seen this before?

EM to the Rescue! Start with initial parameters Iterate until convergence: E-step: Compute posterior distribution over latent (hidden) variables given the model parameters M-step: Update model parameters using posterior distribution computed in the E-step How does this work for HMMs? First, we need one more definition…

Backward Algorithm. The backward probability βt(j) is the probability of seeing the observations from time t+1 to T, given that we’re in state j at time t. Guess what? Build another N × T trellis.

Backward Algorithm: initialization, recursion, and termination (standard forms below).
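In standard notation:

```latex
\text{Initialization: } \beta_T(j) = 1, \qquad 1 \le j \le N
\text{Recursion: } \beta_t(j) = \sum_{i=1}^{N} a_{ji}\, b_i(o_{t+1})\, \beta_{t+1}(i), \qquad t = T-1, \dots, 1
\text{Termination: } P(O \mid \lambda) = \sum_{j=1}^{N} \pi_j\, b_j(o_1)\, \beta_1(j)
```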

Backward Algorithm: the same trellis layout, states {Bull, Bear, Static} by time steps t = 1, 2, 3 for observations ↑ ↓ ↑.

Backward Algorithm: Initialization. Set β3(j) = 1.0 for every state j (the rightmost column of the trellis).

Backward Algorithm: Recursion. Each cell at t = 2 combines the β values at t = 3 with the corresponding transition and emission probabilities, and so on, filling the trellis from right to left.

Backward Algorithm: Recursion (the trellis filled in).

Forward-Backward. The forward probability αt(j) is the probability of seeing the first t observations and being in state j at time t. The backward probability βt(j) is the probability of seeing the observations from time t+1 to T, given that we’re in state j at time t. Neat observation: combining the two gives the total likelihood at any time step (formula below).
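In symbols, the product of the two quantities, summed over states, gives the likelihood of the whole sequence at any time slice:

```latex
P(O \mid \lambda) = \sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j) \qquad \text{for any } t,\ 1 \le t \le T
```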

Forward-Backward: the trellis with both the forward and the backward probabilities at each cell.

Forward-Backward: the α and β values together let us compute expected counts, which is what we need next.

Estimating Emission Probabilities. Basic idea: b̂j(vk) = (expected number of times in state j and observing symbol vk) / (expected number of times in state j). The definitions below make this precise.
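In the standard Baum-Welch notation, define the state-occupancy posterior γ and plug it into the ratio above:

```latex
\gamma_t(j) = \frac{\alpha_t(j)\, \beta_t(j)}{P(O \mid \lambda)}
\qquad
\hat{b}_{j}(v_k) = \frac{\sum_{t \,:\, o_t = v_k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}
```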

Forward-Backward: the same α and β values also give us expected counts of transitions.

Estimating Transition Probabilities. Basic idea: âij = (expected number of transitions from state i to state j) / (expected number of transitions from state i). Again, the definitions below make this precise.
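In the standard Baum-Welch notation, define the transition posterior ξ and normalize:

```latex
\xi_t(i, j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O \mid \lambda)}
\qquad
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}
```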

MapReduce Implementation: Mapper

MapReduce Implementation: Reducer

Implementation Notes. Standard setup for iterative MapReduce algorithms: a driver program sets up the MapReduce job, waits for completion, checks for convergence, and repeats if necessary. Additional details: the model parameters must be loaded and serialized at each iteration; the reducer receives 2N + 1 keys (one per row of A, one per row of B, plus one for the priors), which limits reducer parallelism; the per-iteration overhead is lower than in other iterative algorithms.
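A heavily hedged sketch of what one EM iteration could look like as map and reduce functions, consistent with the stripes idea and the 2N + 1 keys mentioned above. This is framework-agnostic Python, not the course’s actual implementation; observation sequences are assumed to arrive as lists of vocabulary indices, and the current parameters pi, A, B are assumed to be loaded by every mapper at the start of the iteration:

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Forward and backward trellises for one observation sequence."""
    N, T = len(pi), len(obs)
    alpha, beta = np.zeros((N, T)), np.zeros((N, T))
    alpha[:, 0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[:, t] = (alpha[:, t - 1] @ A) * B[:, obs[t]]
    beta[:, T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[:, t] = A @ (B[:, obs[t + 1]] * beta[:, t + 1])
    return alpha, beta

def mapper(obs, pi, A, B):
    """E-step for one sequence: emit one stripe per parameter row (2N + 1 distinct keys)."""
    alpha, beta = forward_backward(pi, A, B, obs)
    likelihood = alpha[:, -1].sum()
    gamma = alpha * beta / likelihood                      # posterior over states at each time
    N, T = gamma.shape
    yield ("pi",), gamma[:, 0]                             # expected start-state counts
    for j in range(N):
        stripe = np.zeros(B.shape[1])
        for t in range(T):
            stripe[obs[t]] += gamma[j, t]
        yield ("B", j), stripe                             # expected emission counts, row j
    for t in range(T - 1):
        xi = alpha[:, t, None] * A * (B[:, obs[t + 1]] * beta[:, t + 1]) / likelihood
        for i in range(N):
            yield ("A", i), xi[i]                          # expected transition counts, row i

def reducer(key, stripes):
    """M-step for one parameter row: sum the expected counts, then renormalize."""
    total = np.sum(stripes, axis=0)
    yield key, total / total.sum()
```

The driver would run this job once per iteration, reload the 2N + 1 re-estimated rows as the new model, and check for convergence before launching the next iteration.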

Source: Wikipedia (Japanese rock garden) Questions?

Midterm

Final Projects Anything in the big data space Teams of 2-3 people Available data resources Available cluster resources Project Ideas?