Hidden Markov Models: Probabilistic Reasoning Over Time
Artificial Intelligence CMSC 25000
February 26, 2008
Agenda
Hidden Markov Models
– Uncertain observation
– Temporal context
– Recognition: Viterbi
– Training the model: Baum-Welch
Speech Recognition
– Framing the problem: sounds to sense
– Speech recognition as modern AI
Modelling Processes over Time
Infer the underlying state sequence from the observed sequence
Issue: each new state depends on preceding states (we are analyzing sequences)
Problem 1: possibly unbounded number of probability tables (observation × state × time)
– Solution 1: assume a stationary process: the rules governing the process are the same at all times
Problem 2: possibly unbounded number of parents
– Solution 2: the Markov assumption: only consider a finite history, commonly the last 1 or 2 states
Hidden Markov Models (HMMs)
An HMM is:
– 1) A set of states Q = {q1, …, qN}
– 2) A set of transition probabilities A, where aij is the probability of the transition qi -> qj
– 3) Observation probabilities B, where bi(ot) is the probability of observing ot in state i
– 4) An initial probability distribution over states, where πi is the probability of starting in state i
– 5) A set of accepting (final) states
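The five components can be written down concretely. A minimal sketch in Python, using the bins-and-balls numbers introduced below (the field names are illustrative, not part of any standard API):

hmm = {
    "states": ["q1", "q2"],                       # 1) set of states
    "A": {("q1", "q1"): 0.6, ("q1", "q2"): 0.4,   # 2) transition probs a_ij
          ("q2", "q1"): 0.3, ("q2", "q2"): 0.7},
    "B": {"q1": {"Red": 0.7, "Blue": 0.3},        # 3) observation probs b_i(o_t)
          "q2": {"Red": 0.4, "Blue": 0.6}},
    "pi": {"q1": 0.9, "q2": 0.1},                 # 4) initial distribution
    "final": ["q1", "q2"],                        # 5) accepting states
}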
Three Problems for HMMs
– Find the probability of an observation sequence given a model: the forward algorithm
– Find the most likely path through a model given an observed sequence: the Viterbi algorithm (decoding)
– Find the most likely model (parameters) given an observed sequence: the Baum-Welch (EM) algorithm
Bins and Balls Example
Assume there are two bins filled with red and blue balls. Behind a curtain, someone selects a bin and then draws a ball from it (and replaces it). They then choose either the same bin or the other one and draw another ball…
– (Example due to J. Martin)
Bins and Balls Example
[Figure: two bins with transition probabilities: stay in Bin 1 = 0.6, Bin 1 -> Bin 2 = 0.4, stay in Bin 2 = 0.7, Bin 2 -> Bin 1 = 0.3]
Bins and Balls
Initial distribution Π: Bin 1: 0.9; Bin 2: 0.1
Transition probabilities A:
         Bin 1   Bin 2
Bin 1    0.6     0.4
Bin 2    0.3     0.7
Observation probabilities B:
         Bin 1   Bin 2
Red      0.7     0.4
Blue     0.3     0.6
Bins and Balls
Assume the observation sequence: Blue Blue Red (BBR)
Both bins contain red and blue balls, so any state sequence could produce the observations
However, the sequences are NOT equally likely:
– Big difference in start probabilities
– Observation depends on state
– State depends on prior state
Bins and Balls
State sequences and their joint probabilities with Blue Blue Red:
1 1 1: (0.9*0.3)*(0.6*0.3)*(0.6*0.7) = 0.0204
1 1 2: (0.9*0.3)*(0.6*0.3)*(0.4*0.4) = 0.0078
1 2 1: (0.9*0.3)*(0.4*0.6)*(0.3*0.7) = 0.0136
1 2 2: (0.9*0.3)*(0.4*0.6)*(0.7*0.4) = 0.0181
2 1 1: (0.1*0.6)*(0.3*0.3)*(0.6*0.7) = 0.0023
2 1 2: (0.1*0.6)*(0.3*0.3)*(0.4*0.4) = 0.0009
2 2 1: (0.1*0.6)*(0.7*0.6)*(0.3*0.7) = 0.0053
2 2 2: (0.1*0.6)*(0.7*0.6)*(0.7*0.4) = 0.0070
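These eight rows can be checked by brute-force enumeration. A minimal sketch in pure Python (state 1 = Bin 1, state 2 = Bin 2; variable names illustrative):

from itertools import product

pi = {1: 0.9, 2: 0.1}
A = {(1, 1): 0.6, (1, 2): 0.4, (2, 1): 0.3, (2, 2): 0.7}
B = {1: {"R": 0.7, "B": 0.3}, 2: {"R": 0.4, "B": 0.6}}
obs = ["B", "B", "R"]

# Enumerate all N^T state sequences and compute each joint probability
probs = {}
for path in product([1, 2], repeat=len(obs)):
    p = pi[path[0]] * B[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= A[(path[t - 1], path[t])] * B[path[t]][obs[t]]
    probs[path] = p

print(sum(probs.values()))        # P(BBR | model) ~ 0.0754 (Problem 1)
print(max(probs, key=probs.get))  # most likely sequence: (1, 1, 1) (Problem 2)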
Answers and Issues
To compute the probability of the observed sequence: just add up all the state-sequence probabilities
To find the most likely state sequence: just pick the sequence with the highest value
Problem: computing all paths is expensive: about 2T*N^T operations
Solution: dynamic programming
– Sweep across all states at each time step, summing (Problem 1) or maximizing (Problem 2)
Forward Probability
$$\alpha_t(j) = \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\, b_j(o_t)$$
where α is the forward probability, t is the time in the utterance, i and j are states in the HMM, aij is the transition probability, bj(ot) is the probability of observing ot in state j, N is the number of states, and T is the last time step.
Forward Algorithm
Idea: a matrix where each cell forward[t, j] represents the probability of being in state j after seeing the first t observations. Each cell expresses the probability forward[t, j] = P(o1, o2, ..., ot, qt = j | λ), where qt = j means "the t-th state in the sequence of states is state j" and λ is the model.
Compute the probability by summing over the extensions of all paths leading to the current cell. An extension of a path from a state i at time t-1 to state j at time t is computed by multiplying together:
i. the previous path probability, from the previous cell forward[t-1, i],
ii. the transition probability aij from previous state i to current state j, and
iii. the observation likelihood bj(ot) that current state j matches observation symbol ot.
Forward Algorithm
Function Forward(observations of length T, state-graph) returns observation probability
  num-states <- num-of-states(state-graph)
  Create a path probability matrix forward[num-states+2, T+2]
  forward[0,0] <- 1.0
  For each time step t from 0 to T do
    for each state s from 0 to num-states do
      for each transition s' from s in state-graph
        new-score <- forward[s,t] * a[s,s'] * bs'(ot)
        forward[s',t+1] <- forward[s',t+1] + new-score
  Return the sum over all states s of forward[s,T+1]
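A runnable Python version of this pseudocode, stated over the pi, A, B dictionaries from the bins-and-balls sketch above; it returns the observation-sequence probability rather than a path:

def forward(obs, states, pi, A, B):
    # Initialization: alpha_1(j) = pi_j * b_j(o_1)
    alpha = [{j: pi[j] * B[j][obs[0]] for j in states}]
    # Induction: alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
    for t in range(1, len(obs)):
        alpha.append({j: sum(alpha[t - 1][i] * A[(i, j)] for i in states)
                         * B[j][obs[t]]
                      for j in states})
    # Termination: sum over all states at the final time step
    return sum(alpha[-1].values())

print(forward(["B", "B", "R"], [1, 2], pi, A, B))  # matches the enumeration's sum

The sweep costs O(N^2 T), versus the 2T*N^T of full enumeration.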
Viterbi Algorithm
Find the BEST sequence given the signal
– Maximizes P(sequence | signal)
– Takes an HMM and an observation sequence => the best state sequence (and its probability)
Dynamic programming solution
– Record the most probable path ending at each state i
– Then the most probable path from i to the end
Complexity: O(N^2 T) for N states and T observations
Viterbi Code
Function Viterbi(observations of length T, state-graph) returns best-path
  num-states <- num-of-states(state-graph)
  Create a path probability matrix viterbi[num-states+2, T+2]
  viterbi[0,0] <- 1.0
  For each time step t from 0 to T do
    for each state s from 0 to num-states do
      for each transition s' from s in state-graph
        new-score <- viterbi[s,t] * a[s,s'] * bs'(ot)
        if ((viterbi[s',t+1] == 0) || (viterbi[s',t+1] < new-score)) then
          viterbi[s',t+1] <- new-score
          back-pointer[s',t+1] <- s
  Backtrace from the highest-probability state in the final column of viterbi[] and return the path
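A matching Python sketch, again assuming the pi, A, B dictionaries from the bins-and-balls example; it differs from the forward code only in maximizing instead of summing, plus the backtrace:

def viterbi(obs, states, pi, A, B):
    # Initialization: delta_1(j) = pi_j * b_j(o_1); psi holds back-pointers
    delta = [{j: pi[j] * B[j][obs[0]] for j in states}]
    psi = [{}]
    for t in range(1, len(obs)):
        delta.append({})
        psi.append({})
        for j in states:
            # Max over predecessor states instead of the forward sum
            best = max(states, key=lambda i: delta[t - 1][i] * A[(i, j)])
            delta[t][j] = delta[t - 1][best] * A[(best, j)] * B[j][obs[t]]
            psi[t][j] = best
    # Backtrace from the highest-probability state in the final column
    last = max(states, key=lambda j: delta[-1][j])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(psi[t][path[-1]])
    return list(reversed(path)), delta[-1][last]

print(viterbi(["B", "B", "R"], [1, 2], pi, A, B))  # -> ([1, 1, 1], ~0.0204)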
Learning HMMs
Issue: where do the probabilities come from?
Option 1: supervised/manual construction
Option 2: learn from data
– Trains transition (aij), emission (bj), and initial (πi) probabilities
– Typically assumes the state structure is given
– Unsupervised: Baum-Welch, aka the forward-backward algorithm
Iteratively estimate counts of transitions and emitted symbols
Get estimated probabilities by forward computation, dividing probability mass over the contributing paths
Manual Construction
Manually labeled data: observation sequences aligned to ground-truth state sequences
– Compute (relative) frequencies of state transitions
– Compute frequencies of observations per state
– Compute frequencies of initial states
Bootstrapping: iterate tag, correct, re-estimate, tag
Problem: labeled data is expensive, hard or impossible to obtain, and may be inadequate to fully estimate the model (sparseness problems)
Less Supervised Learning
Re-estimation from unlabeled data: Baum-Welch, aka the forward-backward algorithm
– Assume a "representative" collection of data, e.g. recorded speech, gene sequences, etc.
– Assign initial probabilities, or estimate them from a very small labeled sample
– Compute state-sequence probabilities given the data (i.e. use the forward algorithm)
– Update the transition, emission, and initial probabilities
Updating Probabilities
Intuition:
– Observations identify state sequences
– Adjust the transition/emission probabilities to be closer to those consistent with what was observed
– This increases P(Observations | Model)
Functionally:
– For each state i, what proportion of transitions from state i go to state j?
– For each state j, what proportion of the observations emitted in state j match o?
– How often is state i the initial state?
Estimating Transitions
Consider updating the transition probability aij:
– Compute the probability of all paths that use the transition i -> j
– Compute the probability of all paths through state i (with and without i -> j)
Forward Probability (recap)
$$\alpha_t(j) = \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\, b_j(o_t)$$
α, aij, and bj(ot) as defined earlier; N is the number of states and T is the last time step.
Backward Probability
$$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)$$
where β is the backward probability (the probability of the remaining observations given state i at time t), t is the time in the sequence, i and j are states in the HMM, aij is the transition probability, bj(ot+1) is the probability of observing ot+1 in state j, N is the final state, and T is the last time step.
Re-estimating
– Estimate the transition probabilities from i -> j
– Estimate the observation probabilities in state j
– Estimate the initial probability of each state i
The formulas are written out below.
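A sketch of the standard Baum-Welch re-estimation formulas, consistent with the α and β defined above; ξ and γ are the conventional auxiliary quantities:

$$\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O \mid \lambda)} \qquad \gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j)$$

$$\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \qquad \hat{b}_j(k) = \frac{\sum_{t:\, o_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)} \qquad \hat{\pi}_i = \gamma_1(i)$$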
Speech Recognition
Goal: given an acoustic signal, identify the sequence of words that produced it
– Speech understanding goal: given an acoustic signal, identify the meaning intended by the speaker
Issues:
– Ambiguity: many possible pronunciations
– Uncertainty: which signal, and which word/sense, produced this sound sequence?
Decomposing Speech Recognition
Q1: What speech sounds were uttered?
– Human languages use 40-50 phones, the basic sound units: b, m, k, ax, ey, … (ARPAbet)
– The distinctions are categorical to speakers, though acoustically continuous
– Part of knowledge of the language: build a per-language inventory
– Could we learn these?
Decomposing Speech Recognition
Q2: What words produced these sounds?
– Look up sound sequences in a dictionary
– Problem 1: homophones: two words, same sounds (too, two)
– Problem 2: segmentation: no "space" between words in continuous speech: "I scream"/"ice cream", "Wreck a nice beach"/"Recognize speech"
Q3: What meaning produced these words?
– NLP (but that's not all!)
Signal Processing
Goal: convert impulses from the microphone into a representation that
– is compact
– encodes features relevant for speech recognition
Compactness, step 1:
– Sampling rate: how often to look at the data: 8 kHz, 16 kHz (44.1 kHz = CD quality)
– Quantization factor: how much precision: 8-bit, 16-bit (encoding: mu-law, linear, …)
(A Little More) Signal Processing
Compactness and feature identification
– Capture mid-length speech phenomena: typically "frames" of 10 ms (80 samples at 8 kHz), overlapping
– Vector of features: e.g. energy at some frequency
– Vector quantization: n-feature vectors define an n-dimensional space; divide it into m regions (e.g. 256); all vectors in a region get the same label, e.g. C256 (see the sketch below)
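A minimal sketch of the labeling step, assuming a pre-trained codebook (the 2-feature space and centroid values here are hypothetical; real systems use ~256 regions over richer feature vectors, with the codebook itself typically trained by k-means):

import math

def vq_label(vector, codebook):
    # Map a feature vector to the label of its nearest centroid (Euclidean)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(codebook, key=lambda label: dist(vector, codebook[label]))

# Hypothetical 2-feature codebook with 3 regions
codebook = {"C1": (0.1, 0.9), "C2": (0.5, 0.5), "C3": (0.9, 0.1)}
print(vq_label((0.45, 0.6), codebook))  # -> "C2"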
Speech Recognition Model
Question: given the signal, what words?
Problem: uncertainty
– In the capture of sound by the microphone, in how phones produce sounds, in which words make phones, etc.
Solution: a probabilistic model
– P(words | signal) = P(signal | words) P(words) / P(signal)
– Idea: maximize P(signal | words) * P(words)
– P(signal | words): acoustic model; P(words): language model
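Written out as the noisy-channel argmax the slide describes; P(signal) is constant across candidate word sequences, so it drops out of the maximization:

$$\hat{W} = \arg\max_{W} P(W \mid O) = \arg\max_{W} \frac{P(O \mid W)\, P(W)}{P(O)} = \arg\max_{W} \underbrace{P(O \mid W)}_{\text{acoustic model}} \underbrace{P(W)}_{\text{language model}}$$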
Language Model
Idea: some utterances are more probable than others
Standard solution: "n-gram" model
– Typically trigram: P(wi | wi-1, wi-2)
– Collect training data
– Smooth with bigrams and unigrams to handle sparseness
– Take the product over the words in the utterance (a sketch follows)
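A minimal sketch of trigram training with linear interpolation as the smoothing step (the toy corpus, sentence markers, and lambda weights are illustrative; real systems tune the weights on held-out data):

from collections import Counter

def train(corpus):
    # Count unigrams, bigrams, and trigrams over marker-padded sentences
    uni, bi, tri = Counter(), Counter(), Counter()
    for sent in corpus:
        words = ["<s>", "<s>"] + sent + ["</s>"]
        uni.update(words)
        bi.update(zip(words, words[1:]))
        tri.update(zip(words, words[1:], words[2:]))
    return uni, bi, tri

def p_interp(w, prev1, prev2, uni, bi, tri, lambdas=(0.6, 0.3, 0.1)):
    # P(w | prev2 prev1): interpolate trigram, bigram, and unigram estimates
    l3, l2, l1 = lambdas
    p3 = tri[(prev2, prev1, w)] / bi[(prev2, prev1)] if bi[(prev2, prev1)] else 0.0
    p2 = bi[(prev1, w)] / uni[prev1] if uni[prev1] else 0.0
    p1 = uni[w] / sum(uni.values())
    return l3 * p3 + l2 * p2 + l1 * p1

uni, bi, tri = train([["the", "cat", "sat"], ["the", "cat", "ran"]])
print(p_interp("sat", "cat", "the", uni, bi, tri))  # P(sat | the cat)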
Acoustic Model
P(signal | words)
– Decomposes as words -> phones, plus phones -> vector-quantization labels
Words -> phones
– Pronunciation dictionary lookup
– Multiple pronunciations? Use a probability distribution: dialect variation (tomato), plus coarticulation
– Take the product along the path
[Figure: pronunciation network for "tomato": t ow m {aa | ey} t ow, with branch probabilities such as 0.5/0.5 on the aa/ey split and an ax variant of the first vowel weighted 0.2/0.8]
Pronunciation Example
[Figure: pronunciation HMM with an observation sequence labeled 0/1]
Acoustic Model
P(signal | phones):
– Problem: phones can be pronounced differently (speaker differences, speaking rate, microphone) and may not even appear, depending on context
– The observation sequence is uncertain
Solution: Hidden Markov Models
– 1) Hidden => observations are uncertain
– 2) Probability of word sequences => state transition probabilities
– 3) 1st-order Markov => use one prior state
Acoustic Model
3-state phone model for [m]
– Use a Hidden Markov Model (HMM)
– Probability of a sequence: sum of the probabilities of its paths
[Figure: states Onset -> Mid -> End -> Final]
Transition probabilities: Onset->Onset 0.7, Onset->Mid 0.3; Mid->Mid 0.9, Mid->End 0.1; End->End 0.4, End->Final 0.6
Observation probabilities: Onset: C1 0.5, C2 0.2, C3 0.3; Mid: C3 0.2, C4 0.7, C5 0.1; End: C4 0.1, C6 0.5, C6 0.4
ASR Training
Models to train:
– Language model: typically trigram
– Observation likelihoods: B
– Transition probabilities: A
– Pronunciation lexicon: sub-phone, word
Training materials:
– Speech files with word transcriptions
– A large text corpus
– A small phonetically transcribed speech corpus
Training
Language model:
– Uses a large text corpus (500M words) to train the n-grams
Pronunciation model:
– HMM state graph
– Manual coding from a dictionary
– Expand to triphone contexts and sub-phone models
HMM Training
Training the observations:
– E.g. Gaussian: set uniform initial mean/variance
– Train based on the contents of a small (e.g. 4-hour) phonetically labeled speech set (e.g. Switchboard)
Training A and B:
– Forward-backward algorithm training
Does it work?
Yes:
– 99% on isolated single digits
– 95% on restricted short utterances (air travel)
– 89+% on professional news broadcast
No:
– 77% on conversational English
– 67% on conversational Mandarin (character error rate)
– 55% on meetings
– ?? on noisy cocktail parties
N-grams
Perspective:
– Some sequences (of words or characters) are more likely than others
– Given a sequence, we can guess the most likely next item
Used in:
– Speech recognition
– Spelling correction
– Augmentative communication
– Other NL applications
Probabilistic Language Generation
Coin-flipping models:
– A sentence is generated by a randomized algorithm
– The generator can be in one of several "states"
– Flip coins to choose the next state; flip other coins to decide which letter or word to output
Shannon's Generated Language
1. Zero-order approximation:
– XFOML RXKXRJFFUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD
2. First-order approximation:
– OCRO HLI RGWR NWIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH RBL
3. Second-order approximation:
– ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIND ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE
Shannon's Word Models
1. First-order approximation:
– REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE
2. Second-order approximation:
– THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED
Corpus Counts
Estimate probabilities by counts in large collections of text/speech
Issues:
– Wordforms (surface) vs. lemmas (roots)
– Case? Punctuation? Disfluency?
– Types (distinct words) vs. tokens (total count)
Basic N-grams
Most trivial: every word gets probability 1/#tokens: too simple!
Standard unigram: frequency
– # occurrences of the word / total corpus size
– E.g. the = 0.07; rabbit = 0.00001
– Still too simple: no context!
Next: conditional probabilities of word sequences
Markov Assumptions
Exact computation requires too much data
Approximate the probability given all prior words:
– Assume a finite history
– Bigram: probability of a word given 1 previous word (first-order Markov)
– Trigram: probability of a word given 2 previous words
The N-gram approximation and the bigram form of a sequence are written out below.
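The approximations, written out: the exact chain rule, the general N-gram approximation, and the bigram case:

$$P(w_1^n) = \prod_{k=1}^{n} P(w_k \mid w_1^{k-1}) \qquad P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-N+1}^{n-1})$$

$$\text{Bigram:}\quad P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})$$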
Issues
Relative frequency:
– Typically compute the count of a sequence and divide by the count of its prefix
Corpus sensitivity:
– Shakespeare vs. the Wall Street Journal produce very different n-grams, each unnatural for the other
N-gram character:
– Unigrams carry little structure; bigrams capture collocations; trigrams begin to look like phrases
Evaluating N-gram Models
Entropy and perplexity:
– Information-theoretic measures
– Measure the information in a grammar, or the fit to data
– Conceptually, a lower bound on the number of bits needed to encode
Entropy: H(X) = -Σx p(x) log2 p(x), where X is a random variable and p is its probability function
– E.g. 8 equally likely things: number them as a code => 3 bits per transmission
– Alternatively, use a short code for high-probability items and longer codes for lower-probability ones; this can reduce the average
Perplexity: 2^H, a weighted average of the number of choices
Computing Entropy
Picking horses (Cover and Thomas)
Send a message identifying one horse out of 8
– If all horses are equally likely, p(i) = 1/8
– Some horses more likely: 1: 1/2; 2: 1/4; 3: 1/8; 4: 1/16; 5,6,7,8: 1/64 each
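The worked computation for the skewed distribution, versus log2 8 = 3 bits in the uniform case:

$$H(X) = -\sum_{i} p(i)\log_2 p(i) = \tfrac{1}{2}(1) + \tfrac{1}{4}(2) + \tfrac{1}{8}(3) + \tfrac{1}{16}(4) + 4 \cdot \tfrac{1}{64}(6) = 2 \text{ bits}$$

So an optimal variable-length code (short codes for likely horses) averages 2 bits per message instead of 3.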
Entropy of a Sequence
Basic sequence: per-word entropy H(w1, ..., wn)/n
Entropy of a language involves infinite lengths:
$$H(L) = \lim_{n \to \infty} \frac{1}{n} H(w_1, \ldots, w_n)$$
– Assume the language is stationary and ergodic, so the rate can be estimated from a single long sample
Cross-Entropy
Comparing models:
– The actual distribution p is unknown
– Use a simplified model m to estimate it:
$$H(p, m) = \lim_{n \to \infty} -\frac{1}{n} \log_2 m(w_1, \ldots, w_n)$$
– A closer match will have lower cross-entropy (H(p) <= H(p, m))
Perplexity Model Comparison
Compare models with different history lengths
Train models:
– 38 million words, Wall Street Journal
Compute perplexity on a held-out test set:
– 1.5 million words (~20K unique, smoothed)
N-gram order | Perplexity
Unigram      | 962
Bigram       | 170
Trigram      | 109
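A minimal sketch of the computation, treating perplexity as 2 raised to the per-word cross-entropy; the per-word probabilities here are hypothetical:

import math

def perplexity(word_probs):
    # Perplexity = 2^(average negative log2 probability per word):
    # the geometric-mean inverse probability, i.e. an average branching factor
    h = -sum(math.log2(p) for p in word_probs) / len(word_probs)
    return 2 ** h

print(perplexity([0.1, 0.2, 0.05, 0.1]))  # ~10.0; higher probs -> lower perplexity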
Entropy of English
Shannon's experiment:
– Subjects guess strings of letters; count the guesses
– Entropy of the guess sequence = entropy of the letter sequence
– Result: 1.3 bits (restricted text)
Build a stochastic model on text and compute:
– Brown et al. computed a trigram model on a varied corpus
– Computed the (per-character) entropy of the model: 1.75 bits
Speech Recognition as Modern AI
Draws on a wide range of AI techniques:
– Knowledge representation and manipulation
– Optimal search: Viterbi decoding
– Machine learning: Baum-Welch for HMMs; nearest-neighbor and k-means clustering for signal identification
– Probabilistic reasoning/Bayes rule: managing uncertainty in the signal, phone, and word mapping
Enables real-world application