CS 224S / LINGUIST 281 Speech Recognition, Synthesis, and Dialogue


CS 224S / LINGUIST 281 Speech Recognition, Synthesis, and Dialogue Dan Jurafsky Lecture 5: Intro to ASR+HMMs: Forward, Viterbi, Word Error Rate IP Notice:

Outline for Today Speech Recognition Architectural Overview Hidden Markov Models in general and for speech Forward Viterbi Decoding How this fits into the ASR component of the course: Jan 27 (today): HMMs, Forward, Viterbi; Jan 29: Baum-Welch (Forward-Backward); Feb 3: Feature Extraction, MFCCs; Feb 5: Acoustic Modeling and GMMs; Feb 10: N-grams and Language Modeling; Feb 24: Search and Advanced Decoding; Feb 26: Dealing with Variation; Mar 3: Dealing with Disfluencies

LVCSR Large Vocabulary Continuous Speech Recognition ~20,000-64,000 words Speaker independent (vs. speaker-dependent) Continuous speech (vs isolated-word)

Current error rates (ballpark numbers; exact numbers depend very much on the specific corpus)
Task / Vocabulary / Error rate (%):
Digits: 11 / 0.5
WSJ read speech: 5K / 3
WSJ read speech: 20K / 3
Broadcast news: 64,000+ / 10
Conversational telephone speech: 64,000+ / 20

HSR versus ASR
Task / Vocab / ASR / Human SR:
Continuous digits: 11 / .5 / .009
WSJ 1995 clean: 5K / 3 / 0.9
WSJ 1995 w/noise: 5K / 9 / 1.1
SWBD 2004: 65K / 20 / 4
Conclusions: machines are about 5 times worse than humans, and the gap increases with noisy speech. These numbers are rough; take them with a grain of salt.

LVCSR Design Intuition Build a statistical model of the speech-to-words process Collect lots and lots of speech, and transcribe all the words. Train the model on the labeled speech Paradigm: Supervised Machine Learning + Search

The Noisy Channel Model Search through space of all possible sentences. Pick the one that is most probable given the waveform.

The Noisy Channel Model (II) What is the most likely sentence out of all sentences in the language L given some acoustic input O? Treat the acoustic input O as a sequence of individual observations O = o_1, o_2, o_3, …, o_t. Define a sentence as a sequence of words: W = w_1, w_2, w_3, …, w_n.

Noisy Channel Model (III) Probabilistic implication: pick the highest-probability sentence W: Ŵ = argmax_{W ∈ L} P(W | O). We can use Bayes' rule to rewrite this: Ŵ = argmax_W P(O | W) P(W) / P(O). Since the denominator is the same for each candidate sentence W, we can ignore it for the argmax: Ŵ = argmax_W P(O | W) P(W).
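
To make the argmax concrete, here is a minimal sketch (with made-up probabilities, not taken from any real recognizer) of scoring candidate sentences by P(O|W) · P(W) in log space:

```python
import math

# Hypothetical candidate sentences with made-up acoustic likelihoods P(O|W)
# and language-model priors P(W); the argmax is computed in log space to
# avoid floating-point underflow.
candidates = {
    "recognize speech":   {"acoustic": 1e-7, "prior": 1e-5},
    "wreck a nice beach": {"acoustic": 3e-7, "prior": 1e-9},
}

def log_score(c):
    return math.log(c["acoustic"]) + math.log(c["prior"])

best = max(candidates, key=lambda w: log_score(candidates[w]))
print(best)  # -> recognize speech
```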

Speech Recognition Architecture

Noisy channel model: Ŵ = argmax_W P(O | W) × P(W), where P(O | W) is the likelihood (acoustic model) and P(W) is the prior (language model).

The noisy channel model Ignoring the denominator leaves us with two factors: P(Source) and P(Signal|Source)

Speech Architecture meets Noisy Channel

Architecture: Five easy pieces (only 2 for today) Feature extraction Acoustic Modeling HMMs, Lexicons, and Pronunciation Decoding Language Modeling

Lexicon A list of words, each one with a pronunciation in terms of phones. We get these from an on-line pronunciation dictionary. CMU dictionary: 127K words http://www.speech.cs.cmu.edu/cgi-bin/cmudict We’ll represent the lexicon as an HMM.
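
As a concrete sketch, the lexicon can be stored as a mapping from words to phone strings; the entries below are CMUdict-style ARPAbet pronunciations written from memory, so treat them as illustrative rather than as dictionary quotes:

```python
# Toy pronunciation lexicon: word -> list of phones (ARPAbet, stress marks dropped).
lexicon = {
    "five": ["f", "ay", "v"],
    "six":  ["s", "ih", "k", "s"],
    "nine": ["n", "ay", "n"],
}

for word, phones in lexicon.items():
    print(f"{word}: {' '.join(phones)}")
```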

HMMs for speech

Phones are not homogeneous!

Each phone has 3 subphones

Resulting HMM word model for “six”
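
A minimal sketch of how a word's phone string expands into a left-to-right HMM state sequence with three subphones per phone; the state-naming scheme here is made up for illustration:

```python
def word_to_hmm_states(word, lexicon, subphones=("beg", "mid", "end")):
    """Expand a word into its left-to-right subphone state names,
    three states (beginning, middle, end) per phone."""
    states = []
    for phone in lexicon[word]:
        states.extend(f"{phone}_{sub}" for sub in subphones)
    return states

lexicon = {"six": ["s", "ih", "k", "s"]}
print(word_to_hmm_states("six", lexicon))
# ['s_beg', 's_mid', 's_end', 'ih_beg', 'ih_mid', 'ih_end',
#  'k_beg', 'k_mid', 'k_end', 's_beg', 's_mid', 's_end']
```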

HMM for the digit recognition task

More formally: Toward HMMs A weighted finite-state automaton: an FSA with probabilities on the arcs; the probabilities leaving any state must sum to one A Markov chain (or observable Markov Model): a special case of a weighted automaton in which the input sequence uniquely determines which states the automaton will go through Markov chains can’t represent inherently ambiguous problems Useful for assigning probabilities to unambiguous sequences

Markov chain for weather

Markov chain for words

Markov chain = “First-order observable Markov Model” A set of states Q = q_1, q_2 … q_N; the state at time t is q_t Transition probabilities: a set of probabilities A = a_01 a_02 … a_n1 … a_nn. Each a_ij represents the probability of transitioning from state i to state j The set of these is the transition probability matrix A Distinguished start and end states

Markov chain = “First-order observable Markov Model” Current state only depends on the previous state (Markov assumption): P(q_i | q_1 … q_{i-1}) = P(q_i | q_{i-1})

Another representation for start state Instead of a start state, use a special initial probability vector π: an initial probability distribution over start states, π_i = P(q_1 = i), 1 ≤ i ≤ N. Constraint: Σ_{j=1}^{N} π_j = 1

The weather figure using pi

The weather figure: specific example

Markov chain for weather What is the probability of 4 consecutive warm days? The sequence is warm-warm-warm-warm, i.e., state sequence 3-3-3-3. P(3,3,3,3) = π_3 · a_33 · a_33 · a_33 = 0.2 × (0.6)^3 = 0.0432
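
A minimal sketch of that computation; only π_3 = 0.2 and a_33 = 0.6 come from the weather figure, the remaining entries below are placeholders chosen so each row sums to one:

```python
import numpy as np

# States 1, 2, 3 of the weather chain (0-based indices here).
pi = np.array([0.3, 0.5, 0.2])          # only pi[2] = 0.2 is from the figure
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])         # only A[2, 2] = 0.6 is from the figure

def chain_probability(states):
    """P(q1, ..., qT) for a first-order Markov chain, states given 0-based."""
    p = pi[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= A[prev, cur]
    return p

print(chain_probability([2, 2, 2, 2]))  # 0.2 * 0.6**3 = 0.0432
```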

How about these sequences? Hot hot hot hot. Cold hot cold hot. What does the difference in these probabilities tell you about the real-world weather information encoded in the figure?

HMM for Ice Cream You are a climatologist in the year 2799 studying global warming. You can’t find any records of the weather in Baltimore, MD for the summer of 2008, but you find Jason Eisner’s diary, which lists how many ice creams Jason ate every day that summer. Our job: figure out how hot it was.

Hidden Markov Model For Markov chains, the output symbols are the same as the states: see hot weather, we’re in state hot. But in named-entity or part-of-speech tagging (and speech recognition and other things) the output symbols are words while the hidden states are something else: part-of-speech tags, named entity tags. So we need an extension! A Hidden Markov Model is an extension of a Markov chain in which the output symbols are not the same as the states. This means we don’t know which state we are in.

Hidden Markov Models

Assumptions Markov assumption: P(q_i | q_1 … q_{i-1}) = P(q_i | q_{i-1}). Output-independence assumption: the probability of an output observation o_i depends only on the state that produced it, i.e., P(o_i | q_1 … q_T, o_1 … o_T) = P(o_i | q_i).

Eisner task Given: an ice cream observation sequence 1,2,3,2,2,2,3,… Produce: a weather sequence H,C,H,H,H,C,…

HMM for ice cream

Different types of HMM structure Ergodic = fully-connected Bakis = left-to-right

The Three Basic Problems for HMMs (Jack Ferguson at IDA in the 1960s) Problem 1 (Evaluation): Given the observation sequence O=(o_1 o_2 … o_T) and an HMM model λ = (A,B), how do we efficiently compute P(O|λ), the probability of the observation sequence given the model? Problem 2 (Decoding): Given the observation sequence O=(o_1 o_2 … o_T) and an HMM model λ = (A,B), how do we choose a corresponding state sequence Q=(q_1 q_2 … q_T) that is optimal in some sense (i.e., best explains the observations)? Problem 3 (Learning): How do we adjust the model parameters λ = (A,B) to maximize P(O|λ)?
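
One concrete way to hold the parameters λ = (A, B) together with the initial distribution π is a small container like the one below; this is just a convenient sketch, not a standard API, and the later sketches also pass π, A, B as plain arrays:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    """Discrete-observation HMM: lambda = (A, B) plus initial distribution pi."""
    pi: np.ndarray  # shape (N,):   pi[i]   = P(q1 = i)
    A: np.ndarray   # shape (N, N): A[i, j] = P(q_t = j | q_{t-1} = i)
    B: np.ndarray   # shape (N, V): B[j, k] = P(o_t = k | q_t = j)
```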

Problem 1: computing the observation likelihood Given the following HMM: How likely is the sequence 3 1 3?

How to compute likelihood For a Markov chain, we just follow the states 3 1 3 and multiply the probabilities. But for an HMM, we don’t know what the states are! So let’s start with a simpler situation: computing the observation likelihood for a given hidden state sequence. Suppose we knew the weather and wanted to predict how much ice cream Jason would eat, i.e., P(3 1 3 | H H C).

Computing likelihood of 3 1 3 given hidden state sequence: P(3 1 3 | H H C) = P(3 | H) × P(1 | H) × P(3 | C)

Computing joint probability of observation and state sequence: P(3 1 3, H H C) = P(H | start) × P(H | H) × P(C | H) × P(3 | H) × P(1 | H) × P(3 | C)
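
A minimal sketch of that joint probability P(O, Q) = P(Q) · P(O|Q) for the ice-cream HMM; the transition and emission numbers are illustrative stand-ins, not necessarily the values in the figure:

```python
import numpy as np

state_index = {"H": 0, "C": 1}
pi = np.array([0.8, 0.2])            # P(q1 = H), P(q1 = C)   (illustrative)
A = np.array([[0.6, 0.4],            # transitions out of H
              [0.5, 0.5]])           # transitions out of C
B = np.array([[0.2, 0.4, 0.4],       # P(1|H), P(2|H), P(3|H)
              [0.5, 0.4, 0.1]])      # P(1|C), P(2|C), P(3|C)

def joint_probability(ice_creams, weather):
    """P(O, Q) = P(Q) * P(O | Q) for ice-cream counts O and weather states Q."""
    q = [state_index[s] for s in weather]
    p = pi[q[0]] * B[q[0], ice_creams[0] - 1]
    for t in range(1, len(ice_creams)):
        p *= A[q[t - 1], q[t]] * B[q[t], ice_creams[t] - 1]
    return p

print(joint_probability([3, 1, 3], "HHC"))  # one term in the sum that gives P(3 1 3)
```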

Computing total likelihood of 3 1 3 We would need to sum over Hot hot cold Hot hot hot Hot cold hot …. How many possible hidden state sequences are there for this sequence? How about in general for an HMM with N hidden states and a sequence of T observations? N^T. So we can’t just do a separate computation for each hidden state sequence.

Instead: the Forward algorithm A kind of dynamic programming algorithm Just like Minimum Edit Distance Uses a table to store intermediate values Idea: Compute the likelihood of the observation sequence By summing over all possible hidden state sequences But doing this efficiently By folding all the sequences into a single trellis

The forward algorithm The goal of the forward algorithm is to compute the total observation likelihood P(O | λ). We’ll do this by recursion.

The forward algorithm Each cell of the forward algorithm trellis, α_t(j), represents the probability of being in state j after seeing the first t observations, given the automaton λ. Each cell thus expresses the following probability: α_t(j) = P(o_1, o_2 … o_t, q_t = j | λ)

The Forward Recursion: α_t(j) = Σ_{i=1}^{N} α_{t-1}(i) a_ij b_j(o_t)

The Forward Trellis

We update each cell

The Forward Algorithm
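
A minimal sketch of the forward algorithm for a discrete-observation HMM (π, A, B as arrays); the ice-cream parameters reuse the illustrative numbers from the earlier sketch, not the figure's exact values:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Total observation likelihood P(O | lambda) via the forward trellis.
    pi: (N,), A: (N, N), B: (N, V); obs: 0-based observation symbol indices."""
    N, T = len(pi), len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                       # initialization
    for t in range(1, T):
        for j in range(N):
            # sum over every state i we could have come from
            alpha[t, j] = np.sum(alpha[t - 1] * A[:, j]) * B[j, obs[t]]
    return alpha[-1].sum()                             # termination

pi = np.array([0.8, 0.2])
A = np.array([[0.6, 0.4], [0.5, 0.5]])
B = np.array([[0.2, 0.4, 0.4], [0.5, 0.4, 0.1]])
print(forward(pi, A, B, [2, 0, 2]))  # observations 3, 1, 3 as 0-based indices
```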

Decoding Given an observation sequence (e.g., 3 1 3) and an HMM, the task of the decoder is to find the best hidden state sequence. That is: given the observation sequence O=(o_1 o_2 … o_T) and an HMM model λ = (A,B), how do we choose a corresponding state sequence Q=(q_1 q_2 … q_T) that is optimal in some sense (i.e., best explains the observations)?

Decoding One possibility: for each hidden state sequence Q (HHH, HHC, HCH, …), compute P(O|Q) and pick the highest one. Why not? Because there are N^T such sequences. Instead: the Viterbi algorithm, again a dynamic programming algorithm, which uses a trellis similar to the Forward algorithm’s.

Viterbi intuition We want to compute the joint probability of the observation sequence together with the best state sequence

Viterbi Recursion: v_t(j) = max_{i=1..N} v_{t-1}(i) a_ij b_j(o_t)

The Viterbi trellis

Viterbi intuition Process the observation sequence left to right, filling out the trellis. Each cell: v_t(j) = max_i v_{t-1}(i) a_ij b_j(o_t), the probability of the best path that ends in state j after the first t observations.

Viterbi Algorithm

Viterbi backtrace
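
A minimal sketch of Viterbi with backpointers, mirroring the forward sketch but replacing the sum with a max and then backtracing (same illustrative parameters):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable state path and its probability for a discrete-observation HMM."""
    N, T = len(pi), len(obs)
    v = np.zeros((T, N))                 # v[t, j]: best path probability ending in j at t
    back = np.zeros((T, N), dtype=int)   # back[t, j]: best previous state
    v[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = v[t - 1] * A[:, j]
            back[t, j] = np.argmax(scores)
            v[t, j] = scores[back[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(v[-1]))]       # backtrace from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path)), float(v[-1].max())

pi = np.array([0.8, 0.2])
A = np.array([[0.6, 0.4], [0.5, 0.5]])
B = np.array([[0.2, 0.4, 0.4], [0.5, 0.4, 0.1]])
print(viterbi(pi, A, B, [2, 0, 2]))  # best path as 0/1 indices (0 = H, 1 = C), plus its probability
```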

HMMs for Speech We haven’t yet shown how to learn the A and B matrices for HMMs; we’ll do that on Thursday with the Baum-Welch (Forward-Backward) algorithm. But let’s return to thinking about speech.

Reminder: a word looks like this:

HMM for digit recognition task

The Evaluation (forward) problem for speech The observation sequence O is a series of MFCC vectors The hidden states W are the phones and words For a given phone/word string W, our job is to evaluate P(O|W) Intuition: how likely is the input to have been generated by just that word string W

Evaluation for speech: summing over all different paths!
f ay ay ay ay v v v v
f f ay ay ay ay v v v
f f f f ay ay ay ay v
f f ay ay ay ay ay ay v
f f ay ay ay ay ay ay ay ay v
f f ay v v v v v v v

The forward lattice for “five”

The forward trellis for “five”

Viterbi trellis for “five”

Viterbi trellis for “five”

Search space with bigrams

Viterbi trellis

Viterbi backtrace

Evaluation How to evaluate the word string output by a speech recognizer?

Word Error Rate Word Error Rate = 100 × (Insertions + Substitutions + Deletions) / (Total Words in Correct Transcript)
Alignment example:
REF:  portable ****  PHONE  UPSTAIRS  last night so
HYP:  portable FORM  OF     STORES    last night so
Eval:          I     S      S
WER = 100 × (1 + 2 + 0) / 6 = 50%
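
A minimal sketch of WER computed from a word-level Levenshtein alignment with equal insertion, deletion, and substitution costs, matching the formula above:

```python
def word_error_rate(ref, hyp):
    """WER = 100 * (S + D + I) / len(ref), via edit distance over words."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: minimum edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                                   # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                                   # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + sub)      # substitution or match
    return 100.0 * d[len(r)][len(h)] / len(r)

print(word_error_rate("portable phone upstairs last night so",
                      "portable form of stores last night so"))  # 50.0
```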

NIST sctk-1.3 scoring software: computing WER with sclite http://www.nist.gov/speech/tools/ Sclite aligns a hypothesized text (HYP, from the recognizer) with a correct or reference text (REF, human transcribed):
id: (2347-b-013)
Scores: (#C #S #D #I) 9 3 1 2
REF:  was an engineer SO  I   i was always with **** **** MEN  UM   and they
HYP:  was an engineer **  AND i was always with THEM THEY ALL  THAT and they
Eval:                 D   S                     I    I    S    S

Sclite output for error analysis CONFUSION PAIRS Total (972) With >= 1 occurances (972)
1: 6 -> (%hesitation) ==> on
2: 6 -> the ==> that
3: 5 -> but ==> that
4: 4 -> a ==> the
5: 4 -> four ==> for
6: 4 -> in ==> and
7: 4 -> there ==> that
8: 3 -> (%hesitation) ==> and
9: 3 -> (%hesitation) ==> the
10: 3 -> (a-) ==> i
11: 3 -> and ==> i
12: 3 -> and ==> in
13: 3 -> are ==> there
14: 3 -> as ==> is
15: 3 -> have ==> that
16: 3 -> is ==> this

Sclite output for error analysis (continued)
17: 3 -> it ==> that
18: 3 -> mouse ==> most
19: 3 -> was ==> is
20: 3 -> was ==> this
21: 3 -> you ==> we
22: 2 -> (%hesitation) ==> it
23: 2 -> (%hesitation) ==> that
24: 2 -> (%hesitation) ==> to
25: 2 -> (%hesitation) ==> yeah
26: 2 -> a ==> all
27: 2 -> a ==> know
28: 2 -> a ==> you
29: 2 -> along ==> well
30: 2 -> and ==> it
31: 2 -> and ==> we
32: 2 -> and ==> you
33: 2 -> are ==> i
34: 2 -> are ==> were

Better metrics than WER? WER has been useful But should we be more concerned with meaning (“semantic error rate”)? Good idea, but hard to agree on Has been applied in dialogue systems, where desired semantic output is more clear

Summary: ASR Architecture Five easy pieces: ASR Noisy Channel architecture Feature Extraction: 39 “MFCC” features Acoustic Model: Gaussians for computing p(o|q) Lexicon/Pronunciation Model: HMM of what phones can follow each other Language Model: N-grams for computing p(w_i | w_{i-1}) Decoder: Viterbi algorithm, dynamic programming for combining all these to get the word sequence from speech!

ASR Lexicon: Markov Models for pronunciation

Summary Speech Recognition Architectural Overview Hidden Markov Models in general Forward Viterbi Decoding Hidden Markov models for Speech Evaluation