Automatic Speech Recognition Introduction. The Human Dialogue System.

Presentation transcript:

Automatic Speech Recognition Introduction

The Human Dialogue System

Computer Dialogue Systems. Pipeline: Audition / Automatic Speech Recognition (signal → words) → Natural Language Understanding (words → logical form) → Dialogue Management / Planning → Natural Language Generation (logical form → words) → Text-to-speech (words → signal).

Parameters of ASR Capabilities. Different types of tasks with different difficulties:
–Speaking mode (isolated words / continuous speech)
–Speaking style (read / spontaneous)
–Enrollment (speaker-independent / speaker-dependent)
–Vocabulary (small < 20 words / large > 20,000 words)
–Language model (finite state / context sensitive)
–Signal-to-noise ratio (high > 30 dB / low < 10 dB)
–Transducer (high-quality microphone / telephone)

The Noisy Channel Model (Shannon). A Message passes through a noisy Channel: Channel + Message = Signal. Decoding model: find Message* = argmax_Message P(Message | Signal). But how do we represent each of these things?
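
To make the decoding model concrete, here is a minimal Python sketch over a two-message toy vocabulary; all probabilities are made up for illustration, not taken from the slides:

# Noisy-channel decoding sketch: pick the message maximizing
# P(Signal | Message) * P(Message), with hypothetical numbers.
likelihood = {"wreck a nice beach": 0.40,   # P(signal | message), acoustic score
              "recognize speech":   0.35}
prior      = {"wreck a nice beach": 0.0001, # P(message), from a language model
              "recognize speech":   0.01}
best = max(likelihood, key=lambda m: likelihood[m] * prior[m])
print(best)  # "recognize speech": the language-model prior decides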

What are the basic units for acoustic information? When selecting the basic unit of acoustic information, we want it to be accurate, trainable, and generalizable. Words are good units for small-vocabulary SR, but a poor choice for large-vocabulary, continuous SR:
–Each word is treated individually, which implies a large amount of training data and storage.
–The recognition vocabulary may contain words that never appeared in the training data.
–It is expensive to model interword coarticulation effects.

Why phones are better units than words: an example

"SAY BITE AGAIN""SAY BITE AGAIN" spoken so that the phonemes are separated in time Recorded sound spectrogram

"SAY BITE AGAIN""SAY BITE AGAIN" spoken normally

And why phones are still not the perfect choice. Phonemes are more trainable (there are only about 50 phonemes in English, for example) and generalizable (vocabulary independent). However, a word is not a sequence of independent phonemes! Our articulators move continuously from one position to another, so the realization of a particular phoneme is affected by its phonetic neighbourhood, as well as by local stress effects, etc. Different realizations of a phoneme are called allophones.

Example: different spectrograms for “eh”

Triphone model. Each triphone captures facts about the preceding and following phone. Monophones: p, t, k. Triphone: iy-p+aa. "a-b+c" means "phone b, preceded by phone a, followed by phone c". In practice, systems use on the order of 100,000 triphones, and the triphone model is the one currently used (e.g., in Sphinx).
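
As a sketch of the a-b+c convention (the function and its "sil" padding at word boundaries are illustrative choices, not prescribed by the slides):

def to_triphones(phones):
    # Expand a phone sequence into a-b+c triphones, padding the
    # boundary contexts with "sil" (a common convention).
    padded = ["sil"] + list(phones) + ["sil"]
    return [f"{padded[i-1]}-{padded[i]}+{padded[i+1]}"
            for i in range(1, len(padded) - 1)]

print(to_triphones(["iy", "p", "aa"]))
# ['sil-iy+p', 'iy-p+aa', 'p-aa+sil']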

Parts of an ASR System:
–Feature Calculation: produces acoustic vectors (x_t)
–Acoustic Modeling: maps acoustics to triphones
–Pronunciation Modeling: maps triphones to words (a dictionary, e.g. dog: dog, mail: mAl, the: D&, DE, …)
–Language Modeling: strings words together (word-pair scores, e.g. cat dog, cat the, the cat, the dog, the mail, …)

Feature calculation interpretations

Feature calculation. Find the energy at each time step in each frequency channel (a time-frequency representation, i.e. a spectrogram).

Feature calculation. Take the inverse discrete Fourier transform (of the log spectrum) to decorrelate the frequency channels.

Feature calculation. Input: the speech waveform; output: a sequence of acoustic observation vectors.
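
A simplified sketch of this front end in NumPy/SciPy: frame the signal, take a windowed log power spectrum, then apply a DCT (the real-valued counterpart of the inverse DFT) to decorrelate the channels. A real MFCC pipeline also applies pre-emphasis and a mel filterbank before the log, which this sketch omits.

import numpy as np
from scipy.fftpack import dct

def cepstral_features(signal, frame_len=400, hop=160, n_coeffs=13):
    # Framing -> Hamming window -> log power spectrum -> DCT.
    window = np.hamming(frame_len)
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        log_spec = np.log(np.abs(np.fft.rfft(frame)) ** 2 + 1e-10)
        feats.append(dct(log_spec, norm='ortho')[:n_coeffs])
    return np.array(feats)   # one acoustic vector per 10 ms hop (at 16 kHz)

x = np.random.randn(16000)           # stand-in for 1 s of 16 kHz audio
print(cepstral_features(x).shape)    # (98, 13)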

Robust Speech Recognition. Different schemes have been developed for dealing with noise and reverberation:
–Additive noise: reduce the effects of particular frequencies.
–Convolutional noise: remove the effects of linear filters (cepstral mean subtraction). Cepstrum: the Fourier transform of the LOGARITHM of the spectrum.
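
Cepstral mean subtraction itself is short once the features are in a frames-by-coefficients matrix; a minimal sketch (using the cepstral_features output above):

import numpy as np

def cepstral_mean_subtraction(feats):
    # A linear channel filter adds a constant in the log-spectral
    # (cepstral) domain, so subtracting the per-utterance mean of
    # each coefficient cancels the channel's effect.
    return feats - feats.mean(axis=0, keepdims=True)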

How do we map from vectors to word sequences? (A sequence of acoustic observation vectors x_0 x_1 … → "That you"???)

HMM (again)! Pattern recognition with HMMs: the vector sequence is mapped to "That you".

ASR using HMMs. Try to solve P(Message|Signal) by breaking the problem up into separate components. Most common method: Hidden Markov Models.
–Assume that a message is composed of words.
–Assume that words are composed of sub-word parts (triphones).
–Assume that triphones have some sort of acoustic realization.
–Use probabilistic models for matching acoustics to phones, and phones to words.

Creating HMMs for word sequences: from context-independent units to triphones.

Triphone model of the word "need".

Hierarchical system of HMMs: an HMM of a triphone, a higher-level HMM of a word, and the language model on top.

To simplify, let's now ignore the lower-level HMMs: each phone node has a "hidden" HMM (H²MM).

HMMs for ASR. A Markov model backbone composed of sequences of triphones (hidden, because we don't know the correspondences). Acoustic observations x_0 … x_9 are emitted along the path (e.g., "go home" realized as "g oooooo h mm"). Each line in the figure represents a probability estimate (more later).

HMMs for ASR. Even with the same word hypothesis, there can be different alignments between states and observations (the red arrows in the figure). Also, we have to search over all word hypotheses.

For every HMM in the hierarchy, compute the maximum-probability sequence. (Figure: a word lattice over "that he …" / "that you …" with bigram scores p(he|that), p(you|that).) X = acoustic observations, (tri)phones, or phone sequences; W = (tri)phones, phone sequences, or word sequences. COMPUTE: argmax_W P(W|X) = argmax_W P(X|W) P(W) / P(X) = argmax_W P(X|W) P(W).

Search. When trying to find W* = argmax_W P(W|X), we need to look at (in theory):
–All possible (triphone, word, etc.) sequences
–All possible segmentations/alignments of W and X
Generally, this is done by searching the space of W:
–Viterbi search: a dynamic programming approach that looks for the most likely path (a sketch follows below)
–A* search: an alternative method that keeps a stack of hypotheses around
If |W| is large, pruning becomes important. We also need to estimate transition probabilities.
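
A minimal Viterbi sketch over a generic HMM, using log probabilities to avoid underflow; the array layout is an illustrative choice, not any particular toolkit's API:

import numpy as np

def viterbi(log_init, log_trans, log_obs):
    # log_init:  (S,)   log P(state_0)
    # log_trans: (S, S) log P(state_t | state_{t-1})
    # log_obs:   (T, S) log P(observation_t | state)
    T, S = log_obs.shape
    delta = log_init + log_obs[0]        # best log score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = [int(delta.argmax())]         # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(delta.max())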

Training: speech corpora. Have a speech corpus at hand:
–It should have word (and preferably phone) transcriptions.
–Divide it into training, development, and test sets.
Develop models of prior knowledge:
–Pronunciation dictionary
–Grammar, lexical trees
Train acoustic models, possibly realigning the corpus phonetically.

Acoustic Model. Assume that you can label each acoustic vector with a phonetic label (e.g., dh, aa, t). Collect all of the examples of a phone together and build a Gaussian model N_a(μ, σ), or some other statistical model (e.g., neural networks), giving P(X | state = a).
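
A sketch of the per-phone Gaussian idea with diagonal covariance (illustrative; real systems use Gaussian mixtures or neural networks, as the slide notes):

import numpy as np

def fit_phone_gaussians(vectors, labels):
    # One diagonal-covariance Gaussian per phone label.
    models = {}
    labels = np.asarray(labels)
    for phone in set(labels):
        X = vectors[labels == phone]
        models[phone] = (X.mean(axis=0), X.var(axis=0) + 1e-6)
    return models

def log_likelihood(x, model):
    # log N(x; mu, diag(var)) = log P(x | state = phone)
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)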

Pronunciation model. The pronunciation model gives connections between phones and words. Multiple pronunciations (e.g., "tomato"): a network of phone alternatives, with probabilities p and 1 − p at each branch (t [ow | ah] m [ey | aa] t ow).

Training models for a sound unit

Language Model. The language model gives connections between words, e.g., bigrams: probabilities of two-word sequences, such as p(he | that) and p(you | that) for the paths "that he …" vs. "that you …".
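
A sketch of bigram estimation from counts (maximum likelihood with add-one smoothing, a simplification; real systems use better smoothing):

from collections import Counter

def train_bigram_lm(sentences):
    # Estimate P(w2 | w1) from tokenized sentences, with add-one
    # smoothing over the observed context vocabulary.
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        tokens = ["<s>"] + words + ["</s>"]
        unigrams.update(tokens[:-1])        # contexts
        bigrams.update(zip(tokens, tokens[1:]))
    V = len(unigrams)
    return lambda w1, w2: (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

p = train_bigram_lm([["that", "he", "said"], ["that", "you", "said"]])
print(p("that", "he"), p("that", "you"))   # equal bigram probabilities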

Lexical trees. Pronunciations:
START     S T AA R TD
STARTING  S T AA R DX IX NG
STARTED   S T AA R DX IX DD
STARTUP   S T AA R T AX PD
START-UP  S T AA R T AX PD
These words share the prefix S T AA R, so they are stored in a tree that branches only where the pronunciations diverge (TD / DX IX {NG, DD} / T AX PD).
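
A sketch of building such a tree as a prefix trie over the phone strings above (plain dicts; a real decoder would attach HMM states to the nodes):

def build_lexical_tree(lexicon):
    # Prefix tree over phone sequences; each word is attached at the
    # node where its pronunciation ends.
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node.setdefault("#words", []).append(word)
    return root

lexicon = {
    "START":    "S T AA R TD".split(),
    "STARTING": "S T AA R DX IX NG".split(),
    "STARTED":  "S T AA R DX IX DD".split(),
    "STARTUP":  "S T AA R T AX PD".split(),
    "START-UP": "S T AA R T AX PD".split(),
}
tree = build_lexical_tree(lexicon)
# All five words share the S-T-AA-R prefix, so the search graph can
# share those arcs and branch only afterwards.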

Judging the quality of a system. Usually, ASR performance is judged by the word error rate:
ErrorRate = 100 * (Subs + Ins + Dels) / Nwords
REF: I WANT TO  GO HOME ***
REC: * WANT TWO GO HOME NOW
SC:  D C    S   C  C    I
100 * (1 S + 1 I + 1 D) / 5 = 60%
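
A sketch of this computation via word-level edit distance (standard Levenshtein dynamic programming; the S/I/D counts are implicit in the distance):

def word_error_rate(reference, recognized):
    # WER = 100 * (subs + ins + dels) / number of reference words,
    # computed as the word-level edit distance.
    ref, rec = reference.split(), recognized.split()
    dp = [[0] * (len(rec) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                    # delete all reference words
    for j in range(len(rec) + 1):
        dp[0][j] = j                    # insert all recognized words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(rec) + 1):
            sub = dp[i-1][j-1] + (ref[i-1] != rec[j-1])
            dp[i][j] = min(sub, dp[i-1][j] + 1, dp[i][j-1] + 1)
    return 100.0 * dp[-1][-1] / len(ref)

print(word_error_rate("I WANT TO GO HOME", "WANT TWO GO HOME NOW"))  # 60.0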

Judging the quality of a system. Usually, ASR performance is judged by the word error rate. This assumes that all errors are equal; there is also a bit of a mismatch between the optimization criterion and the error measurement. Other (task-specific) measures are sometimes used:
–Task completion
–Concept error rate

Sphinx4

Sphinx4 Implementation

Frontend. The feature extractor computes Mel-Frequency Cepstral Coefficients (MFCCs) and outputs feature vectors.

Hidden Markov Models (HMMs): acoustic observations, hidden states, and acoustic observation likelihoods.

Hidden Markov Models (HMMs): example HMM for the word "six".

Sphinx4 Implementation

Linguist. Constructs the search graph of HMMs from:
–Acoustic model
–Statistical language model ~or~ grammar
–Dictionary

Acoustic Model. Constructs the HMMs for units of speech and produces observation likelihoods. The sampling rate is critical! (WSJ vs. WSJ_8k.) Examples: TIDIGITS, RM1, AN4, HUB4.

Language Model Word likelihoods

Language Model. ARPA format example:
1-grams: board, bottom, bunch, …
2-grams: as the, at all, at the, …
3-grams: in the lowest, in the middle, in the on, …
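
For context, a full ARPA file wraps those sections in \data\ … \end\ markers, with a base-10 log probability before each n-gram and an optional back-off weight after it. A hedged reconstruction using the slide's entries (all numbers are made up purely to show the layout):

\data\
ngram 1=3
ngram 2=3
ngram 3=3
\1-grams:
-2.32 board -0.30
-2.45 bottom -0.29
-2.68 bunch -0.27
\2-grams:
-1.30 as the -0.18
-1.52 at all -0.12
-1.70 at the -0.10
\3-grams:
-0.60 in the lowest
-0.85 in the middle
-1.00 in the on
\end\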

Grammar (example: command language):
public <basicCmd> = <startPolite> <command> <endPolite>;
public <startPolite> = (please | kindly | could you) *;
public <endPolite> = [ please | thanks | thank you ];
<command> = <action> <object>;
<action> = (open | close | delete | move);
<object> = [the | a] (window | file | menu);

Dictionary Maps words to phoneme sequences

Dictionary. Example from cmudict.06d:
POULTICE  P OW L T AH S
POULTICES P OW L T AH S IH Z
POULTON   P AW L T AH N
POULTRY   P OW L T R IY
POUNCE    P AW N S
POUNCED   P AW N S T
POUNCEY   P AW N S IY
POUNCING  P AW N S IH NG
POUNCY    P UW NG K IY
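
A sketch of loading such a file into a word-to-pronunciations map (file name taken from the slide; alternate-pronunciation suffixes like "(2)" are left unhandled here):

def load_dictionary(path):
    # cmudict-style lines: WORD PH1 PH2 ...  (one pronunciation per line)
    pronunciations = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts:
                pronunciations.setdefault(parts[0], []).append(parts[1:])
    return pronunciations

# lexicon = load_dictionary("cmudict.06d")
# lexicon["POULTRY"]  ->  [['P', 'OW', 'L', 'T', 'R', 'IY']]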

Sphinx4 Implementation

Search Graph. The search graph can be statically or dynamically constructed.

Sphinx4 Implementation

Decoder Maps feature vectors to search graph

Search Manager. Searches the graph for the "best fit": P(sequence of feature vectors | word/phone), a.k.a. P(O|W), i.e. "how likely is the input to have been generated by the word?"

Alternative alignments of the phones of "five" (f ay v) against ten frames:
f ay ay ay ay v v v v v
f f ay ay ay ay v v v v
f f f ay ay ay ay v v v
f f f f ay ay ay ay v v
f f f f ay ay ay ay ay v
f f f f f ay ay ay ay v
f f f f f f ay ay ay v
…

Viterbi Search. A trellis of states over time (observations O1, O2, O3) through which dynamic programming finds the best path.

Pruner. Uses algorithms to weed out low-scoring paths during decoding.

Result Words!

Word Error Rate. The most common metric: measures the number of modifications needed to transform the recognized sentence into the reference sentence.

Word Error Rate
Reference: "This is a reference sentence."
Result: "This is neuroscience."
Requires 2 deletions and 1 substitution:
REF: This is a reference    sentence
REC: This is * neuroscience ***
SC:  C    C  D S            D
WER = 100 * (1 S + 0 I + 2 D) / 5 = 60%

Installation details: http://cmusphinx.sourceforge.net/wiki/sphinx4:howtobuildand_run_sphinx4 (see also the student report on the NLP course web site).