1
Automatic Speech Recognition Introduction
2
The Human Dialogue System
3
The Human Dialogue System
4
Computer Dialogue Systems
Pipeline: signal → Audition / Automatic Speech Recognition → words → Natural Language Understanding → logical form → Dialogue Management / Planning → Natural Language Generation → words → Text-to-speech → signal
5
Computer Dialogue Systems
Same pipeline, abbreviated: signal → Audition / ASR → words → NLU → logical form → Mgmt. / Planning → NLG → words → Text-to-speech → signal
6
Parameters of ASR Capabilities
Different types of tasks with different difficulties:
Speaking mode (isolated words / continuous speech)
Speaking style (read / spontaneous)
Enrollment (speaker-independent / speaker-dependent)
Vocabulary (small < 20 words / large > 20,000 words)
Language model (finite state / context sensitive)
Signal-to-noise ratio (high > 30 dB / low < 10 dB)
Transducer (high-quality microphone / telephone)
7
The Noisy Channel Model (Shannon)
Message → noisy channel → Signal
Decoding model: find Message* = argmax P(Message|Signal)
But how do we represent each of these things?
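The decoding rule above can be sketched in a few lines. This is a toy illustration only: the messages, priors, and channel likelihoods are made-up values, not from any real ASR system.

```python
# Toy noisy-channel decoder: Message* = argmax_M P(Signal|M) * P(M).
# All probabilities below are invented illustration values.
prior = {"go home": 0.6, "go hone": 0.1, "gnome": 0.3}          # P(M)
likelihood = {"go home": 0.5, "go hone": 0.7, "gnome": 0.2}     # P(Signal|M) for one fixed signal

def decode(prior, likelihood):
    # P(Signal) is constant over M, so it drops out of the argmax.
    return max(prior, key=lambda m: likelihood[m] * prior[m])

print(decode(prior, likelihood))  # "go home": 0.5*0.6 beats 0.7*0.1 and 0.2*0.3
```

Note that Bayes' rule is what lets us replace P(Message|Signal) with P(Signal|Message)P(Message), splitting the problem into an acoustic model and a language model.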
8
What are the basic units for acoustic information?
When selecting the basic unit of acoustic information, we want it to be accurate, trainable and generalizable. Words are good units for small-vocabulary SR, but not a good choice for large-vocabulary and continuous SR: Each word is treated individually, which implies a large amount of training data and storage. The recognition vocabulary may contain words that never appeared in the training data. It is expensive to model interword coarticulation effects.
9
Why phones are better units than words: an example
10
"SAY BITE AGAIN" spoken so that the phonemes are separated in time
Recorded sound spectrogram
11
"SAY BITE AGAIN" spoken normally
12
And why phones are still not the perfect choice
Phonemes are more trainable (there are only about 50 phonemes in English, for example) and generalizable (vocabulary independent). However, each word is not a sequence of independent phonemes! Our articulators move continuously from one position to another. The realization of a particular phoneme is affected by its phonetic neighbourhood, as well as by local stress effects etc. Different realizations of a phoneme are called allophones.
13
Example: different spectrograms for “eh”
14
Triphone model: each triphone captures facts about the preceding and following phone. Monophone: p, t, k. Triphone: iy-p+aa. a-b+c means "phone b, preceded by phone a, followed by phone c". In practice, systems use on the order of 100,000 3phones, and the 3phone model is the one currently used (e.g. Sphinx).
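The a-b+c naming convention can be made concrete with two small helpers. These functions are illustrative only, not part of any Sphinx API.

```python
# Minimal helpers for the a-b+c triphone naming convention described above.
def make_triphone(left, phone, right):
    return f"{left}-{phone}+{right}"

def parse_triphone(name):
    left, rest = name.split("-")
    phone, right = rest.split("+")
    return left, phone, right

# "iy-p+aa": phone "p", preceded by "iy", followed by "aa"
print(parse_triphone("iy-p+aa"))   # ('iy', 'p', 'aa')
print(make_triphone("iy", "p", "aa"))
```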
15
Parts of an ASR System:
Feature Calculation: produces acoustic vectors (xt)
Acoustic Modeling: maps acoustics to 3phones
Pronunciation Modeling: maps 3phones to words (e.g. mail: mAl; the: D&, DE …)
Language Modeling: strings words together (e.g. the cat: 0.029, the dog: 0.031, the mail: 0.054 …)
16
Feature calculation interpretations
17
Feature calculation (spectrogram: frequency vs. time): find the energy at each time step in each frequency channel.
18
Feature calculation: take the Inverse Discrete Fourier Transform to decorrelate the frequencies.
19
Feature calculation. Input: the signal. Output: acoustic observation vectors, e.g.
-0.1 0.3 1.4 -1.2 2.3 2.6 …
0.2 0.1 1.2 -1.2 4.4 2.2 …
0.2 0.0 1.2 -1.2 4.4 2.2 …
-6.1 -2.1 3.1 2.4 1.0 2.2 …
20
Robust Speech Recognition
Different schemes have been developed for dealing with noise and reverberation. Additive noise: reduce the effects of particular frequencies. Convolutional noise: remove the effects of linear filters (cepstral mean subtraction). Cepstrum: Fourier transform of the LOGARITHM of the spectrum.
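Cepstral mean subtraction can be sketched with NumPy. This is a minimal illustration on a random toy signal, assuming per-frame cepstra computed as the inverse FFT of the log magnitude spectrum; a real front end adds windowing, mel filtering, etc.

```python
# Sketch of cepstral mean subtraction (CMS) on toy frames.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 256))            # 100 toy frames of 256 samples

spectrum = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
cepstra = np.fft.irfft(np.log(spectrum + 1e-10), axis=1)  # cepstrum: IFFT of log spectrum

# A linear channel filter adds a constant offset to every frame's cepstrum,
# so subtracting the per-utterance mean removes the channel effect.
cms = cepstra - cepstra.mean(axis=0)
print(np.allclose(cms.mean(axis=0), 0.0))       # mean is (numerically) zero
```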
21
How do we map from vectors to word sequences?
Acoustic observation vectors (-0.1 0.3 1.4 …, 0.2 0.1 1.2 …, …) → ??? → “That you” …
22
HMM (again)! Pattern recognition with HMMs: map the acoustic observation vectors (-0.1 0.3 1.4 …, 0.2 0.1 1.2 …, …) to “That you” …
23
ASR using HMMs Try to solve P(Message|Signal) by breaking the problem up into separate components Most common method: Hidden Markov Models Assume that a message is composed of words Assume that words are composed of sub-word parts (3phones) Assume that 3phones have some sort of acoustic realization Use probabilistic models for matching acoustics to phones to words
24
Creating HMMs for word sequences: Context independent units
3phones
25
“Need” 3phone model
26
Hierarchical system of HMMs
Language model, above a higher-level HMM of a word, above the HMMs of the individual triphones.
27
To simplify, let’s now ignore lower level HMM
Each phone node has a “hidden” HMM
28
HMMs for ASR: “go home”. Markov model backbone composed of sequences of 3phones (hidden because we don’t know the correspondences): g o h o m over acoustic observations x0 x1 x2 x3 x4 x5 x6 x7 x8 x9. Each line represents a probability estimate (more later).
29
HMMs for ASR: “go home”. Markov model backbone composed of phones (hidden because we don’t know the correspondences): g o h o m over acoustic observations x0 x1 x2 x3 x4 x5 x6 x7 x8 x9. Even with the same word hypothesis, we can have different alignments (the red arrows in the slide). Also, we have to search over all word hypotheses.
30
For every HMM (in hierarchy): compute Max probability sequence
X = acoustic observations, (3)phones, phone sequences
W = (3)phones, phone sequences, word sequences
COMPUTE: argmax_W P(W|X) = argmax_W P(X|W)P(W)/P(X) = argmax_W P(X|W)P(W)
(Example word transitions in the slide: p(he|that), p(you|that), over phone strings th a t, h iy, y uw, sh uh d.)
31
Search: when trying to find W* = argmax_W P(W|X), we need to look at (in theory) all possible (3phone, word, etc.) sequences and all possible segmentations/alignments of W and X. Generally this is done by searching the space of W. Viterbi search: a dynamic programming approach that looks for the most likely path. A* search: an alternative method that keeps a stack of hypotheses around. If |W| is large, pruning becomes important. We also need to estimate transition probabilities.
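The Viterbi dynamic program can be sketched compactly. This is a generic textbook Viterbi over a toy two-state HMM, not the Sphinx implementation; states, transitions, and emissions are invented for illustration.

```python
# A minimal Viterbi decoder: most likely hidden state sequence for an HMM.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: best[t - 1][r] * trans_p[r][s])
            best[t][s] = best[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace the best final state back to the start.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["g", "o"]                      # toy phone states
start_p = {"g": 0.9, "o": 0.1}
trans_p = {"g": {"g": 0.5, "o": 0.5}, "o": {"g": 0.1, "o": 0.9}}
emit_p = {"g": {"x0": 0.8, "x1": 0.2}, "o": {"x0": 0.3, "x1": 0.7}}
print(viterbi(["x0", "x0", "x1", "x1"], states, start_p, trans_p, emit_p))
# ['g', 'g', 'o', 'o']
```

Real systems work in log probabilities to avoid underflow, but the recurrence is the same.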
32
Training: speech corpora
Have a speech corpus at hand. It should have word (and preferably phone) transcriptions. Divide it into training, development, and test sets. Develop models of prior knowledge: pronunciation dictionary; grammar, lexical trees. Train acoustic models, possibly realigning the corpus phonetically.
33
Acoustic Model. Assume that you can label each acoustic vector with a phonetic label (e.g. dh, a, t). Collect all of the examples of a phone together and build a Gaussian model (or some other statistical model, e.g. neural networks): N_a(μ, Σ), giving P(X|state=a).
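The Gaussian-per-phone idea above can be sketched as follows. This is a minimal illustration with a diagonal-covariance Gaussian and random toy vectors, not real MFCC data.

```python
# Sketch: fit a diagonal Gaussian to the vectors labelled with one phone,
# then score new vectors with log P(x | state).
import numpy as np

def fit_gaussian(vectors):
    mu = vectors.mean(axis=0)
    var = vectors.var(axis=0) + 1e-6       # floor to avoid zero variance
    return mu, var

def log_likelihood(x, mu, var):
    # log of a diagonal-covariance multivariate normal density
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

rng = np.random.default_rng(1)
examples_a = rng.normal(loc=0.0, size=(200, 3))    # toy vectors labelled "a"
examples_t = rng.normal(loc=3.0, size=(200, 3))    # toy vectors labelled "t"
mu_a, var_a = fit_gaussian(examples_a)
mu_t, var_t = fit_gaussian(examples_t)

x = np.zeros(3)                                     # a vector near phone "a"
print(log_likelihood(x, mu_a, var_a) > log_likelihood(x, mu_t, var_t))  # True
```

A Gaussian mixture per state (as the Sphinx slides later mention) follows the same pattern with a weighted sum of such densities.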
34
Pronunciation model gives the connections between phones and words. Each phone node (dh, a, t) has a self-loop probability (p_dh, p_a, p_t) and an exit probability (1 − p_dh, 1 − p_a, 1 − p_t). Multiple pronunciations (tomato): the phone graph branches (e.g. ey vs. ah) and rejoins.
35
Training models for a sound unit
36
Language Model. The language model gives the connections between words, e.g. bigrams: the probability of two-word sequences, such as p(he|that) and p(you|that).
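Bigram probabilities like p(he|that) are estimated by counting. A minimal maximum-likelihood sketch on an invented toy corpus:

```python
# Estimate bigram probabilities: p(w2|w1) = count(w1 w2) / count(w1).
from collections import Counter

corpus = "that he said that you said that he left".split()
unigrams = Counter(corpus[:-1])                  # histories
bigrams = Counter(zip(corpus, corpus[1:]))

def p(w2, w1):
    return bigrams[(w1, w2)] / unigrams[w1]

print(p("he", "that"))   # 2 of the 3 "that" histories are followed by "he"
print(p("you", "that"))  # the remaining 1 of 3
```

Real language models add smoothing and backoff so unseen bigrams do not get zero probability.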
37
Lexical trees: words sharing phone prefixes share tree branches.
START: S-T-AA-R-TD
STARTING: S-T-AA-R-DX-IX-NG
STARTED: S-T-AA-R-DX-IX-DD
STARTUP: S-T-AA-R-T-AX-PD
START-UP: S-T-AA-R-T-AX-PD
All five share the prefix S T AA R, then branch to TD (start), DX IX NG / DX IX DD (starting / started), and T AX PD (startup / start-up).
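A lexical tree is essentially a trie over phone sequences. A sketch built from the dictionary entries above (the dict encoding and "#words" marker are our own illustration, not Sphinx internals):

```python
# Build a lexical tree (phone-prefix trie) so words sharing a prefix
# share a single path.
def build_tree(lexicon):
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones.split("-"):
            node = node.setdefault(ph, {})
        node.setdefault("#words", []).append(word)   # words ending here
    return root

lexicon = {
    "START": "S-T-AA-R-TD",
    "STARTING": "S-T-AA-R-DX-IX-NG",
    "STARTED": "S-T-AA-R-DX-IX-DD",
    "STARTUP": "S-T-AA-R-T-AX-PD",
}
tree = build_tree(lexicon)
# All four words share the single S-T-AA-R prefix path:
shared = tree["S"]["T"]["AA"]["R"]
print(sorted(shared))   # ['DX', 'T', 'TD']
```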
38
Judging the quality of a system
Usually, ASR performance is judged by the word error rate:
ErrorRate = 100 * (Subs + Ins + Dels) / Nwords
REF: I WANT TO  GO HOME ***
REC: * WANT TWO GO HOME NOW
SC:  D C    S   C  C    I
100 * (1S + 1I + 1D) / 5 = 60%
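The Subs+Ins+Dels count is the word-level edit distance, computable by dynamic programming. A sketch reproducing the 60% example above:

```python
# Word error rate via edit distance (Levenshtein over words).
def wer(ref, rec):
    r, h = ref.split(), rec.split()
    # d[i][j] = min edits to turn the first i ref words into the first j rec words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return 100 * d[len(r)][len(h)] / len(r)

print(wer("I WANT TO GO HOME", "WANT TWO GO HOME NOW"))  # 60.0
```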
39
Judging the quality of a system
Usually, ASR performance is judged by the word error rate. This assumes that all errors are equal. There is also a bit of a mismatch between the optimization criterion and the error measurement. Other (task-specific) measures are sometimes used: task completion, concept error rate.
40
Sphinx4 http://cmusphinx.sourceforge.net
This will be a practical intro to understanding speech recognition, focused on the interface of Sphinx.
41
Sphinx4 Implementation
Basic flow chart of how the components fit together
42
Sphinx4 Implementation
Basic flow chart of how the components fit together
43
Frontend Feature extractor
The frontend is the first component of the system to see the data. It does signal processing to enhance the signal and extracts features.
44
Frontend Feature extractor Mel-Frequency Cepstral Coefficients (MFCCs)
Feature vectors: different formats exist, but MFCC is common. The important point is that it is a way of transforming an analog signal into digital feature vectors of 39 numbers representing phonetic sounds. Observations are taken every 10 ms → 100 feature vectors a second. Ch. 9.3 in Jurafsky & Martin has a nice description of the process.
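The frame-rate arithmetic above is worth making explicit; the 3.5 s utterance length is a hypothetical example.

```python
# One 39-dimensional feature vector every 10 ms -> 100 vectors per second.
frame_shift_ms = 10
vectors_per_second = 1000 // frame_shift_ms
print(vectors_per_second)        # 100

seconds = 3.5                    # hypothetical utterance length
print(int(seconds * vectors_per_second), "vectors of 39 numbers each")
```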
45
Hidden Markov Models (HMMs)
Acoustic Observations. HMMs used for speech recognition have three main components. The first is the observations: the feature vectors.
46
Hidden Markov Models (HMMs)
Acoustic Observations; Hidden States. The hidden states are the phones, partial phones and words, which we are trying to figure out.
47
Hidden Markov Models (HMMs)
Acoustic Observations; Hidden States; Acoustic Observation Likelihoods. The observation likelihoods are the probabilities of a feature vector being generated by a hidden state (phone, etc.): P(features | phone).
48
Hidden Markov Models (HMMs)
“Six”. HMMs are finite state machines, and we can depict them graphically. They consist of emitting states like S1, plus start and end states. Transitions between states are weighted by their probability. They flow left to right: a state can transition to itself or forward, which captures the sequential nature of speech. Phones can vary widely in pronunciation length; self-loops account for this variability (a left-to-right flowing HMM is called a Bakis network).
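The Bakis (left-to-right) topology can be shown as a transition matrix. The 0.6/0.4 values here are made-up illustration probabilities, not from a trained model.

```python
# A Bakis transition matrix for a 3-state phone HMM: each state may loop
# on itself or move forward, never backward.
import numpy as np

A = np.array([
    [0.6, 0.4, 0.0],   # S1: self-loop or advance to S2
    [0.0, 0.6, 0.4],   # S2: self-loop or advance to S3
    [0.0, 0.0, 1.0],   # S3: final emitting state
])
# Left-to-right structure = upper-triangular matrix; rows sum to 1.
print(np.allclose(A, np.triu(A)), np.allclose(A.sum(axis=1), 1.0))  # True True
```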
49
Sphinx4 Implementation
The linguist generates the search graph.
50
Linguist Constructs the search graph of HMMs from: Acoustic model
Statistical language model ~or~ grammar; dictionary. The language model or grammar must contain the same words as the dictionary, and the acoustic model must contain the same phone set as the dictionary.
51
Acoustic Model Constructs the HMMs of phones
Produces observation likelihoods. Contains the acoustic info: it constructs the HMMs for phones just described. It uses probability density functions and Gaussian mixtures to create flexible models of phonetic sounds, which are then used to compute the observation likelihoods P(observation | phone). For more on PDFs and Gaussian methods see Jurafsky & Martin ch.
52
Acoustic Model Constructs the HMMs for units of speech
Produces observation likelihoods. Sampling rate is critical! WSJ vs. WSJ_8k: all models are marked with a sampling rate, and it must match what is in the application. I.e., you can't train on 16 kHz and use it in the wild with 8 kHz data; you will get horrible results.
53
Acoustic Model Constructs the HMMs for units of speech
Produces observation likelihoods. Sampling rate is critical! WSJ vs. WSJ_8k; TIDIGITS, RM1, AN4, HUB4. Creating acoustic models is a lot of work, so usually we use ones that are available. Different models are available, trained on different vocabularies, sampling rates, languages, etc. They can be found in sphinx4 in /models/acoustic; read the readmes in their folders for details.
54
Language Model Word likelihoods
Contains information about how likely certain words are to occur.
55
Language Model: ARPA format. Example entries:
1-grams: -3.7839 board -0.1552; bottom; bunch
2-grams: as the; at all; at the
3-grams: in the lowest; in the middle; in the on
A common format is ARPA, which can be produced using the CMU-Cambridge Statistical Language Modeling Toolkit. It commonly contains tables of probabilities of 1-, 2- and 3-grams. N-grams are listed one per line, preceded by the log of the conditional probability and followed by the log of the backoff weight, ONLY for those N-grams that form a prefix of longer N-grams in the model. The numbers are negative because they are base-10 logarithms of very small probabilities. See Jurafsky & Martin p. 313.
56
Grammar (example: command language)
public <basicCmd> = <startPolite> <command> <endPolite>;
public <startPolite> = (please | kindly | could you) *;
public <endPolite> = [ please | thanks | thank you ];
<command> = <action> <object>;
<action> = (open | close | delete | move);
<object> = [the | a] (window | file | menu);
A way of specifying what words can be used and how: the Java Speech API Grammar Format (JSGF), an alternative to a statistical language model like n-grams. * = 0 or many, [] = optional, () = grouping, | = or.
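One way to get a feel for what a grammar accepts is to randomly expand it. The sketch below hand-translates part of the JSGF rules above into a plain dict; this encoding is ours for illustration, not anything Sphinx uses.

```python
# Randomly expand a hand-translated subset of the JSGF command grammar.
import random

rules = {
    "<command>": [["<action>", "<object>"]],
    "<action>": [["open"], ["close"], ["delete"], ["move"]],
    "<object>": [["the", "window"], ["a", "file"], ["menu"]],
}

def expand(symbol, rng):
    if symbol not in rules:
        return [symbol]              # terminal word
    out = []
    for tok in rng.choice(rules[symbol]):
        out.extend(expand(tok, rng))
    return out

rng = random.Random(0)
for _ in range(3):
    print(" ".join(expand("<command>", rng)))   # e.g. "close a file"
```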
57
Dictionary Maps words to phoneme sequences
Defines what words will be available for recognition, and maps these words to phoneme sequences, which are then used in creating HMMs.
58
Dictionary: example from cmudict.06d.
POULTICE P OW L T AH S
POULTICES P OW L T AH S IH Z
POULTON P AW L T AH N
POULTRY P OW L T R IY
POUNCE P AW N S
POUNCED P AW N S T
POUNCEY P AW N S IY
POUNCING P AW N S IH NG
POUNCY P UW NG K IY
Defines what words will be available for recognition. The CMU pronouncing dictionary is widely used; it contains over 100,000 words and their transcriptions.
59
Sphinx4 Implementation
The SearchManager uses the Features and the SearchGraph to find the best fit path
60
Search Graph We can represent a partial diagram of the digit recognition task like so
61
Search Graph Another representation of the same idea
62
Search Graph Can be statically or dynamically constructed
The entire search graph for the model can be computed ahead of time for small-vocabulary tasks. For larger applications, more likely a partial search graph is constructed ahead of time and dynamically expanded at runtime.
63
Sphinx4 Implementation
Then comes the decoder which constructs the search manager
64
Decoder Maps feature vectors to search graph
Job of the decoder is to use feature vectors from the frontend in conjunction with the search graph generated by the linguist to generate a result
65
Search Manager Searches the graph for the “best fit”
The decoder calls the search manager to search the graph for the best fit
66
Search Manager Searches the graph for the “best fit”
For a given word or phone, we want to determine P(sequence of feature vectors | word or phone), a.k.a. P(O|W): "how likely is the input to have been generated by the word?"
67
Possible alignments of the phones of "five" (f ay v) to ten observation frames:
f ay ay ay ay v v v v v
f f ay ay ay ay v v v v
f f f ay ay ay ay v v v
f f f f ay ay ay ay v v
f f f f ay ay ay ay ay v
f f f f f ay ay ay ay v
f f f f f f ay ay ay v
…
We could calculate every possible probability for a given word given a set of observations, but this would take exponential time to solve.
68
Viterbi Search
The search manager commonly uses the Viterbi algorithm, a form of optimized graph search for finding the most likely sequence of hidden states (phones) in an HMM, based on a sequence of observations over time (O1, O2, O3). More in the appendix of these slides.
69
Pruner Uses algorithms to weed out low scoring paths during decoding
Viterbi is more efficient than brute force, but can still be slow on large search graphs. The search manager often uses a pruner to narrow the possible paths and speed up the search. It commonly prunes based on an absolute maximum number of paths, or on a threshold of probability relative to the currently most probable path.
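Both pruning strategies can be sketched together. The hypotheses and scores below are toy values; real decoders prune in log space during the search, not afterwards.

```python
# Keep at most max_paths hypotheses, and drop any whose probability falls
# below beam * (probability of the current best path).
def prune(paths, max_paths=3, beam=0.1):
    # paths: {hypothesis: probability}
    best = max(paths.values())
    kept = {h: p for h, p in paths.items() if p >= beam * best}
    ranked = sorted(kept, key=kept.get, reverse=True)[:max_paths]
    return {h: kept[h] for h in ranked}

paths = {"go home": 0.50, "go hone": 0.30, "gnome": 0.04,
         "no home": 0.20, "g home": 0.06}
print(prune(paths))   # beam drops "gnome" (0.04 < 0.05); top 3 kept
```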
70
Result Words! Finally, the result is the words contained in the best fit path through the search graph
71
Word Error Rate Most common metric
Measures the number of modifications needed to transform the recognized sentence into the reference sentence: a measure of recognition accuracy, expressed in terms of the number of insertions, deletions and substitutions.
72
Word Error Rate Reference: “This is a reference sentence.”
Result: “This is neuroscience.”
73
Word Error Rate Reference: “This is a reference sentence.”
Result: “This is neuroscience.” Requires 2 deletions and 1 substitution. Errors: 1 deletion (a), 1 deletion (sentence), 1 substitution (reference → neuroscience).
74
Word Error Rate Reference: “This is a reference sentence.”
Result: “This is neuroscience.”
75
Word Error Rate Reference: “This is a reference sentence.”
Result: “This is neuroscience.” (D S D) (2 deletions + 1 substitution) / length of 5 = 0.6; 0.6 * 100 = 60%.
76
Installation details Student report on NLP course web site