Slide 1: CS 552/652 Speech Recognition with Hidden Markov Models
Spring 2010, Oregon Health & Science University, Center for Spoken Language Understanding
John-Paul Hosom
Lecture 2, March 31: Induction, DTW, and Issues in ASR

Slide 2: Induction & Dynamic Programming
Induction (from Floyd & Beigel, The Language of Machines, pp. 39-66) is a technique for proving theorems; it is used in Hidden Markov Models to guarantee "optimal" results. We understand induction by doing example proofs.
Suppose P(n) is a statement about the number n, and we want to prove that P(n) is true for all n ≥ 0. An inductive proof shows both of the following:
Base case: P(0) is true
Induction: (∀n ≥ 0) [P(n) ⇒ P(n+1)]
In the inductive case, we want to show that if P is true for n (the assumption), then it must be true for n+1. We never prove P is true for any specific value of n other than 0. If both cases are shown, then P(n) is true for all n ≥ 0.

Slide 3: Induction & Dynamic Programming
Example: prove that a given summation formula holds for n ≥ 0.
Step 1: Prove the base case (n = 0).
Step 2: Prove the inductive case (in other words, show that if the formula is true for n, then it is true for n+1).
Step 2a: Assume that the formula is true for some fixed value of n.

Slide 4: Induction & Dynamic Programming
Step 2b: Extend the equation to the next value of n, using the definition of Σ, the assumption from 2a, and algebra. We have now shown what we wanted to show at the beginning of Step 2: we proved the case for (n+1), assuming that the case for n is true.
If we look at the base case (n=0), we can show truth for n=0. Given that the case for n=0 is true, the case for n=1 is true. Given that the case for n=1 is true, the case for n=2 is true (etc.). By proving the base case and the inductive step, we prove the statement for all n ≥ 0.
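As a concrete instance of Steps 1 through 2b, assuming the example is the standard sum-of-integers formula (the justifications "definition of Σ", "from 2a", and "algebra" match this proof step for step):

```latex
% Claim: for all n >= 0,
\sum_{i=0}^{n} i = \frac{n(n+1)}{2}

% Step 1, base case (n = 0):
\sum_{i=0}^{0} i = 0 = \frac{0\,(0+1)}{2}

% Step 2a: assume the claim holds for some fixed n.
% Step 2b: extend to n + 1:
\sum_{i=0}^{n+1} i
  = \Bigl(\sum_{i=0}^{n} i\Bigr) + (n+1)  % (definition of \Sigma)
  = \frac{n(n+1)}{2} + (n+1)              % (from 2a)
  = \frac{(n+1)(n+2)}{2}                  % (algebra)
```

This is exactly the statement P(n+1), so together with the base case the claim holds for all n ≥ 0.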

Slide 5: Induction & Dynamic Programming
Dynamic Programming, or the inductive technique: "The term was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he had refined this to the modern meaning, which refers specifically to nesting smaller decision problems inside larger decisions." [1]
Two approaches:
1. Top Down: Start by trying to compute the final answer. To do that, we need to solve sub-problems. Each sub-problem requires solving additional sub-problems, until reaching the "base case". The process can be sped up by storing ("memoizing") answers to sub-problems.
2. Bottom Up: Start by solving the base cases, then "build" upon those results to solve larger sub-problems, until reaching the final answer.
[1] From the Wikipedia article on Dynamic Programming; emphasis mine.

Slide 6: Induction & Dynamic Programming
Example: Compute the Fibonacci sequence {0 1 1 2 3 5 8 13 21 …} where F(n) = F(n-1) + F(n-2).
Top-Down pseudocode: [1]

memoized[0] = 0
memoized[1] = 1
maxMemoized = 1

int fibonacci(n) {
    if (n <= maxMemoized)
        return(memoized[n])
    else {
        f = fibonacci(n-1) + fibonacci(n-2)
        memoized[n] = f
        maxMemoized = n
        return(f)
    }
}

[1] Based on the Wikipedia article on Dynamic Programming

Slide 7: Induction & Dynamic Programming
Example: Compute the Fibonacci sequence {0 1 1 2 3 5 8 13 21 …} where F(n) = F(n-1) + F(n-2).
Bottom-Up pseudocode: [1]

int fibonacci(n) {
    if (n == 0) return(0)
    if (n == 1) return(1)
    fAtNMinusTwo = 0
    fAtNMinusOne = 1
    for (idx = 2; idx <= n; idx++) {
        f = fAtNMinusTwo + fAtNMinusOne
        fAtNMinusTwo = fAtNMinusOne
        fAtNMinusOne = f
    }
    return(f)
}

[1] Based on the Wikipedia article on Dynamic Programming
We will be using the Bottom-Up approach in this class.
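The two pseudocode sketches above can be written as runnable Python (a minimal sketch; the function names are mine):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_top_down(n):
    """Top-down: recurse from n, memoizing answers to sub-problems."""
    return n if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)


def fib_bottom_up(n):
    """Bottom-up: solve the base cases first, then build up toward n."""
    if n < 2:
        return n
    f_prev2, f_prev1 = 0, 1          # F(0), F(1)
    for _ in range(2, n + 1):
        f_prev2, f_prev1 = f_prev1, f_prev2 + f_prev1
    return f_prev1
```

Both reproduce the sequence {0 1 1 2 3 5 8 13 21 …}; the bottom-up version avoids recursion entirely, which is why the same style scales to the DTW and HMM searches later in the course.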

Slide 8: Induction & Dynamic Programming
"Greedy Algorithm": Make a locally-optimal choice going forward at each step, hoping (but not guaranteeing) that the globally-optimal solution will be found at the last step.
Example: Travelling Salesman Problem: Given a number of cities, what is the shortest route that visits each city exactly once and then returns to the starting city?
[Figure: five cities (Vancouver, Gresham, Hillsboro, Salem, Bend) with pairwise distances. The greedy route costs 26+21+58+132+183 = 420; a better solution costs 26+21+146+132+53 = 378.]

Slide 9: Induction & Dynamic Programming
Exhaustive solution: compute the distance of all possible routes, and select the shortest. The time required is O(n!) where n is the number of cities; with even moderate values of n, this solution is impractical.
Greedy Algorithm solution: at each city, the next city to visit is the unvisited city nearest to the current city. This process does not guarantee that the globally-optimal solution will be found, but it is fast: O(n^2).
Dynamic-Programming solution: does guarantee that the globally-optimal solution will be found, because it relies on induction. For the Travelling Salesman problem, the solution [1] is O(n^2 · 2^(n-1)). For speech problems, the dynamic-programming solution is O(n^2 T), where n is the number of states (not used in DTW but used in HMMs) and T is the number of time frames.
[1] Bellman, R., "Dynamic Programming Treatment of the Travelling Salesman Problem," Journal of the ACM (JACM), vol. 9, no. 1, January 1962, pp. 61-63.
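The greedy-vs.-exhaustive contrast can be sketched in Python (an illustration with a made-up 4-city distance matrix, not the slide's figure; all names are mine):

```python
import itertools


def route_length(dist, route):
    """Total length of a closed tour that starts and ends at route[0]."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))


def greedy_tour(dist, start=0):
    """Nearest-neighbor heuristic: O(n^2), no optimality guarantee."""
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


def exhaustive_tour(dist, start=0):
    """Try all (n-1)! orderings: O(n!), guaranteed optimal."""
    rest = [c for c in range(len(dist)) if c != start]
    return min(([start] + list(p) for p in itertools.permutations(rest)),
               key=lambda r: route_length(dist, r))
```

On a matrix with a cheap edge hiding behind an expensive one, the greedy tour comes back longer than the exhaustive optimum, which is the point of the slide: local decisions do not guarantee a global optimum.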

Slide 10: Dynamic Time Warping (DTW)
Goal: Given two utterances, find the "best" alignment between pairs of frames from each utterance.
[Figure: a matrix with the time (frame) axis of utterance A on one axis and the time (frame) axis of utterance B on the other. The path through this matrix shows the best pairing of frames from utterance A with frames from utterance B; this path can be considered the best warping between A and B.]

Slide 11: Dynamic Time Warping (DTW)
Dynamic Time Warping:
- Requires a measure of "distance" between 2 frames of speech, one frame from utterance A and one from utterance B.
- Requires heuristics about allowable transitions from one frame in A to another frame in A (and likewise for B).
- Uses a dynamic-programming algorithm to find the best warping.
- Can produce a total "distortion score" for the best warped path.
Distance: a measure of dissimilarity of two frames of speech.
Heuristics:
- Constrain the begin and end times to be (1,1) and (T_A, T_B)
- Allow only monotonically increasing time
- Don't allow too many frames to be skipped
- Can be expressed in terms of "paths" with "slope weights"

Slide 12: Dynamic Time Warping (DTW)
DTW does not require that both patterns have the same length. We may refer to one speech pattern as the "input" and the other speech pattern as the "template", and compare the input with the template.
For speech, we divide the speech signal into equally-spaced frames (e.g. 10 msec) and compute one set of features per frame. The local distance measure is the distance between the features at a pair of frames (one from A, one from B); the local distance between frames is called d. The global distortion from the beginning of the utterance up to the current pair of frames is called D.
DTW can also be applied to related speech problems, such as matching up two similar sequences of phonemes.
Algorithm: similar in some respects to the Viterbi search, which will be covered later.

Slide 13: Dynamic Time Warping (DTW)
Heuristics: the paths P and slope weights m are determined heuristically.
- Paths are considered backward from the target frame.
- Larger weight values are given to less preferable paths.
- Paths always go up and/or to the right (monotonically increasing in time).
- Only evaluate a path P if all of its frames have meaningful values (e.g. don't evaluate a path if one frame is at time < 1, because there is no data for time < 1).
[Figure, Heuristic 1: three paths into each point, P1=(1,0), P2=(1,1), P3=(1,2), each with slope weight 1.
Figure, Heuristic 2: three paths, P1=(1,1)(1,0), P2=(1,1), P3=(1,1)(0,1), with slope weights ½ ½ on P1, 1 on P2, and ½ ½ on P3.]

Slide 14: Dynamic Time Warping (DTW)
Algorithm:
1. Initialization (time 1 is the first time frame): D(1,1) = d(1,1)
2. Recursion: compute each D(x,y) as the local distance d(x,y) plus the minimum, over the allowed predecessor paths, of the slope-weighted global distortion up to that predecessor.
3. Termination: the normalized distortion ζ (zeta) is the final global distortion D(T_x, T_y) divided by a normalizing factor M; M is sometimes defined as T_x, or T_x + T_y, or (T_x^2 + T_y^2)^½.
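A minimal sketch of this bottom-up algorithm, assuming the three-path heuristic with unit slope weights (paths into (x,y) from (x-1,y), (x-1,y-1), and (x,y-1)) and normalizing by M = T_B; the function names are mine:

```python
def dtw(A, B, dist):
    """Bottom-up DTW between frame sequences A and B.

    dist(a, b) is the local distance d between two frames.
    Initialization: D(1,1) = d(1,1).
    Recursion: D(x,y) = d(x,y) + min over predecessor paths.
    Termination: normalized distortion = D(T_A, T_B) / T_B.
    """
    INF = float("inf")
    Ta, Tb = len(A), len(B)
    # D is 1-indexed like the slides; row/column 0 acts as a border.
    D = [[INF] * (Tb + 1) for _ in range(Ta + 1)]
    D[1][1] = dist(A[0], B[0])
    for x in range(1, Ta + 1):
        for y in range(1, Tb + 1):
            if x == 1 and y == 1:
                continue
            best = min(D[x - 1][y], D[x - 1][y - 1], D[x][y - 1])
            if best < INF:
                D[x][y] = dist(A[x - 1], B[y - 1]) + best
    return D[Ta][Tb] / Tb
```

Identical sequences warp onto each other with zero distortion; mismatched sequences accumulate a positive normalized score, which is what the worked examples on the next slides tabulate by hand.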

Slide 15: Dynamic Time Warping (DTW)
Example, using heuristic paths P1=(1,0), P2=(1,1), P3=(1,2), each with slope weight 1; the path begins at (1,1) and ends at (6,6).
[Figure: a 6×6 grid of local distances between the two utterances, with the cumulative distortions D(x,y) filled in cell by cell, giving D(6,6) = 7.]
normalized distortion = 7/6 ≈ 1.17

Slide 16: Dynamic Time Warping (DTW)
Can we do a local look-ahead to speed up the process? For example, at (1,1) we know that there are 3 possible points to go to: (2,1), (2,2), and (2,3). Can we compute the cumulative distortion for those 3 points, select the minimum (e.g. (2,2)), and proceed only from that best point?
No, because the (global) end-point constraint (end at (6,6)) may alter the path; we can't make local decisions under a global constraint. In addition, there are often many ways to end up at a single point, and we don't know all the ways of getting to a point until we visit it and compute its cumulative distortion. Such a look-ahead would transform DTW from dynamic programming into a greedy algorithm.

Slide 17: Dynamic Time Warping (DTW)
Example, using heuristic paths P1=(1,0), P2=(1,1), P3=(0,1), each with slope weight 1; the path begins at (1,1) and ends at (6,6).
[Figure: a 6×6 grid of local distances between the two utterances.]
Cumulative distortions:
D(1,1) = 1   D(2,1) = 3   D(3,1) = 6    D(4,1) = 9  …
D(1,2) = 3   D(2,2) = 2   D(3,2) = 10   D(4,2) = 7  …
D(1,3) = 5   D(2,3) = 10  D(3,3) = 11   D(4,3) = 9  …
D(1,4) = 7   D(2,4) = 7   D(3,4) = 9    D(4,4) = 10 …
D(1,5) = 10  D(2,5) = 9   D(3,5) = 10   D(4,5) = 10 …
D(1,6) = 13  D(2,6) = 11  D(3,6) = 12   D(4,6) = 12 …
normalized distortion = 13/6 ≈ 2.17

Slide 18: Dynamic Time Warping (DTW)
Example, using heuristic paths P1=(1,1)(1,0), P2=(1,1), P3=(1,1)(0,1), with slope weights ½ ½ on P1, 1 on P2, and ½ ½ on P3; the path begins at (1,1) and ends at (6,6).
[Figure: a 6×6 grid of local distances between the two utterances, with the cumulative distortions D(x,y) left blank to be filled in.]

Slide 19: Dynamic Time Warping (DTW)
Local distance measures at one time frame t: we need to compare two frames of speech and measure how similar or dissimilar they are. Each frame has one feature vector: x_t for the features from one signal and y_t for the other signal.
A distance measure should have the following properties:
0 ≤ d(x_t, y_t) < ∞, and d(x_t, y_t) = 0 iff x_t = y_t   (positive definiteness)
d(x_t, y_t) = d(y_t, x_t)   (symmetry)
d(x_t, y_t) ≤ d(x_t, z_t) + d(z_t, y_t)   (triangle inequality)
A distance measure should also, for speech, correlate well with perceived distance. The spectral domain is better than the time domain for this; a perceptually-warped spectral domain is even better.

Slide 20: Dynamic Time Warping (DTW)
Local distance measures at one time frame t:
A simple solution is the "city-block" distance (in log-spectral space) between two sets of signals represented by (vector) features x_t and y_t, where x_t(f) is the log power spectrum of signal x at time t and frequency f, with maximum frequency F-1. The Euclidean distance can also be used.
Other distance measures: the Itakura-Saito distance (also called the Itakura-Saito distortion), the COSH distance, the likelihood-ratio distance, etc.
Note that f can indicate simply a feature index, which may or may not correspond to a frequency band: e.g. 13 cepstral features c_0 through c_12.
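The two simple measures can be sketched as follows (assuming x_t and y_t are equal-length feature vectors indexed by f; the function names are mine):

```python
import math


def city_block_distance(x_t, y_t):
    """City-block (L1) distance: sum over f of |x_t(f) - y_t(f)|."""
    return sum(abs(xf - yf) for xf, yf in zip(x_t, y_t))


def euclidean_distance(x_t, y_t):
    """Euclidean (L2) distance: sqrt of sum over f of (x_t(f) - y_t(f))^2."""
    return math.sqrt(sum((xf - yf) ** 2 for xf, yf in zip(x_t, y_t)))
```

Either function can be passed directly as the local distance d in a DTW implementation; both satisfy the positive-definiteness, symmetry, and triangle-inequality properties listed on slide 19.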

Slide 21: Dynamic Time Warping (DTW)
Termination step: the termination step takes the value at the endpoint (the score of the least distortion over the entire utterance) and divides it by a normalizing factor. The normalizing factor is only necessary in order to compare the DTW result for this template with the DTW results for other templates.
One method of normalizing is to divide by the number of frames in the template; this is quick, easy, and effective for speech recognition and for comparing the results of templates. Another method is to divide by the length of the path taken, adjusting the length by the slope weights at each transition. This requires going back and summing the slope values, so it's slower, but sometimes it's more appropriate.

Slide 22: Dynamic Time Warping (DTW)
DTW can be used to perform ASR by comparing the input speech with a number of templates; the template with the lowest normalized distortion is most similar to the input and is selected as the recognized word.
DTW provides both a historical and a logical basis for studying Hidden Markov Models; HMMs can be seen as an advancement over DTW technology. "Sneak preview":
- DTW compares input speech against a fixed template (local distortion measure); HMMs compare input speech against a "probabilistic template."
- The search algorithm used in HMMs is also similar, but instead of a fixed set of possible paths, there are probabilities of all possible paths.
Remaining question: what are the x_t and y_t features in Slide 18?

Slide 23: Features of the Speech Signal: (Log) Power Spectrum
"Energy" or "Intensity": intensity is sound energy transmitted per second (power) through a unit area in a sound field [Moore p. 9]. Intensity is proportional to the square of the pressure variation [Moore p. 9].
Normalized energy (intensity) is the average of the squared sample values, where x_n is signal x at time sample n and N is the number of time samples.

Slide 24: Features of the Speech Signal: (Log) Power Spectrum
"Energy" or "Intensity": the human auditory system is better suited to relative scales:
energy (bels) = log10(I1/I0)
energy (decibels, dB) = 10 · log10(I1/I0)
where I0 is a reference intensity. If the signal becomes twice as powerful (I1/I0 = 2), then the energy level is 3 dB (3.0103 dB to be more precise).
A typical value for I0 is 20 µPa; 20 µPa is close to the average human absolute threshold for a 1000-Hz sinusoid.
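The energy computations on slides 23-24 can be sketched as (the function names are mine):

```python
import math


def normalized_energy(x):
    """Normalized energy: (1/N) * sum of x_n^2 over the N samples of x."""
    return sum(s * s for s in x) / len(x)


def energy_db(i1, i0=1.0):
    """Relative energy in decibels: 10 * log10(I1 / I0)."""
    return 10.0 * math.log10(i1 / i0)
```

Doubling the power (I1/I0 = 2) gives energy_db(2.0) ≈ 3.0103 dB, matching the slide; a signal at exactly the reference intensity gives 0 dB.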

Slide 25: Features of the Speech Signal: (Log) Power Spectrum
What makes one phoneme, /aa/, sound different from another phoneme, /iy/? Different shapes of the vocal tract: /aa/ is produced with the tongue low and in the back of the mouth; /iy/ is produced with the tongue high and toward the front.
The different shapes of the vocal tract produce different "resonant frequencies", or frequencies at which energy in the signal is concentrated. (Simple example of resonant energy: a tuning fork may have a resonant frequency equal to 440 Hz, or "A".)
Resonant frequencies in speech (or other sounds) can be displayed by computing a "power spectrum" or "spectrogram," showing the energy in the signal at different frequencies.

Slide 26: Features of the Speech Signal: (Log) Power Spectrum
A time-domain signal can be expressed in terms of sinusoids at a range of frequencies using the Fourier transform:
X(f) = ∫ x(t) e^(-j2πft) dt
where x(t) is the time-domain signal at time t, f is a (normalized) frequency value from 0 to 1, and X(f) is the spectral-domain representation.

Slide 27: Features of the Speech Signal: (Log) Power Spectrum
Since samples are obtained at discrete time steps, and since only a finite section of the signal is of interest, the discrete Fourier transform is more useful:
X_t(n) = Σ_{k=0}^{N-1} x(t+k) e^(-j2πnk/N)
in which x(t) is the amplitude at time sample t, n is a frequency value from 0 to N-1, N is the number of samples or frequency points of interest, and X_t(n) is the spectral-domain representation of x, beginning at time point t.

Slide 28: Features of the Speech Signal: (Log) Power Spectrum
The magnitude and phase of the spectral representation are the absolute value |X_t(n)| of the complex number and its angle, respectively.
Phase information is generally considered not important in understanding speech, and the energy (or power) of the magnitude of X_t(n) on the decibel scale provides the most relevant information:
power (dB) = 10 · log10(|X_t(n)|²) = 20 · log10(|X_t(n)|)
Note: we usually don't worry about the reference intensity I0 (we assume a value of 1.0); the signal strength (in µPa) is unknown anyway.
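The DFT and log-power computation on slides 27-28 can be sketched as (a direct O(N²) DFT for clarity; in practice an FFT would be used; the function name is mine):

```python
import cmath
import math


def log_power_spectrum(x):
    """Log power spectrum (dB) of one frame x of N samples.

    X(n) = sum over k of x[k] * e^(-j*2*pi*n*k/N);
    power(n) = 20 * log10(|X(n)|), assuming a reference intensity of 1.0.
    """
    N = len(x)
    spectrum = []
    for n in range(N):
        X_n = sum(x[k] * cmath.exp(-2j * math.pi * n * k / N)
                  for k in range(N))
        # floor the magnitude to avoid log of zero in cancelled bins
        spectrum.append(20.0 * math.log10(max(abs(X_n), 1e-12)))
    return spectrum
```

For a pure sinusoid whose frequency falls exactly on DFT bin n, the energy concentrates in bins n and N-n, which is how the resonant frequencies of slide 25 show up in a spectrogram.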

Slide 29: Features of the Speech Signal: (Log) Power Spectrum
In DTW, what is x_t (slide 18)? It's a representation of the speech signal at one point in time, t. For DTW, we'll use the log power spectrum at time t.
The speech signal is divided into T frames (one for each time point 1 … T); typically one frame is 10 msec. At each frame, a feature vector of the speech signal is computed. These features should provide the ability to discriminate between phonemes. For now we'll use spectral features; later, we'll switch to "cepstral" features.
[Figure: a spectrogram with T=80 frames; each vertical line delineates one feature vector at time t, x_t.]

Slide 30: Dynamic Time Warping (DTW) Project
First project: implement the DTW algorithm and perform automatic speech recognition.
"Template" code is available at the class web site to read in features and provide some context and a starting point. The features you will be given are "real," in that they are spectrogram values (energy levels at different frequencies) from utterances of "yes" and "no" sampled every 10 msec.
For the local distance measure at each frame, use the Euclidean distance. Use the following heuristic paths: P1=(1,1)(1,0), P2=(1,1), P3=(1,1)(0,1), with all slope weights equal to 1.
Give thought to the representation of paths in your code; make your code easily changed to specify new paths AND be able to use slope weights. (This will affect your grade.)

Slide 31: Dynamic Time Warping (DTW) Project
Align each pair of files, and print out the normalized distortion score:
yes_template.txt input1.txt
no_template.txt input1.txt
yes_template.txt input2.txt
no_template.txt input2.txt
yes_template.txt input3.txt
no_template.txt input3.txt
Then, use the results to perform rudimentary ASR:
(1) is input1.txt more likely to be "yes" or "no"?
(2) is input2.txt more likely to be "yes" or "no"?
(3) is input3.txt more likely to be "yes" or "no"?
You may have trouble along the way; good code doesn't always produce an answer. Can you add to or modify the paths to produce an answer for all three inputs? If so, show the modifications and the new output.

Slide 32: Dynamic Time Warping (DTW) Project
List 3 reasons why you wouldn't want to rely on DTW for all of your ASR needs.
Due on April 19 (Monday, 2½ weeks from now). Send the following to 'hosom' at cslu.ogi.edu (late responses generally not accepted):
- your source code
- recognition results (minimum normalized distortion scores for each comparison, as well as the best time warping between the two inputs) using the specified paths
- 3 reasons why you wouldn't want to rely on DTW
- results using the specifications given here, and results using any modifications necessary to provide an answer for all three inputs.

Slide 33: Issues in Developing ASR Systems
There are a number of issues that impact the performance of an automatic speech recognition (ASR) system:
Type of Channel
- A microphone signal is different from a telephone signal; a "land-line" telephone signal is different from a cellular signal.
- Channel characteristics: pick-up pattern (omni-directional, unidirectional, etc.), frequency response, sensitivity, noise, etc.
- Typical channels:
  desktop boom mic: unidirectional, 100 to 16000 Hz
  hand-held mic: super-cardioid, 60 to 20000 Hz
  telephone: unidirectional, 300 to 8000 Hz
- Training on data from one type of channel automatically "learns" that channel's characteristics; switching channels degrades performance.

Slide 34: Issues in Developing ASR Systems
Speaker Characteristics
- Because of differences in vocal tract length, male, female, and children's speech are different.
- Regional accents are expressed as differences in resonant frequencies, durations, and pitch.
- Individuals have resonant-frequency patterns and duration patterns that are unique (allowing us to identify the speaker).
- Training on data from one type of speaker automatically "learns" that group's or person's characteristics, making recognition of other speaker types much worse.
- Training on data from all types of speakers results in lower performance than could be obtained with speaker-specific models.

Slide 35: Issues in Developing ASR Systems
Speaking Rate
- Even the same speaker may vary the rate of speech.
- Most ASR systems require a fixed window of input speech.
- Formant dynamics change with different speaking rates.
- ASR performance is best when tested on the same rate of speech as the training data.
- Training on a wide variation in speaking rate results in lower performance than could be obtained with duration-specific models.

Slide 36: Issues in Developing ASR Systems
Noise
- Two types of noise: additive and convolutional.
- Additive: e.g. white noise (random values added to the waveform).
- Convolutional: a filter (additive values in the log spectrum).
- Techniques for removing noise: RASTA, Cepstral Mean Subtraction (CMS).
- It is (nearly) impossible to remove all noise while preserving all speech (it is nearly impossible to separate speech from noise).
- Stochastic training "learns" noise as well as speech; if the noise changes, performance degrades.

Slide 37: Issues in Developing ASR Systems
Vocabulary
- The vocabulary must be specified in advance (the system can't recognize new words).
- The pronunciation of each word must be specified exactly (phonetic substitutions may degrade performance).
- Grammar: either very simple but with likelihoods of word sequences, or highly structured.
- Reasons for the pre-specified vocabulary and grammar constraints:
  phonetic recognition is so poor that confidence in each recognized phoneme is usually very low;
  humans often speak ungrammatically or disfluently.

Slide 38: Issues in Developing ASR Systems
Comparing Human and Computer Performance
Human performance: on a large-vocabulary corpus (1995 CSR Hub-3) consisting of North American business news recorded with 3 microphones, the average word error rate was 2.2%, the best word error rate 0.9%, and the "committee" error rate 0.8%. Typical errors: "emigrate" vs. "immigrate"; most errors were due to inattention.
Computer performance: on a similar large-vocabulary corpus (1998 Broadcast News Hub-4), the best performance was a 13.5% word error rate (for less than 10x real time, the best performance was 16.1%), with a "committee" error rate of 10.6%. More recent focus is on natural speech, with best error rates of ≈ 20%.
This is consistent with results from other tasks: there is a general order-of-magnitude difference between human and computer performance, and the computer doesn't generalize to new conditions.

