So far:
- Historical introduction
- Mathematical background (e.g., pattern classification, acoustics)
- Feature extraction for speech recognition (and some neural processing)
- What sound units are typically defined
- Audio signal processing topics (pitch extraction, perceptual audio coding, source separation, music analysis)
Now: back to pattern recognition, but this time including time.
Deterministic Sequence Recognition
Sequence recognition for ASR
- ASR = static pattern classification + sequence recognition
- Deterministic sequence recognition: template matching
- Templates are typically word-based; no phonetic sound units are needed per se
- Still need to combine local distances into something global (per word or utterance)
Front end analysis
The basic approach is the same for deterministic and statistical systems:
- 25 ms windows (e.g., Hamming), 10 ms steps (one step per frame)
- Some kind of cepstral analysis (e.g., MFCC or PLP)
- The cepstral vector at time n is called x_n
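A minimal numpy sketch of this front end. The function names are mine, and the crude real cepstrum below is only a stand-in for true MFCC or PLP analysis, which would insert a mel or Bark filterbank (and, for MFCC, a DCT):

```python
import numpy as np

def frame_signal(signal, sample_rate, win_ms=25.0, step_ms=10.0):
    """Slice a waveform into overlapping, Hamming-windowed frames."""
    win = int(sample_rate * win_ms / 1000)    # e.g., 400 samples at 16 kHz
    step = int(sample_rate * step_ms / 1000)  # e.g., 160 samples at 16 kHz
    n_frames = 1 + max(0, (len(signal) - win) // step)
    window = np.hamming(win)
    return np.stack([signal[i * step : i * step + win] * window
                     for i in range(n_frames)])

def crude_cepstra(frames, n_coeffs=13):
    """Crude real cepstrum: log magnitude spectrum followed by an
    inverse DFT, keeping only the low-order coefficients. Row n of
    the result is the cepstral vector x_n for frame n."""
    log_spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)
    return np.fft.irfft(log_spec, axis=1)[:, :n_coeffs]
```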
Speech sound categories
- Words and phones are the most common units
- For template-based ASR, mostly words
- For template-based ASR, local distances are based on example frames (reference frames) versus input frames
From Frames to Sequence
- Easy if the local matches are all correct (never happens!)
- Local matches are unreliable
- Need a measure of goodness of fit
- Need to integrate it into a global measure
- Need to consider all possible sequences
Templates: Isolated Word Example
- Matrix for comparison between frames
- Word template = multiple feature vectors
- Reference template R = (r_1, ..., r_M); input template X = (x_1, ..., x_N)
- Need to find the global distortion D(X, R)
Template matching problems
- Time normalization
- Which references to use
- Defining distances/costs
- Endpoints for input templates
Time Normalization
- Linear time normalization (sketch below)
- Nonlinear time normalization: Dynamic Time Warping (DTW)
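Linear normalization just rescales the time axis uniformly. A minimal sketch, with per-dimension linear interpolation as an illustrative choice:

```python
import numpy as np

def linear_time_normalize(template, target_len):
    """Uniformly stretch or compress a template (n_frames x n_dims)
    to target_len frames by linear interpolation along time."""
    n_frames, n_dims = template.shape
    src = np.linspace(0.0, 1.0, n_frames)
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, template[:, d])
                     for d in range(n_dims)], axis=1)
```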
Linear Time Normalization: Limitations
- Different speech sounds stretch and compress differently
- Stop consonants versus vowels, for instance
- Need to normalize different regions differently
Generalized Time Warping
- Permit many more variations
- Ideally, compare all possible time warpings
- Vintsyuk (1968): use dynamic programming
Dynamic programming
- Bellman optimality principle (1962): an optimal policy is composed of optimal policies for its subproblems
- Best path through a grid: if the best path goes through a grid point, it includes the best partial path to that grid point
- Classic example: the knapsack problem
Knapsack problem
- Stuffing a sack with items of different sizes and values
- Goal: maximize the total value in the sack
- Key point 1: if the maximum size is 10 and we know the values of the solutions for a maximum size of 9, we can compute the final answer by considering the value of adding each item
- Key point 2: point 1 sounds recursive, but it can be made efficiently nonrecursive by building a table (see the sketch below)
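A minimal sketch of key point 2 for the 0/1 knapsack, filling in the table bottom-up instead of recursing:

```python
def knapsack(values, sizes, capacity):
    """best[c] = best total value achievable with capacity c."""
    best = [0] * (capacity + 1)
    for value, size in zip(values, sizes):
        # Sweep capacities high-to-low so each item is used at most once.
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

# Items of value 6, 10, 12 and size 3, 4, 6; sack of size 10.
print(knapsack(values=[6, 10, 12], sizes=[3, 4, 6], capacity=10))  # -> 22
```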
Basic DTW step with simple local constraints: each cell (i,j) holds a local distance d and a cumulative distortion D, and the basic computational step is D(i,j) = d(i,j) + min(D(i-1,j), D(i-1,j-1), D(i,j-1)).
Dynamic Time Warp (DTW)
Apply DP to ASR: Vintsyuk, Bridle, Sakoe
- Let D(i,j) = total distortion up to frame i in the input and frame j in the reference
- Let d(i,j) = local distance between frame i in the input and frame j in the reference
- Let p(i,j) = set of possible predecessors of cell (i,j)
- Then D(i,j) = d(i,j) + min over (i',j') in p(i,j) of D(i',j')
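A minimal sketch of this recursion for one input/reference pair. The Euclidean local distance and the simple predecessor set {(i-1,j), (i-1,j-1), (i,j-1)} are illustrative choices, not the only options:

```python
import numpy as np

def dtw(input_frames, ref_frames):
    """Total DTW distortion between two feature-vector sequences,
    each of shape (n_frames, n_dims)."""
    n, m = len(input_frames), len(ref_frames)
    # Local distances d(i,j): Euclidean distance between frame pairs.
    d = np.linalg.norm(input_frames[:, None, :] - ref_frames[None, :, :],
                       axis=2)
    # Cumulative distortions D(i,j), padded with one extra row/column
    # so the recursion needs no boundary special cases.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # D(i,j) = d(i,j) + min over predecessors of D
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j],      # vertical
                                            D[i - 1, j - 1],  # diagonal
                                            D[i, j - 1])      # horizontal
    return D[n, m]
```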
DTW steps
(1) Compute the local distance d in the first column (first frame of the input) for each reference template; let D(0,j) = d(0,j) for each cell in each template.
(2) For i = 1 (the second column) and each j, compute d(i,j) and add the minimum over all possible predecessor values of D to get the local value of D; repeat for each frame in each template.
(3) Repeat (2) for each column to the end of the input.
(4) For each template, find the best D in the last column of the input.
(5) Choose the word for the template with the smallest D (see the sketch below).
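With the dtw() sketch above, step (5) is just an argmin over word templates (names are illustrative):

```python
def recognize_isolated_word(input_frames, templates):
    """templates: dict mapping word -> reference frame sequence.
    Returns the word whose template yields the smallest distortion."""
    return min(templates, key=lambda word: dtw(input_frames, templates[word]))
```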
DTW Complexity
- Time: O(N_ref_frames x N_input_frames x N_templates)
- Storage, though, can be just O(N_ref_frames x N_templates): store only the current and previous columns (sketch below)
- Constant reduction: global constraints
- Constant reduction: local constraints
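A sketch of the two-column storage trick applied to the same recursion; only the previous and current columns of D are kept:

```python
import numpy as np

def dtw_two_col(input_frames, ref_frames):
    """DTW distortion with O(n_ref) storage instead of a full matrix."""
    m = len(ref_frames)
    prev = np.full(m + 1, np.inf)   # padded previous column of D
    prev[0] = 0.0                   # boundary cell D(0,0)
    for x in input_frames:          # one input frame = one column
        cur = np.full(m + 1, np.inf)
        for j in range(1, m + 1):
            local = np.linalg.norm(x - ref_frames[j - 1])
            cur[j] = local + min(prev[j], prev[j - 1], cur[j - 1])
        prev = cur
    return prev[m]
```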
Typical global slope constraints for dynamic programming
Which reference templates?
- All examples? Prototypes?
- DTW-based global distances permit clustering
DTW-based K-means
(1) Initialize (how many centers, and where).
(2) Assign each example to the closest center (by DTW distance).
(3) For each cluster, find the template with the minimum value of the maximum distance to the cluster's members; call it the center.
(4) Repeat (2) and (3) until some stopping criterion is reached.
(5) Use the center templates as references for ASR.
A sketch of this loop follows.
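A sketch of the loop, assuming the dtw() function from earlier. Choosing centers from the examples themselves makes this a medoid-style variant, and a fixed iteration count stands in for a real stopping criterion:

```python
def dtw_kmeans(examples, centers, n_iters=10):
    """Cluster templates under DTW distance; centers is the initial
    list of center templates, updated each iteration."""
    for _ in range(n_iters):
        # (2) Assign each example to its closest center.
        assign = [min(range(len(centers)),
                      key=lambda k: dtw(x, centers[k])) for x in examples]
        # (3) New center: the member whose worst-case (maximum) DTW
        # distance to the other members is smallest.
        for k in range(len(centers)):
            members = [x for x, a in zip(examples, assign) if a == k]
            if members:
                centers[k] = min(members,
                                 key=lambda c: max(dtw(c, x) for x in members))
    return centers
```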
Defining the local distance
- Normalizing for scale
- Cepstral weighting
- Perceptual weighting, e.g., based on just-noticeable differences (JND)
- Learning distances, e.g., with ANNs or statistics
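One simple realization of cepstral weighting is a weighted Euclidean distance; the particular weights below are illustrative placeholders, not a recommendation:

```python
import numpy as np

def weighted_cepstral_distance(x, y, weights):
    """d(x, y) = sum_k w_k * (x_k - y_k)^2 for cepstral vectors x, y."""
    diff = x - y
    return float(np.dot(weights, diff * diff))

# Example: emphasize mid-order cepstral coefficients (illustrative only).
n_coeffs = 13
weights = 1.0 + 0.5 * np.sin(np.pi * np.arange(n_coeffs) / n_coeffs)
```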
Endpoint detection: big problem!
- Sounds easy
- Hard in practice (noise, reverberation, gain issues)
- Simple systems use energy and time thresholds (sketch below)
- More complex ones also use the spectrum
- Can be tuned
- Not robust
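A crude energy/time-threshold endpointer of the "simple system" kind; the thresholds are illustrative and, as the slide says, not robust:

```python
import numpy as np

def energy_endpoints(frames, floor_db=30.0, min_speech_frames=5):
    """Mark frames whose log energy is within floor_db of the peak,
    then return the first active stretch at least min_speech_frames
    long as (first_frame, one_past_last), or None if none is found."""
    log_e = 10.0 * np.log10(np.sum(frames ** 2, axis=1) + 1e-10)
    active = log_e > (log_e.max() - floor_db)
    start = None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_speech_frames:
                return start, i
            start = None
    if start is not None and len(active) - start >= min_speech_frames:
        return start, len(active)
    return None
```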
Connected Word ASR by DTW
- Time normalization
- Recognition
- Segmentation
- Can't have templates for all utterances
- DP to the rescue
DP for Connected Word ASR by DTW
- Vintsyuk, Bridle, Sakoe
- Sakoe: two-level algorithm
- Vintsyuk, Bridle: one stage
- Ney's explanation: Ney, H., "The use of a one-stage dynamic programming algorithm for connected word recognition," IEEE Trans. Acoust. Speech Signal Process. 32: 263-271, 1984.
Connected Algorithm
- In principle: one big distortion matrix (for 20,000 words, 50 frames/word, and a 1000-frame input [10 seconds], that would be 10^9 cells!)
- Also requires a backtracking matrix (since the word segmentation is not known)
- Get the best distortion
- Backtrack to get the words
- Fundamental principle: find the best segmentation and classification as part of the same process, not as sequential steps
DTW path for connected words
DTW for connected words
- In principle, the backtracking matrix points back to the best previous cell
- Mostly we just need to backtrack to the end of the previous word
- Simplifications are possible (one-stage sketch below)
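A compact sketch in the spirit of the one-stage algorithm (Vintsyuk, Bridle; Ney 1984, cited above). Word starts may hook onto the best word end at the previous input frame, and a start-frame pointer is propagated through the grid so backtracking only needs to jump to the end of the previous word. All names and the simple within-word constraints are illustrative:

```python
import numpy as np

def one_stage_dtw(input_frames, templates, words):
    """templates: list of reference frame sequences; words: their labels.
    Returns the recognized word sequence."""
    n = len(input_frames)
    # E[i]: best distortion of any word sequence ending at input frame i;
    # W[i]: (template index, end frame of previous word) for backtracking.
    E = [0.0] + [np.inf] * n
    W = [None] * (n + 1)
    # Per template: previous column of cumulative distortions D, and of
    # start pointers S (the input frame where the current word began).
    prevD = [np.full(len(t) + 1, np.inf) for t in templates]
    prevS = [np.zeros(len(t) + 1, dtype=int) for t in templates]
    for i in range(1, n + 1):
        x = input_frames[i - 1]
        for k, t in enumerate(templates):
            m = len(t)
            curD = np.full(m + 1, np.inf)
            curS = np.zeros(m + 1, dtype=int)
            for j in range(1, m + 1):
                local = np.linalg.norm(x - t[j - 1])
                # Within-word predecessors...
                cands = [(prevD[k][j], prevS[k][j]),
                         (prevD[k][j - 1], prevS[k][j - 1]),
                         (curD[j - 1], curS[j - 1])]
                # ...plus, at the word's first frame, the best word end
                # at the previous input frame (start a new word here).
                if j == 1:
                    cands.append((E[i - 1], i - 1))
                bestD, bestS = min(cands, key=lambda c: c[0])
                curD[j], curS[j] = local + bestD, bestS
            prevD[k], prevS[k] = curD, curS
            if curD[m] < E[i]:          # a word ends at input frame i
                E[i], W[i] = curD[m], (k, curS[m])
    # Backtrack from the last input frame, word end to word end.
    out, i = [], n
    while i > 0 and W[i] is not None:
        k, prev_end = W[i]
        out.append(words[k])
        i = prev_end
    return list(reversed(out))
```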
Storage efficiency
- Distortion matrix -> 2 columns
- Backtracking matrix -> 2 rows
- "From template" points to the template with the lowest cost ending here
- "From frame" points to the end frame of the previous word
More on connected templates
- "Within word" local constraints
- "Between word" local constraints
- Grammars
- Transition costs
Knowledge-based segmentation
- DTW combines segmentation, time normalization, and recognition; all segmentations are considered
- The same feature vectors are used everywhere
- One could instead segment separately, using acoustic-phonetic features cleverly
- Example: FEATURE, Ron Cole (1983)
Limitations of the DTW approach
- No structure from subword units
- Average or exemplar values only
- Cross-word pronunciation effects are not handled
- Limited flexibility for distances/distortions
- Limited mathematical basis -> statistics!
Epilog: "episodic" ASR
- Having examples can get interesting again when there are many of them
- Potentially an augmentation of statistical methods
- Recent experiments show decent results
- Somewhat different properties -> combination
The rest of the course
- Statistical ASR
- Speech synthesis
- Speaker recognition
- Speaker diarization
- Oral presentations on your projects
- Written report on your project
Class project timing
- Week of April 30: no class Monday; double class Wednesday, May 2 (is that what people want?)
- 8 oral presentations by individuals: 12 minutes each + 3 minutes for questions
- 2 oral presentations by pairs: 17 minutes each + 3 minutes for questions
- 3:10 PM to 6 PM with a 10-minute mid-session break
- Written report due Wednesday, May 9; no late submissions (email attachment is fine)