1
ROBUST SIGNAL REPRESENTATIONS FOR AUTOMATIC SPEECH RECOGNITION
Richard Stern
Department of Electrical and Computer Engineering and School of Computer Science
Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
Institute for Mathematics and its Applications, University of Minnesota
September 19, 2000
2
Introduction
As speech recognition is transferred from the laboratory to the marketplace, robust recognition is becoming increasingly important.
"Robustness" in 1985: recognition in a quiet room using desktop microphones.
"Robustness" in 2000: recognition over a cell phone in a car with the windows down and the radio playing at highway speeds.
3
What I’ll talk about today ...
Why we use cepstral-like representations
Some "classical" approaches to robustness
Some "modern" approaches to robustness
Some alternate representations
Some remaining open issues
4
The source-filter model of speech
A useful model for representing the generation of speech sounds: a pulse-train source (controlled by pitch) and a noise source, scaled in amplitude, excite a vocal tract model to produce the speech signal p[n].
5
Implementation of MFCC processing
Compute the magnitude-squared of the Fourier transform
Apply triangular frequency weights that represent the effects of peripheral auditory frequency resolution
Take the log of the filter outputs
Compute cepstra using the discrete cosine transform
Smooth by dropping the higher-order coefficients
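The steps above can be illustrated with a short sketch. This is a minimal illustration rather than the exact CMU front end; the sampling rate, FFT size, number of mel filters, and number of retained cepstra are assumptions chosen for the example.

```python
# Minimal sketch of single-frame MFCC computation (illustrative parameters).
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs=16000, n_fft=512, n_mels=40, n_ceps=13):
    # 1. Magnitude-squared of the Fourier transform of a windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2

    # 2. Triangular mel-spaced weights (peripheral frequency resolution)
    mel_edges = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bin_edges = np.floor((n_fft + 1) * mel_to_hz(mel_edges) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, ctr, hi = bin_edges[i], bin_edges[i + 1], bin_edges[i + 2]
        fbank[i, lo:ctr] = (np.arange(lo, ctr) - lo) / max(ctr - lo, 1)
        fbank[i, ctr:hi] = (hi - np.arange(ctr, hi)) / max(hi - ctr, 1)

    # 3. Log of the filterbank outputs
    log_mel = np.log(fbank @ spectrum + 1e-10)

    # 4./5. DCT, keeping only the low-order (smoothing) coefficients
    return dct(log_mel, type=2, norm='ortho')[:n_ceps]

frame = np.random.randn(400)       # stand-in for a 25 ms frame at 16 kHz
print(mfcc_frame(frame).shape)     # (13,)
```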
6
Implementation of PLP processing
Compute the magnitude-squared of the Fourier transform
Apply triangular frequency weights that represent the effects of peripheral auditory frequency resolution
Apply compressive nonlinearities
Compute the discrete cosine transform
Smooth using autoregressive modeling
Compute cepstra using a linear recursion
7
Rationale for cepstral-like parameters
The cepstrum is the inverse transform of the log of the magnitude of the spectrum.
It is useful for separating convolved signals (like the source and filter in the speech production model), an approach known as "homomorphic filtering."
Alternatively, cepstral processing can be thought of as a Fourier series expansion of the log magnitude of the Fourier transform.
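As a concrete illustration of this definition, the following sketch computes the real cepstrum of a single windowed frame; the frame length and FFT size are arbitrary choices for the example.

```python
import numpy as np

def real_cepstrum(frame, n_fft=512):
    # Log of the magnitude of the spectrum ...
    log_mag = np.log(np.abs(np.fft.rfft(frame, n_fft)) + 1e-10)
    # ... followed by the inverse transform
    return np.fft.irfft(log_mag, n_fft)

# Low-order ("low-quefrency") coefficients describe the slowly varying vocal
# tract (filter); a peak at higher quefrency corresponds to the pitch period
# (source), which is why the two can be separated in this domain.
cep = real_cepstrum(np.hamming(400) * np.random.randn(400))
```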
8
An example
9
The vowel /uh/ in “one” after windowing
10
The raw spectrum
11
Signal representations in MFCC processing
[Figure: signal representations at successive stages of MFCC processing — the original speech waveform, the mel log magnitudes, and the spectrum after cepstral smoothing]
12
Additional parameters typically used
Delta cepstra and delta-delta cepstra
Power and delta power
Comment: These features restore (some) temporal dependencies; more heroic approaches exist as well (e.g., Alwan, Hermansky)
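A minimal sketch of delta computation, assuming the common regression formula over +/- 2 neighboring frames; the window width is an assumption, and delta-delta features are obtained by applying the same operation to the deltas.

```python
import numpy as np

def deltas(feats, N=2):
    # feats: (num_frames, num_coeffs); regression-based slope over +/- N frames
    padded = np.pad(feats, ((N, N), (0, 0)), mode='edge')
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    return sum(n * (padded[N + n:len(feats) + N + n]
                    - padded[N - n:len(feats) + N - n])
               for n in range(1, N + 1)) / denom

# delta-delta ("acceleration") features: deltas(deltas(feats))
```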
13
Challenges in robust recognition
"Classical" problems: additive noise, linear filtering
"Modern" problems: transient degradations, very low SNR
"Difficult" problems: highly spontaneous speech, speech masked by other speech
14
“Classical” robust recognition: A model of the environment
"Clean" speech x[m] passes through a linear filter h[m] and is combined with additive noise n[m] to produce the degraded speech z[m].
15
AVERAGED FREQUENCY RESPONSE FOR SPEECH AND NOISE
[Figure: averaged frequency responses of speech and noise measured with a close-talking microphone and with a desktop microphone]
16
Representation of environmental effects in cepstral domain
Power spectra: $P_z(\omega) = P_x(\omega)\,|H(\omega)|^2 + P_n(\omega)$
Effect of noise and filtering on cepstral or log spectral features:
$z = x + q + \log\!\left(1 + e^{\,n - x - q}\right)$, or $z = x + q + f(x, n, q)$,
where $f(x, n, q)$ is referred to as the "environment function" and q represents the filter h[m] in the feature domain.
17
Another look at environmental distortions: Additive environmental compensation vectors
Environment functions for the PCC-160 cardioid desktop microphone.
Comment: The functions depend on SNR and on phoneme identity.
18
Highpass filtering of cepstral features
Examples: CMN (CMU et al.), RASTA and J-RASTA (OGI/ICSI/IDIAP et al.), multi-level CMN (Microsoft et al.)
The incoming features x are passed through a highpass filter to produce the compensated features ẑ.
Comments: Application to cepstral features compensates for linear filtering; application to spectral features compensates for additive noise. "Great value for the money."
19
Two common cepstral highpass filters
CMN (Cepstral Mean Normalization): $\hat{x}[t] = x[t] - \frac{1}{T}\sum_{\tau=1}^{T} x[\tau]$
RASTA (Relative Spectral Processing, 1994 version): each feature trajectory is passed through the bandpass filter $H(z) = 0.1\,z^{4}\,\dfrac{2 + z^{-1} - z^{-3} - 2z^{-4}}{1 - 0.98\,z^{-1}}$
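A minimal sketch of CMN as defined above, subtracting the per-utterance mean from each cepstral coefficient; per-utterance processing is an assumption here, and running or multi-level variants differ.

```python
import numpy as np

def cmn(cepstra):
    # cepstra: (num_frames, num_coeffs). Removing the utterance-level mean
    # cancels any fixed linear channel, which appears as a constant additive
    # offset in the cepstral domain.
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```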
20
“Frequency response” of CMN and RASTA filters
Comment: Both RASTA and CMN have zero DC response
21
Principles of model-based environmental compensation
Attempt to estimate parameters characterizing the unknown filter and noise that, when applied in inverse fashion, maximize the likelihood of the observations (using the model z[m] = x[m] * h[m] + n[m]).
22
Model-based compensation for noise and filtering: The VTS algorithm
The VTS algorithm (Moreno, Raj, Stern, 1996):
Approximate f(x, n, q) by the first several terms of its Taylor series expansion, assuming that n and q are known
The effects of f(x, n, q) on the statistics of the speech features can then be obtained analytically
The EM algorithm is used to find the values of n and q that maximize the likelihood of the observations
The statistics of the incoming cepstral vectors are re-estimated using MMSE techniques
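A sketch of the expansion step, using the environment function from the earlier slide; the expansion point (x_0, n_0, q_0) and the first-order truncation shown here are the usual textbook presentation, not necessarily the exact form used in the 1996 implementation.

```latex
% Environment function (log-spectral domain) and its first-order expansion
f(x, n, q) = \log\!\left(1 + e^{\,n - x - q}\right)

z \;\approx\; x + q + f(x_0, n_0, q_0)
   + \left.\frac{\partial f}{\partial x}\right|_{0}\!(x - x_0)
   + \left.\frac{\partial f}{\partial n}\right|_{0}\!(n - n_0)
   + \left.\frac{\partial f}{\partial q}\right|_{0}\!(q - q_0)
```

Because z is then approximately linear in x, n, and q, the means and variances of the degraded-speech features can be written in closed form, which is what makes the EM re-estimation tractable.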
23
The good news: VTS improves recognition accuracy in “stationary” noise
[Figure: recognition accuracy vs. SNR for VTS, CDCN, and CMN]
Comment: The more accurate modeling of VTS improves recognition accuracy at all SNRs compared to CDCN and CMN.
24
But the bad news: Model-based compensation doesn’t work very well in transient noise
CDCN does not do much to reduce recognition errors when speech is corrupted by music.
25
So what can we do about transient noises?
Two major approaches:
Sub-band recognition (e.g., Bourlard, Morgan, Hermansky, et al.)
Missing-feature recognition (e.g., Cooke, Green, Lippmann, et al.)
At CMU we've been working on a variant of the missing-feature approach
26
MULTI-BAND RECOGNITION
Basic approach:
Decompose speech into several adjacent frequency bands
Train separate recognizers to process each band
Recombine information (somehow)
Comment: Motivated by the observation of Fletcher (and Allen) that the auditory system processes speech in separate frequency bands
Some implementation decisions:
How many bands?
At what level to do the splits and merges?
How to recombine and weight the separate contributions?
27
MISSING-FEATURE RECOGNITION
General approach:
Determine which cells of a spectrogram-like display are unreliable (or "missing")
Ignore the missing features, or make a best guess about their values based on the data that are present
28
ORIGINAL SPEECH SPECTROGRAM
29
SPECTROGRAM CORRUPTED BY WHITE NOISE AT SNR 15 dB
Some regions are affected far more than others
30
IGNORING REGIONS IN THE SPECTROGRAM THAT ARE CORRUPTED BY NOISE
All regions with SNR less than 0 dB are deemed missing (dark blue)
Recognition is performed based on the colored regions alone
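A minimal sketch of the masking rule described above, assuming oracle access to the clean-speech and noise power in each time-frequency cell; in practice the locations must be estimated blindly, as discussed later.

```python
import numpy as np

def missing_mask(clean_power, noise_power, snr_threshold_db=0.0):
    # clean_power, noise_power: (num_frames, num_channels) spectrogram-like arrays.
    # Mark each time-frequency cell whose local SNR falls below the threshold.
    local_snr_db = 10.0 * np.log10(clean_power / (noise_power + 1e-10) + 1e-10)
    return local_snr_db >= snr_threshold_db   # True = reliable, False = missing
```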
31
Filling in missing features at CMU (Raj)
We modify the incoming features rather than the internal models (which is what has been done at Sheffield)
Why modify the incoming features?
More flexible feature set (can use cepstral rather than log spectral features)
Simpler processing
No need to modify the recognizer
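A minimal sketch of filling in missing spectrogram cells from the observed ones, assuming a single Gaussian model of log-spectral vectors. The CMU system uses richer cluster-based and temporal-correlation estimators; this only illustrates the idea of modifying the incoming features rather than the recognizer.

```python
import numpy as np

def impute(x, missing, mean, cov):
    # x: (dim,) observed log-spectral vector with some unreliable entries
    # missing: boolean mask, True where the cell is "missing"
    # mean, cov: parameters of a Gaussian model of clean log-spectral vectors
    o, m = ~missing, missing
    # Conditional mean of the missing dimensions given the observed ones
    cond_mean = mean[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
        cov[np.ix_(o, o)], x[o] - mean[o])
    filled = x.copy()
    filled[m] = cond_mean
    return filled
```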
32
Recognition accuracy using compensated cepstra, speech corrupted by white noise
[Plot: recognition accuracy (%) vs. SNR (dB) for cluster-based reconstruction, temporal-correlation reconstruction, spectral subtraction, and the baseline]
Large improvements in recognition accuracy can be obtained by reconstructing the corrupted regions of noisy speech spectrograms
Knowledge of the locations of the "missing" features is needed
33
Recognition accuracy using compensated cepstra, speech corrupted by music
[Plot: recognition accuracy (%) vs. SNR (dB) for cluster-based reconstruction, temporal-correlation reconstruction, spectral subtraction, and the baseline]
Recognition accuracy goes up from 7% to 69% at 0 dB with cluster-based reconstruction
34
So how can we detect “missing” regions?
Current approach:
Pitch detection to comb out harmonics in voiced segments
Multivariate Bayesian classifiers using several features, such as:
  the ratio of power at the harmonics relative to neighboring frequencies
  the extent of temporal synchrony with the fundamental frequency
How well we're doing now with blind identification:
About halfway between the baseline results and the results obtained with perfect knowledge of which data are missing
About 25% of the possible improvement for background music
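A minimal, hypothetical sketch of one such cue: the ratio of power at the pitch harmonics to the power between harmonics for a voiced frame. The function name, the assumption that the pitch f0 is already known, and the sampling rate are all illustrative; this is not the CMU classifier.

```python
import numpy as np

def harmonic_ratio(power_spectrum, f0, fs=16000):
    # power_spectrum: one-sided power spectrum of a voiced frame (length n_fft//2 + 1)
    n_fft = 2 * (len(power_spectrum) - 1)
    bin_hz = fs / n_fft
    harmonic_bins = np.arange(f0, fs / 2.0, f0) / bin_hz        # bins at k * f0
    harm = np.round(harmonic_bins).astype(int)
    mid = np.round(harmonic_bins + f0 / (2.0 * bin_hz)).astype(int)  # between harmonics
    harm = harm[harm < len(power_spectrum)]
    mid = mid[mid < len(power_spectrum)]
    return power_spectrum[harm].sum() / (power_spectrum[mid].sum() + 1e-10)
```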
35
Missing features versus multi-band recognition
Multi-band approaches are typically implemented with a relatively small number of channels, while with missing-feature approaches every time-frequency point can be considered or ignored
The full-combination method for multi-band recognition considers every possible combination of present or missing bands, eliminating the need for blind identification of the optimal combination of inputs
Nevertheless, missing-feature approaches may provide superior recognition accuracy, because they enable a finer partitioning of the observation space, if we could solve the identification problem
36
Some other types of representations
Physiologically-motivated representations ("ear models"): Seneff, Ghitza, Lyon/Slaney, Patterson, etc.
Feature extraction using "smart" nonlinear transformations: Hermansky et al.
37
Physiologically-motivated speech processing
In recent years, signal processing motivated by knowledge of human auditory perception has become more popular
The abilities of human audition form a powerful existence proof
38
Some auditory principles that system developers consider
Structure of the auditory periphery:
Linear bandpass filtering
Nonlinear rectification with saturation/gain control
Further analysis
Dependence of the bandwidth of the peripheral filters on center frequency
Nonlinear phenomena: saturation, lateral suppression
Temporal response: synchrony and phase locking at low frequencies
39
An example: The Seneff model
40
Timing information in the Seneff model
The Seneff model includes the effects of synchrony at low frequencies
The synchrony detector in the Seneff model records the extent to which the response in a frequency band is phase-locked to the channel's center frequency
Local synchrony has been shown to represent vowels more robustly in the peripheral auditory system in the presence of additive noise (e.g., Young and Sachs)
Related work by Ghitza, De Mori, and others shows improvements in recognition accuracy relative to features based on mean rate, but at the expense of much more computation
41
COMPUTATIONAL COMPLEXITY OF AUDITORY MODELS
[Table: number of multiplications per ms of speech for several auditory models compared with conventional feature extraction]
Comment: Auditory computation is extremely expensive
42
Some other comments on auditory models
"Correlogram"-type representations (channel-by-channel running autocorrelation functions) are being explored by some researchers (Slaney, Patterson, et al.); there is much more information in the display
Auditory models have not yet realized their full potential because ...
The feature set must be matched to the classification system (the features are generally not Gaussian)
All aspects of the available features must be used
Research groups need both auditory and ASR experts
43
“Smart” feature extraction using non-linear transformations (Hermansky group)
Complementary approaches using temporal slices (mostly):
Temporal linear discriminant analysis (LDA) to obtain maximally discriminable basis functions over a ~1-sec interval in each critical band
The three vectors with the greatest eigenvalues are used as RASTA-like filters in each of 15 critical bands
A Karhunen-Loeve transform is used to reduce the dimensionality down to 39, based on training data
TRAP features: use an MLP to provide a nonlinear mapping from temporal trajectories to phoneme likelihoods
Modulation-filtered spectrogram (MSG): pass spectrogram features through two temporal modulation filters (0-8 Hz and 8-16 Hz); a sketch appears below
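A minimal sketch of the modulation-filtering idea behind MSG: band-limit the temporal trajectory of each spectral channel. The filter order, the 100-Hz frame rate, and the use of a Butterworth design are assumptions for illustration, not the published MSG filters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def modulation_filter(spectrogram, frame_rate=100.0, band=(0.0, 8.0)):
    # spectrogram: (num_frames, num_channels); filter each channel along time
    nyq = frame_rate / 2.0
    if band[0] <= 0.0:
        b, a = butter(2, band[1] / nyq, btype='low')
    else:
        b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype='band')
    # Zero-phase filtering of the temporal trajectory in every channel
    return filtfilt(b, a, spectrogram, axis=0)

# e.g. slow_band = modulation_filter(S, band=(0.0, 8.0))
#      fast_band = modulation_filter(S, band=(8.0, 16.0))
```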
44
Use of nonlinear feature transformations in Aurora evaluation
Multiple feature sets are combined by averaging feature values after the nonlinear mapping
The best system combines transformed PLP features, transformed MSG features, plus TRAP features (63% improvement over the baseline!)
The Aurora evaluation system used a reduced temporal span and other shortcuts to meet the delay, processing-time, and memory specs of the evaluation (40% net improvement over the baseline)
Comment: The procedure effectively moves some of the "training" to the level of the features; generalization to larger tasks remains to be verified
45
Feature combination versus compensation combination: The CMU SPINE System
46
SPINE evaluation conditions
47
The CMU SPINE system (Singh)
Three feature sets considered:
Mel cepstra
PLP cepstra
Mel cepstra of lowpass-filtered speech
Four compensation schemes:
Codeword-Dependent Cepstral Normalization (CDCN)
Vector Taylor Series (VTS)
Singular Value Decomposition (SVD)
Karhunen-Loeve Transform-based noise cancellation (KLT)
Additional features from ICSI/OGI: PLP cepstra subjected to an MLP and a KL transform for orthogonalization
48
Summary of CMU and CMU-ICSI-OGI SPINE results
[Chart: results for the MFCC baseline, the combination of the 4 compensation schemes, the combination of the 4 feature sets, and the combination including the 3 ICSI/OGI features]
49
Comments
Some techniques we haven't discussed:
VTLN
Microphone arrays
Time-frequency representations (e.g., wavelets)
Robustness to Lombard speech, speaking style, etc.
Many others
Some hard problems not addressed:
Very low SNR ASR
Highly spontaneous speech (!): a representation or pronunciation-modeling issue?
50
Summary
Despite many shortcomings, cepstral-based features are well motivated; they are typically augmented by cepstral highpass filtering
"Classical" model-based robustness techniques work reasonably well in combating quasi-stationary degradations
"Modern" multiband and missing-feature techniques show great promise in coping with transient interference, etc.
Auditory models remain appealing, although their potential has not yet been realized
"Smart" features can provide dramatic improvements, at least in small tasks
Feature combination will be a key component of future systems