Survey of Robust Techniques


1 Survey of Robust Techniques
2005/5/26 Presented by Chen-Wei Liu

2 Conferences
"Log-Energy Dynamic Range Normalization for Robust Speech Recognition," Weizhong Zhu and Douglas O'Shaughnessy, INRS-EMT, University of Quebec, Canada, ICASSP 2005
"Static and Dynamic Spectral Features: Their Noise Robustness and Optimal Weights for ASR," Chen Yang and Tan Lee, The Chinese University of Hong Kong; Frank K. Soong, ATR, Kyoto, Japan, ICASSP 2005

3 Introduction
Methods of robust speech recognition can be classified into two approaches: front-end processing for speech feature extraction, and back-end processing for HMM decoding. The front-end approach suppresses the noise to extract more robust parameters, while the back-end approach compensates for the noise by adapting the parameters inside the HMM system. This paper focuses on the first approach.

4 Introduction
Compared with the cepstral coefficients, the log-energy feature has quite different characteristics. Two energy features are in common use: logE, the log of the sum of the energy of all samples in one frame, and c0, the sum of the log filter-bank outputs. This paper proposes a more effective way to remove the effects of additive noise, named log-energy dynamic range normalization (ERN), which works by minimizing the mismatch between training and testing data.
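To make the contrast concrete, here is a minimal sketch of the two energy features (illustrative only, not the papers' code; the small floor constant is an assumption to avoid log(0)):

```python
import numpy as np

def log_energy(frame):
    frame = np.asarray(frame, dtype=float)
    # logE: log of the sum of the energy of all samples in one frame
    return np.log(np.sum(frame ** 2) + 1e-10)  # small floor avoids log(0)

def c0(log_fbank):
    # c0: sum of the log filter-bank outputs for the frame
    # (the 0th DCT cepstral coefficient, up to a constant scale)
    return np.sum(log_fbank)
```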

5 Energy Dynamic Range Normalization
Observations: the minimum value of the log-energy sequence is elevated under noise; valleys are buried by the additive noise energy, while peaks are not affected as much.

6 Energy Dynamic Range Normalization
The larger difference at the valleys leads to a mismatch between the clean and noisy speech. To minimize the mismatch, this paper proposes an algorithm that scales the log-energy feature sequence of clean speech, lifting the valleys while keeping the peaks unchanged. The log-energy dynamic range is defined in decibels as

DR(dB) = 10 · log10(E_max / E_min) = (10 / ln 10) · (Max − Min)

where E_max and E_min are the largest and smallest frame energies and Max and Min are the corresponding natural-log-energy values. (Note: dB is short for decibel, one tenth of a bel. The bel expresses the gain or attenuation of a power signal as the base-10 logarithm of the ratio of the output power to the input power; for convenience the decibel is used in practice, so the logarithm of the power ratio is multiplied by 10. For example, a power ratio of 100 corresponds to 10 · log10(100) = 20 dB.)

7 Energy Dynamic Range Normalization
In the presence of noise, the sequence minimum Min is affected by the additive noise, while the maximum Max is not affected as much. Let Min_T denote the target minimum and X the target energy dynamic range in dB; then the definition above gives (for natural-log energies)

Min_T = Max − X · ln(10) / 10

In this way, the algorithm uses Min_T to set the target minimum value from a given target dynamic range X.

8 Energy Dynamic Range Normalization
The following are the steps of the proposed log-energy feature dynamic range normalization algorithm, applied to the log-energy sequence e_1 … e_n:
1st: find Max = max_i(e_i) and Min = min_i(e_i)
2nd: calculate the target minimum Min_T from Max and the target dynamic range X
3rd: if Min < Min_T, go to the 4th step; otherwise leave the sequence unchanged
4th: for i = 1…n, e'_i = e_i + (Min_T − Min) · (Max − e_i) / (Max − Min)
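A minimal sketch of these steps in Python (the function name and the natural-log assumption in the dB conversion are illustrative assumptions, not the paper's code):

```python
import numpy as np

def ern_linear(log_e, target_range_db):
    """Lift the valleys of a log-energy sequence so its dynamic range
    does not exceed target_range_db, keeping the peaks unchanged."""
    e_max, e_min = np.max(log_e), np.min(log_e)          # 1st step
    min_t = e_max - target_range_db * np.log(10) / 10.0  # 2nd step
    if e_min >= min_t:                                   # 3rd step
        return log_e  # dynamic range already within the target
    # 4th step: the shift is largest at the valleys and zero at the peak
    return log_e + (min_t - e_min) * (e_max - log_e) / (e_max - e_min)
```

At e_i = Min the frame is lifted exactly to Min_T, while at e_i = Max the shift is zero, which matches the observation on the next slide.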

9 Energy Dynamic Range Normalization
The scaling effect decreases as the frame's own log-energy value goes up, and the maximum of the sequence is left unchanged

10 Experimental Results Linear Scaling
The proposed method was evaluated on the Aurora 2.0 database

11 Experimental Results Non Linear Scaling
The linear interpolation of the 4th step can also be replaced with a non-linear scaling function of the frame's log-energy
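The exact non-linear function used in the paper is not shown here, so the sketch below assumes one plausible shape purely for illustration: squaring the interpolation weight, so that the valley floor is still lifted to Min_T while mid-range frames move less than under linear scaling:

```python
import numpy as np

def ern_nonlinear(log_e, target_range_db):
    # Hypothetical non-linear ERN variant: the quadratic weight below
    # is an assumption for illustration, not the paper's equation.
    e_max, e_min = np.max(log_e), np.min(log_e)
    min_t = e_max - target_range_db * np.log(10) / 10.0
    if e_min >= min_t:
        return log_e
    w = (e_max - log_e) / (e_max - e_min)    # 1 at the valley floor, 0 at the peak
    return log_e + (min_t - e_min) * w ** 2  # assumed quadratic weighting
```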

12 Experimental Results Comparison of Linear Scaling and Non-linear Scaling
Performance comparisons at different SNR levels are shown as follows

13 Experimental Results Combination with other techniques

14 Conclusions
When systems are trained on a clean speech training set, the proposed technique yields an overall relative performance improvement of about 30.83%. Like CMS, the proposed method does not require any prior knowledge of the noise or its level. Reducing the mismatch in the log-energy feature leads to a large recognition improvement.

15 Conferences
"Log-Energy Dynamic Range Normalization for Robust Speech Recognition," Weizhong Zhu and Douglas O'Shaughnessy, INRS-EMT, University of Quebec, Canada, ICASSP 2005
"Static and Dynamic Spectral Features: Their Noise Robustness and Optimal Weights for ASR," Chen Yang and Tan Lee, The Chinese University of Hong Kong; Frank K. Soong, ATR, Kyoto, Japan, ICASSP 2005

16 Introduction
Dynamic cepstral features complement static features by characterizing the time-varying rate of the speech trajectory. It has been shown that such a representation (static + dynamic) yields higher speech and speaker recognition performance than static cepstra alone. This paper tries to quantify the robustness of static and dynamic features under different types of noise and variable SNRs.

17 Noise Robustness Analysis Recognition with only Static or Dynamic Features

18 Noise Robustness Analysis Static and Dynamic Cepstral Distances between Clean and Noisy Speech
For a given sequence of noisy speech observations {ô_t}, the output log-likelihood is written, using a single Gaussian with diagonal covariance for simplicity, as

log p(ô_t) = −(1/2) · Σ_d (ô_{t,d} − μ_d)² / σ_d² + const

The mismatch between clean and noisy conditions lies mainly in the exponent term, which can be re-written by splitting the noisy observation into the clean observation o_t plus the noise-induced deviation (ô_t − o_t):

(ô_{t,d} − μ_d)² = (ô_{t,d} − o_{t,d})² + 2 · (ô_{t,d} − o_{t,d}) · (o_{t,d} − μ_d) + (o_{t,d} − μ_d)²

The expected value of the cross term (the second term) is zero when the deviation is zero-mean and independent of the clean-speech term.
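A quick numerical check of the zero-expectation claim, as a sketch under the stated assumption that the deviation is zero-mean and independent of the clean term:

```python
import numpy as np

rng = np.random.default_rng(0)
clean_term = rng.normal(1.0, 2.0, 1_000_000)  # (o - mu), arbitrary statistics
deviation = rng.normal(0.0, 1.0, 1_000_000)   # (o_hat - o), assumed zero-mean
cross_term = 2 * deviation * clean_term
print(cross_term.mean())  # ~0: the cross term vanishes in expectation
```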

19 Noise Robustness Analysis Static and Dynamic Cepstral Distances between Clean and Noisy Speech
Since the expected value of the cross term is zero, the difference in log-likelihood between noisy and clean speech is governed by the first term, measured by defining a cepstral distance:

D = ⟨ Σ_d (ô_{t,d} − o_{t,d})² / σ̄_d² ⟩

where σ̄_d² is used to approximate the d-th diagonal covariance in the clean speech model, and ⟨·⟩ denotes the time average over the whole utterance.

20 Noise Robustness Analysis Static and Dynamic Cepstral Distances between Clean and Noisy Speech
The weighted distances between clean and noisy speech for the static and the dynamic features, respectively:

D^s = ⟨ Σ_d (ô^s_{t,d} − o^s_{t,d})² / (σ̄^s_d)² ⟩   D^d = ⟨ Σ_d (ô^d_{t,d} − o^d_{t,d})² / (σ̄^d_d)² ⟩

where the superscripts d and s denote the dynamic and the static features.
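Both distances reduce to the same computation on different feature streams; a minimal sketch (function and argument names are illustrative, not the paper's):

```python
import numpy as np

def weighted_cepstral_distance(noisy, clean, var):
    """Time-averaged, variance-weighted distance between noisy and clean
    cepstral sequences of shape (frames, dims); `var` approximates the
    diagonal covariance of the clean-speech model."""
    return np.mean(np.sum((noisy - clean) ** 2 / var, axis=1))

# Applied separately to the static cepstra and their delta features:
# d_s = weighted_cepstral_distance(noisy_static, clean_static, var_s)
# d_d = weighted_cepstral_distance(noisy_delta, clean_delta, var_d)
```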

21 Noise Robustness Analysis Static and Dynamic Cepstral Distances between Clean and Noisy Speech
The following depicts the scatter diagrams of dynamic distance (between clean and noisy dynamic cepstra) vs. its static counterpart :

22 Noise Robustness Analysis Static and Dynamic Cepstral Distances between Clean and Noisy Speech
Two observations can be made from the figure: both distances grow as the mismatch increases at lower SNRs, and the majority of the points fall below the diagonal line. In other words, the dynamic cepstral distance between noisy and clean features is smaller than its static counterpart.

23 Exponential Weighting in Decoding Exponential Weightings
Based on the findings in the previous figure, it makes sense to weight the log-likelihoods of the static and dynamic features differently in decoding, to exploit their uneven noise robustness. The output log-likelihood of an observation can be split into two separate terms, s and d:

log p(o_t | λ) = log p(o^s_t | λ) + log p(o^d_t | λ)

The two acoustic likelihood components can then be computed with different exponential weightings:

log p̃(o_t | λ) = γ_s · log p(o^s_t | λ) + γ_d · log p(o^d_t | λ)
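In the log domain an exponential weight becomes a multiplicative factor, so the weighted score is just a weighted sum (a one-line sketch; the names are illustrative):

```python
def weighted_log_likelihood(logp_s, logp_d, gamma_s, gamma_d):
    # Raising a likelihood to the power gamma multiplies its log by gamma,
    # so the combined acoustic score is a weighted sum of the two streams.
    return gamma_s * logp_s + gamma_d * logp_d
```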

24 Exponential Weighting in Decoding Recognition with Bracketed Weightings
The two weights are bracketed at a step of 0.1 under the unity-sum constraint γ_s + γ_d = 1, running recognition once for each weight pair; a sketch of this search follows.
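A sketch of the bracketed search (`run_recognition` is a hypothetical stand-in for an evaluation run that returns accuracy for a given weight pair):

```python
import numpy as np

def bracket_weights(run_recognition, step=0.1):
    """Grid-search the static/dynamic exponential weights under the
    constraint gamma_s + gamma_d = 1, keeping the best accuracy."""
    best = (-np.inf, None, None)
    for gamma_s in np.arange(0.0, 1.0 + 1e-9, step):
        gamma_d = 1.0 - gamma_s
        acc = run_recognition(gamma_s, gamma_d)
        if acc > best[0]:
            best = (acc, gamma_s, gamma_d)
    return best  # (accuracy, gamma_s, gamma_d)
```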

25 Exponential Weighting in Decoding Discriminative Weight Training (Weight Optimization)
The log-likelihood difference (lld) between the recognized and the correct states is chosen as the objective function for optimization. For the u-th speech utterance of T_u observations, the lld is:

lld_u = (1/T_u) · Σ_{t=1}^{T_u} [ log p(o_t | ŝ_t) − log p(o_t | s_t) ]

where ŝ_t is the recognized state and s_t the correct state at frame t. The cost averaged over the whole training set of U utterances is:

F = (1/U) · Σ_{u=1}^{U} lld_u

26 Exponential Weighting in Decoding Discriminative Weight Training (Weight Optimization)
This cost is minimized by iteratively adjusting the dynamic weight γ_d and the static weight γ_s via steepest descent:

γ^(k+1) = γ^(k) − ε · (∂F/∂γ) |_{γ = γ^(k)}

where ε is the step size and γ stands for either weight.
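A minimal steepest-descent sketch (`lld_gradient` is a hypothetical helper returning the partial derivatives of the averaged cost F with respect to the two weights):

```python
def optimize_weights(lld_gradient, gamma_s=0.5, gamma_d=0.5,
                     step_size=0.01, iterations=100):
    # Iteratively move both exponential weights against the gradient
    # of the averaged log-likelihood-difference cost F.
    for _ in range(iterations):
        grad_s, grad_d = lld_gradient(gamma_s, gamma_d)
        gamma_s -= step_size * grad_s
        gamma_d -= step_size * grad_d
    return gamma_s, gamma_d
```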

27 Experimental Results Evaluation on Aurora 2.0 Database
Overall, a 36.6% relative WER reduction is obtained :

28 Experimental Results Evaluation on CUDIGIT Database
The relative WER improvement is 41.9%, averaged over all noise conditions

29 Conclusions
The dynamic features were found to be more resilient to additive noise interference than their static counterparts. Optimal exponential weights exploiting the unequal robustness of the two feature streams were used, and better performance was obtained.

