
1 Itay Ben-Lulu & Uri Goldfeld. Instructor: Dr. Yizhar Lavner. Spring 2004, 23/9/2004

2 Abstract Goal: Estimation of the glottal volume velocity (also called the glottal pulse) from samples of the acoustic speech signal. Three estimation methods are examined: 1. Least Squares Glottal Inverse Filtering from the Acoustic Speech Waveform – Wong, Markel & Gray, 1979. 2. Pitch Synchronous Iterative Adaptive Inverse Filtering (PSIAIF) – Alku, 1992. 3. Estimation of the Glottal Flow Derivative Waveform Through Formant Modulation (from: Modeling of the Glottal Flow Derivative Waveform with Application to Speaker Identification) – Plumpe, Quatieri & Reynolds, 1997.

3 Applications Speech synthesis – knowledge of the glottal frequency is important for producing synthetic speech that sounds natural. There are explicit differences between male and female glottal pulses. Different glottal excitations produce different phonation types: normal, pressed, breathy. The glottal pulse is also of great importance in characterizing speaking styles: angry voice, soft voice, happy voice, etc.

4 Discrete-Time System Model for Speech Production A voiced/unvoiced switch selects the excitation: for voiced speech the input to the vocal-tract filter is the glottal pulse, and for unvoiced speech the input is random noise. A sketch of the standard model is given below.
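
A minimal LaTeX sketch of the standard discrete-time source-filter model assumed here; the symbol names are illustrative (they are not taken from the slide):

```latex
% Standard discrete-time speech production model (voiced case), illustrative notation:
% S(z): speech, G(z): glottal pulse, V(z): vocal tract, R(z): lip radiation, A_v: gain
\[
  S(z) \;=\; A_v \, G(z) \, V(z) \, R(z)
\]
% For unvoiced speech the glottal source G(z) is replaced by a random-noise excitation.
```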

5 Least Squares Glottal Inverse Filtering from the Acoustic Speech Waveform (Wong, Markel and Gray) The vocal-tract model is assumed to be an all-pole model : where K is an even integer. The lip radiation model is given by a differencing filter :
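
A hedged LaTeX sketch of the two filters named above, in the forms commonly used with this model; the coefficient names are illustrative, and the lip-radiation filter is sometimes written with a coefficient slightly below one:

```latex
% All-pole vocal-tract model of even order K (coefficients a_k are illustrative):
\[
  V(z) \;=\; \frac{1}{1 - \sum_{k=1}^{K} a_k z^{-k}}
\]
% Lip radiation modeled as a first-order differencing filter:
\[
  R(z) \;=\; 1 - z^{-1}
\]
```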

6 The problem is estimating the vocal-tract transfer function. Assume that an M-th order analysis filter, an all-zero filter, is to be obtained using the covariance method of linear prediction of the speech signal. Taking the Z-transform, we can then estimate the glottal volume velocity transfer function by inverse filtering the speech with the analysis filter. A sketch of the covariance method is given below.
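
The covariance method of linear prediction mentioned above can be sketched as follows. This is a generic implementation, not the authors' code, and the variable names are illustrative:

```python
import numpy as np

def lpc_covariance(x, order):
    """Covariance-method linear prediction.

    Solves the normal equations built from the frame x (no windowing),
    returning coefficients a such that x[n] ~ sum_k a[k] * x[n - k - 1],
    together with the total squared prediction error over the frame.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Covariance matrix phi[i, k] = sum_n x[n - i] * x[n - k], n = order..N-1
    phi = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for k in range(order + 1):
            phi[i, k] = np.dot(x[order - i:N - i], x[order - k:N - k])
    # Normal equations: phi[1:, 1:] a = phi[1:, 0]
    a = np.linalg.solve(phi[1:, 1:], phi[1:, 0])
    # Total squared prediction error over the frame
    err = phi[0, 0] - np.dot(a, phi[1:, 0])
    return a, err
```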

7 Analysis Procedure – Block Diagram: Linear-Phase High-Pass Filter → Sequential Covariance Analysis → Normalized Error Criterion → Pitch Detection → Searching for Minimal Periods → Vocal Tract Model Estimation (LPC) → Polynomial Root Solving

8 Algorithm Stages 1. Linear-Phase High-Pass Filter – The speech signal is passed through a high-pass filter. 2. Sequential Covariance Analysis – An N-length analysis window is moved sequentially, one sample at a time, through the signal, and the total squared prediction error is obtained for each window position. 3. Normalized Error Criterion – The total squared error is normalized to form the error criterion used in the following stages. A sketch of stages 2–3 is given below.
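
A sketch of stages 2–3 under one reading of the slide: the window is slid one sample at a time and, for each position, the covariance-method error is normalized by the windowed signal energy. The normalization by energy is an assumption, since the slide's exact definition is not shown:

```python
import numpy as np

def normalized_error_track(x, order, win_len):
    """Slide an N-length analysis window one sample at a time and return, for
    each position, the covariance-method prediction error normalized by the
    window energy (small values suggest a closed glottis)."""
    x = np.asarray(x, dtype=float)
    track = np.full(len(x), np.nan)
    for start in range(0, len(x) - win_len):
        frame = x[start:start + win_len]
        # Covariance-method normal equations on this frame (win_len > order)
        phi = np.array([[np.dot(frame[order - i:win_len - i],
                                frame[order - k:win_len - k])
                         for k in range(order + 1)]
                        for i in range(order + 1)])
        a = np.linalg.solve(phi[1:, 1:] + 1e-9 * np.eye(order), phi[1:, 0])
        err = phi[0, 0] - np.dot(a, phi[1:, 0])
        energy = phi[0, 0] if phi[0, 0] > 0 else 1.0
        track[start] = err / energy            # normalized error criterion
    return track
```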

9 4. Searching for Minimal-Value Periods – The normalized error criterion is scanned to find the intervals where it attains minimal values, and the first and last samples of each such interval are recorded. These intervals are needed for determining the points of glottal closure and opening. A sketch follows.
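
One way to realize stage 4, as a sketch: mark the samples where the normalized error stays below a small fraction of its range, and report the first/last sample of each resulting interval. The slide's numerical criterion is not given, so the relative threshold here is an assumption:

```python
import numpy as np

def minimal_error_intervals(err_track, rel_threshold=0.1):
    """Return (first_sample, last_sample) pairs of the intervals where the
    normalized error criterion stays below a relative threshold."""
    e = np.asarray(err_track, dtype=float)
    valid = np.isfinite(e)
    lo, hi = e[valid].min(), e[valid].max()
    below = valid & (e <= lo + rel_threshold * (hi - lo))
    intervals, start = [], None
    for n, flag in enumerate(below):
        if flag and start is None:
            start = n                       # interval begins
        elif not flag and start is not None:
            intervals.append((start, n - 1))  # interval ends
            start = None
    if start is not None:
        intervals.append((start, len(below) - 1))
    return intervals
```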

10 5. Vocal Tract Model Estimation – The prediction error filter is estimated using LPC over each closed-phase interval determined in stage 4. 6. Polynomial Root Solving – Real poles (close to zero frequency) and high-bandwidth poles are removed from the filter. 7. Inverse Filtering + Integration – The original speech signal is passed through the inverse of the vocal-tract filter and then through an integrator, yielding the estimate of the glottal pulse. A sketch of stages 5–7 follows.
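
A sketch of stages 5–7 with illustrative thresholds (the slide gives none): prune real and wide-bandwidth poles from the closed-phase LPC model, inverse filter the speech with the remaining vocal-tract model, then integrate to undo the lip-radiation differencing. A leaky integrator is used here as an assumption, to keep the output bounded:

```python
import numpy as np
from scipy.signal import lfilter

def prune_poles(a, fs, max_bandwidth_hz=500.0, min_freq_hz=100.0):
    """Drop real poles (near zero frequency) and high-bandwidth poles from the
    prediction polynomial A(z) = 1 - sum a_k z^-k, then rebuild A(z)."""
    poly = np.concatenate(([1.0], -np.asarray(a)))
    poles = np.roots(poly)
    keep = []
    for p in poles:
        freq = abs(np.angle(p)) * fs / (2 * np.pi)            # pole frequency (Hz)
        bandwidth = -np.log(max(abs(p), 1e-12)) * fs / np.pi  # pole bandwidth (Hz)
        if freq > min_freq_hz and bandwidth < max_bandwidth_hz:
            keep.append(p)
    return np.real(np.poly(keep))          # pruned A(z) coefficients

def glottal_pulse_estimate(speech, a_pruned, leak=0.99):
    """Inverse filter with the pruned vocal-tract model, then (leaky) integrate
    to compensate the lip-radiation differencing."""
    residual = lfilter(a_pruned, [1.0], speech)     # apply 1/V(z)
    return lfilter([1.0], [1.0, -leak], residual)   # leaky integrator
```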

11 Example of Glottal Pulse Estimation with LS Algorithm for Normal AA Vowel :

12 Example of Glottal Pulse Estimation with LS Algorithm for Pressed AA Vowel :

13 Algorithm Drawbacks Normalized Error Criterion Calculation – For long voice signals the computation may become excessively complex. Closed-Period Identification – For noisy voice signals it may be difficult to determine where the normalized error criterion attains its minimal values (stage 4); an insufficiently accurate closed-period identification leads to a poor glottal pulse estimate. Minimal-Value Periods Criterion – The numerical criterion for determining the minimal-value periods may need to be adapted to some voice signals.

14 PSIAIF – Pitch Synchronous Iterative Adaptive Inverse Filtering (Alku) A reliable response to some drawbacks of the first inverse filtering algorithm. The algorithm is based on the speech production model: Glottal Excitation → Vocal Tract → Lip Radiation → Speech. Assumptions of this model: 1. The model is linear and time-invariant during a short time interval. 2. The interaction between the different processes is negligible. 3. The lip radiation effect is modeled with a fixed differentiator.

15 The PSIAIF Analysis Method The main idea: the vocal tract can be estimated accurately enough with LPC analysis if the tilting effect of the glottal source is eliminated from the speech spectrum. In the IAIF method, the glottal pulse estimate is computed with an iterative structure that is repeated twice. In the PSIAIF method, the final glottal wave estimate is computed pitch synchronously, in order to improve the performance of the LPC analysis in estimating the vocal tract transfer function.

16 Structure of the IAIF Algorithm (a code sketch is given below): LPC analysis of order 1 → Inverse Filtering → LPC analysis (vocal-tract order) → Inverse Filtering → Integration → LPC analysis (glottal-model order) → Inverse Filtering → LPC analysis (vocal-tract order) → Inverse Filtering → Integration.
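
A compact Python sketch of one IAIF pass following the block chain above. The model orders p and g are illustrative defaults (they are not given on the slide), autocorrelation LPC is used for brevity, and the integrator is leaky; implementations may differ in all of these:

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """Autocorrelation-method LPC; returns A(z) coefficients [1, -a_1, ...]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - k)] for k in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))

def integrate(x, leak=0.99):
    return lfilter([1.0], [1.0, -leak], x)   # leaky integration

def iaif(speech, p=10, g=4):
    """One IAIF pass (sketch): estimate and cancel the glottal tilt, estimate
    the vocal tract, inverse filter, and integrate to get the glottal wave."""
    # --- first iteration: 1st-order glottal tilt model ---
    a_g1 = lpc(speech, 1)
    s1 = lfilter(a_g1, [1.0], speech)           # remove spectral tilt
    a_vt1 = lpc(s1, p)                          # preliminary vocal-tract model
    glott1 = integrate(lfilter(a_vt1, [1.0], speech))
    # --- second iteration: refined glottal model of order g ---
    a_g2 = lpc(glott1, g)
    s2 = lfilter(a_g2, [1.0], speech)           # cancel glottal contribution
    a_vt2 = lpc(s2, p)                          # final vocal-tract model
    return integrate(lfilter(a_vt2, [1.0], speech))  # glottal wave estimate
```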

17 Structure of the PSIAIF Algorithm: High-Pass Filtering → IAIF-1 → Pitch Synchronization → IAIF-2. The speech signal to be analyzed is first high-pass filtered. The high-pass filtered signal is used as the input to the first IAIF analysis, whose output is one frame of a pitch-asynchronous glottal wave estimate.

18 The time indices of the maximum glottal openings are computed for each frame of the first glottal estimate. This computation requires knowledge of the average length of the pitch period; preliminary knowledge of the pitch period helps focus the search for maximum glottal openings on short time intervals. The final estimate of the glottal excitation is obtained by analyzing the high-pass filtered speech signal with the IAIF algorithm pitch synchronously. A sketch of the opening-instant search follows.
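
A sketch of how the maximum-opening instants might be located. The criterion used here, taking the maximum of the first glottal estimate within each window of roughly one average pitch period, is an assumption rather than the slide's exact procedure:

```python
import numpy as np

def max_opening_indices(glottal_est, avg_period):
    """Locate one maximum-opening instant per (approximate) pitch period by
    searching the glottal estimate in consecutive windows of avg_period samples."""
    g = np.asarray(glottal_est, dtype=float)
    avg_period = int(avg_period)
    indices = []
    for start in range(0, len(g) - avg_period + 1, avg_period):
        window = g[start:start + avg_period]
        indices.append(start + int(np.argmax(window)))
    return indices
```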

19 Example of Glottal Pulse Estimation with PSIAIF Algorithm for Normal AA Vowel :

20 Example of Glottal Pulse Estimation with PSIAIF Algorithm for Breathy AA Vowel :

21 Estimation of the Glottal Flow Derivative Waveform Through Formant Modulation (Plumpe) This algorithm is similar to Wong's least-squares algorithm, with a few differences in principle and in implementation. The vocal-tract model is assumed to be an all-pole model, where K is an even integer. The main goal is to estimate the vocal-tract transfer function using the covariance method of linear prediction. Once the vocal-tract model estimate is obtained, the glottal flow derivative can be estimated by inverse filtering.
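
A hedged LaTeX sketch of the relation implied above, with illustrative notation: once the vocal-tract estimate is available, the glottal flow derivative follows by inverse filtering the speech, with the lip-radiation differentiation folded into the source:

```latex
% Glottal flow derivative estimate by inverse filtering (illustrative notation):
% S(z): speech, \hat{V}(z): estimated all-pole vocal tract of even order K
\[
  \hat{V}(z) \;=\; \frac{1}{1 - \sum_{k=1}^{K} a_k z^{-k}},
  \qquad
  \hat{U}'_g(z) \;=\; \frac{S(z)}{\hat{V}(z)}
\]
```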

22 Analysis Procedure – Block Diagram: Linear-Phase High-Pass Filter → Speech Waveform Whitening (LPC) → Peak Picking (using Pitch Detection) → Measuring Formant Frequencies → Formant Tracking → Setting Initial Stationary Region → Extending Initial Stationary Region → Vocal Tract Model Estimation (LPC) → Polynomial Root Solving

23 Algorithm Stages 1. Linear-Phase High-Pass Filter – The speech signal is passed through a high-pass filter. 2. Speech Waveform Whitening – The high-pass filtered speech signal is whitened by inverse filtering with the covariance-method solution, using a one-pitch-period frame update and a two-pitch-period analysis window; real zeros are removed from the LPC solution. This yields a rough estimate of the glottal flow derivative. 3. Peak Picking – The rough estimate is scanned to identify the approximate times of the glottal pulses through negative peak picking, and the negative peaks are marked as the pulse instants. A sketch of the peak picking follows.
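
Stage 3 can be sketched with a standard peak picker; this is a generic approach, not necessarily the authors' exact procedure, and the spacing and amplitude thresholds are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def pick_glottal_pulses(residual, min_period):
    """Locate approximate glottal pulse instants as the strongest negative
    peaks of the whitened speech (prediction residual)."""
    r = np.asarray(residual, dtype=float)
    # Negative peaks of r are positive peaks of -r; enforce a minimum spacing
    # of about one pitch period and a modest amplitude threshold.
    peaks, _ = find_peaks(-r, distance=min_period,
                          height=0.3 * np.max(np.abs(r)))
    return peaks
```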

24 Example of Whitened Speech Waveform Peak Picking for Pressed AA Vowel :

25 4. Measuring Formant Frequencies – At each glottal cycle, a sliding covariance-based linear prediction analysis with a one-sample shift is used; the length of the rectangular analysis window is set according to the linear prediction order. A vocal-tract estimate is found for each window position. 5. Formant Tracking – At each glottal cycle, the four lowest formants, calculated from the vocal-tract estimates, are tracked by frequency using a Viterbi search; the cost function is the variance of the formant track including the proposed pole to be added to the end of the track. This yields the formant tracks. A simplified sketch follows.
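
A simplified sketch of stages 4–5: per-window LPC roots give candidate formants, and a greedy frame-to-frame linking stands in for the variance-based Viterbi search described above. The bandwidth threshold and the linking rule are illustrative simplifications, not the authors' exact method:

```python
import numpy as np

def formants_from_lpc(a, fs, max_bw_hz=700.0):
    """Convert prediction coefficients a (x[n] ~ sum a_k x[n-k]) into candidate
    formant frequencies (Hz), discarding wide-bandwidth and real poles."""
    poles = np.roots(np.concatenate(([1.0], -np.asarray(a))))
    freqs = []
    for p in poles:
        if p.imag <= 0:                    # keep one pole of each conjugate pair
            continue
        bw = -np.log(max(abs(p), 1e-12)) * fs / np.pi
        if bw < max_bw_hz:
            freqs.append(np.angle(p) * fs / (2 * np.pi))
    return sorted(freqs)

def track_formants(candidate_lists, n_formants=4):
    """Greedy linking across windows: each track takes the candidate closest to
    its running mean (a rough stand-in for the variance-based Viterbi cost)."""
    tracks = [list() for _ in range(n_formants)]
    for cands in candidate_lists:
        cands = list(cands)
        for t in tracks:
            if not cands:
                break
            target = np.mean(t) if t else cands[0]
            best = min(cands, key=lambda f: abs(f - target))
            t.append(best)
            cands.remove(best)
    return tracks
```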

26 Example of Formant Tracking for Pressed AA Vowel :

27 6. Setting Initial Stationary Region – Within each glottal cycle, a formant change function is defined, and its argument is varied to minimize it; the search is constrained by the linear prediction order and the glottal cycle length. The minimizing argument defines the initial stationary formant region. 7. Extending Initial Stationary Region – The initial stationary formant region is extended to obtain the final stationary formant region. The extension to the right is based on the following procedure:

28 Extension procedure (see the sketch after this list): 1. Identify the initial stationary region. 2. Calculate the average and standard deviation of the formant values over the interval. 3. Test whether the next point satisfies the criterion: if YES, include the point in the stationary region and return to step 2; if NO, stop extending. Extension to the left follows the same procedure, except that the final mean and standard deviation are kept constant.
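
A sketch of the extension procedure above, assuming the test is whether the next formant value lies within a fixed number of standard deviations of the region statistics (the exact threshold is not shown on the slide); extension to the left would reuse the final mean and standard deviation as constants instead of recomputing them:

```python
import numpy as np

def extend_right(formant_track, region_start, region_end, n_std=2.0):
    """Grow the stationary region to the right one sample at a time while the
    next formant value stays within n_std standard deviations of the region."""
    f = np.asarray(formant_track, dtype=float)
    end = region_end
    while end + 1 < len(f):
        region = f[region_start:end + 1]
        mean, std = region.mean(), region.std()
        if abs(f[end + 1] - mean) <= n_std * max(std, 1e-9):
            end += 1                 # include the point and recompute statistics
        else:
            break
    return region_start, end
```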

29 8. Vocal Tract Model Estimation – The prediction error filter is estimated using LPC over each stationary formant region determined in stage 7. 9. Polynomial Root Solving – Real poles (close to zero frequency) and high-bandwidth poles are removed from the filter. 10. Inverse Filtering – The original speech signal is passed through the inverse of the vocal-tract filter to obtain the estimate of the glottal pulse derivative.

30 Example of Glottal Pulse Estimation with FM Algorithm for Normal AA Vowel :

31 Example of Glottal Pulse Estimation with FM Algorithm for Pressed AA Vowel :

32 Algorithm Drawbacks Initial Stationary Region Extension – In some voice signals the first formant frequency is not stable during the closed phase; hence an accurate determination of the stationary formant region depends on a single numerical parameter.

