EE599-020 Audio Signals and Systems: Speech Production
Kevin D. Donohue, Electrical and Computer Engineering, University of Kentucky

Related Web Sites Speech modeling is a very popular topic. Many web sites are devoted to education and research in this area. A general search of speech production, modeling, synthesis, analysis … will turn up many interesting web sites. A few examples are given below:

Speech Generation Speech can be divided into fundamental building blocks of sound referred to as phonemes. All sounds result from turbulence through obstructed air flow. The vocal cords create quasi-periodic obstructions of air flow as a sound source at the base of the vocal tract; phonemes associated with the vocal cords are referred to as voiced speech. Single-shot turbulence from obstructed air flow through the vocal tract is primarily generated by the teeth, tongue, and lips; phonemes associated with non-periodic obstructed air flow are referred to as unvoiced speech.

Speech Production Models The general speech model: sources can be modeled as quasi-periodic impulse trains or random sequences of impulses. The vocal tract filter can be modeled as an all-pole filter related to the tract resonances. The radiator can be modeled as a simple gain with spatial direction (possibly some filtering).
[Block diagram: voiced source (quasi-periodic pulsed air) or unvoiced source (air burst or continuous flow) feeding the vocal tract filter and then the vocal radiator.]
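A rough MATLAB illustration of this source-filter model (a minimal sketch; the sampling rate, pitch, single resonance, and gain below are assumed illustrative values, not taken from the slides):
% Source-filter sketch: voiced (impulse train) and unvoiced (noise) excitation
% driving the same all-pole vocal tract filter and a simple radiator gain.
fs = 8000;                          % assumed sampling rate (Hz)
N  = round(0.5*fs);                 % half a second of samples
voicedSrc = zeros(N,1);             % quasi-periodic impulse train at an assumed 100 Hz pitch
voicedSrc(1:round(fs/100):end) = 1;
unvoicedSrc = randn(N,1);           % random (noise-like) excitation for unvoiced speech
r = 0.97; w0 = 2*pi*500/fs;         % single assumed resonance near 500 Hz
a = [1, -2*r*cos(w0), r^2];         % all-pole vocal tract filter 1/A(z)
gainRad = 0.5;                      % radiator modeled as a simple gain
voiced   = gainRad*filter(1,a,voicedSrc);
unvoiced = gainRad*filter(1,a,unvoicedSrc);
soundsc([voiced; unvoiced], fs)     % play the voiced then the unvoiced version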

Example
Create an "a" sound (as the "a" in about or the "u" in hum) and use the LPC command to model this sound as being generated by a quasi-periodic sequence of impulses exciting an all-pole filter. The LPC command finds a vector of filter coefficients a (with a(1) = 1) such that the prediction error is minimized:
Predict x(n) from previous samples: xhat(n) = -( a(2)x(n-1) + a(3)x(n-2) + ... + a(p+1)x(n-p) )
Compute the prediction error sequence with: e(n) = x(n) - xhat(n)
Use Z-transforms to find the transfer function of the filter that recovers x(n) from the LPCs and the error sequence e(n): E(z) = A(z)X(z), so X(z) = E(z)/A(z), where A(z) = 1 + a(2)z^-1 + ... + a(p+1)z^-p.
Identify the components related to the source, vocal tract, and radiator.

Script for Analysis
[y,fs] = wavread('aaaaa.wav');      % Read in wave file (audioread in current MATLAB releases)
[cb,ca] = butter(5,2*60/fs,'high'); % Filter to remove LF recording noise
yf = filtfilt(cb,ca,y);
[a,er] = lpc(yf,10);                % Compute LPC coefficients with model order 10
predy = filter(a,1,yf);             % Compute prediction error with all-zero filter
figure(1); plot(predy);
title('Prediction error'); xlabel('Samples'); ylabel('Amplitude')
recon = filter(1,a,predy);          % Compute reconstructed signal from error and all-pole filter
figure(2)                           % Plot reconstructed signal
plot(recon,'b')
hold on   % Plot original delayed by one sample so it does not entirely overlap the perfectly reconstructed signal
plot(yf(2:end),'r')
hold off
% By examining the error sequence, generate a simple impulse train to simulate its period (about 103 samples)
g = [];
for k=1:150
    g = [g, 1, zeros(1,103)];
end
% Run simulated error sequence through the all-pole filter
sim = filter(1,a,g);
soundsc([(sim')/std(sim); yf/std(yf)],fs)  % Play sounds to compare simulated with real

Script for Analysis
% Plot pole-zero diagram
figure(3)
r = (roots(a))
w = [0:.001:2*pi];
plot(real(r),imag(r),'xr',real(exp(j*w)),imag(exp(j*w)),'b');
title('Pole diagram of vocal tract filter')
xlabel('Real'); ylabel('Imaginary')
% Find resonant frequencies corresponding to poles
froots = (fs/2)*angle(r)/pi;
nf = find(froots > 0 & froots < fs/2);  % Keep frequencies corresponding to complex conjugate poles
figure(4)  % Examine average spectrum with formant frequencies
[pd,f] = psd(yf,4*1024,fs,hamming(2*1024),256);  % (older toolbox function; pwelch is the current equivalent)
dbspec = 20*log10(pd);
mxp = max(dbspec);  % Find max and min points for graphing vertical lines
mnp = min(dbspec);
plot(f,dbspec,'b')  % Plot PSD
hold on  % Overlay lines on the plot where formant frequencies were estimated from the LPCs
for k=1:length(nf)
    plot([froots(nf(k)), froots(nf(k))], [mnp(1), mxp(1)], 'k--')
end
hold off
title('PSD plot with formant frequencies (black broken lines)'); xlabel('Hertz'); ylabel('dB')

LPC Analysis Result
[Figure: PSD of the vowel. The broad peaks fall at the pole frequencies of the LPC model, set by the vocal tract shape; the fine ripple across frequency comes from the harmonics of the pitch frequency.]

Vocal Tract Filter Implementations
Direct form I for the all-pole model:
[Block diagram: a chain of z^-1 delay elements feeding adders around the feedback path.]
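A direct form I all-pole recursion, written out sample by sample, might look like the sketch below (equivalent to filter(1,a,x) for LPC coefficients a with a(1) = 1; the function name is just for illustration):
function y = allpole_df1(a, x)
% Direct form I all-pole filter: y(n) = x(n) - sum_{k=1}^{p} a(k+1)*y(n-k)
p = length(a) - 1;
y = zeros(size(x));
for n = 1:length(x)
    acc = x(n);
    for k = 1:min(p, n-1)
        acc = acc - a(k+1)*y(n-k);   % feedback taps through the z^-1 delay line
    end
    y(n) = acc;
end
end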

Vocal Tract Filter Implementations
Direct form I, second-order sections:
[Block diagram: the all-pole filter factored into a cascade of second-order sections, each built from z^-1 delay elements.]
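One way to realize the cascade (a sketch assuming the Signal Processing Toolbox functions tf2sos and sosfilt are available; a and the excitation predy come from the analysis script):
[sos, g] = tf2sos(1, a);     % factor 1/A(z) into second-order sections plus an overall gain
y = g*sosfilt(sos, predy);   % run the excitation through the cascade of sections
% Each row of sos is [b0 b1 b2 a0 a1 a2]; keeping each conjugate pole pair in its own
% section keeps coefficient quantization errors local and the cascade well conditioned.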

Vocal Tract Filter Implementations
Lattice implementations are popular because of their numerical error and stability properties. The filter is implemented in modular stages with coefficients directly related to the stability criterion and to the tube resonances of the vocal tract.
[Block diagram: lattice filter stages with reflection coefficients and z^-1 delays.]
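A lattice version can be built from the same LPC coefficients roughly as follows (a sketch assuming the Signal Processing Toolbox functions tf2latc and latcfilt; a and predy are from the analysis script):
k = tf2latc(1, a);          % reflection coefficients of the all-pole filter 1/A(z)
stable = all(abs(k) < 1)    % stable when every reflection coefficient has magnitude < 1
y = latcfilt(k, 1, predy);  % all-pole IIR lattice driven by the excitation
% The reflection coefficients correspond stage by stage to the acoustic tube model of the vocal tract.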

Example
a) Record a neutral vowel sound, estimate the formant frequencies, and estimate the size of the vocal tract based on a 341 m/s speed of sound, assuming an open-at-one-end tube model (see the sketch after this list).
b) Use the LPCs estimated from the neutral vowel sound to filter another sample of speech from the same speaker. Use them as an all-zero filter and then as an all-pole filter. Listen to the sound and describe what is happening.
c) Convert the LPC coefficients for the all-pole filter into second-order sections and implement the filter. Describe the advantages of this approach.
d) Modify the filter by maintaining the angles of the poles/zeros but moving their magnitudes closer to the unit circle. Listen to the sound and explain what is happening.
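For part (a), the open-at-one-end tube has resonances at f_k = (2k-1)c/(4L), so the lowest formant alone gives a length estimate; a small sketch (reusing froots and nf from the analysis script) might be:
c  = 341;                   % speed of sound in m/s, as given in the example
F1 = min(froots(nf));       % lowest positive formant frequency from the LPC poles
L  = c/(4*F1);              % open-at-one-end tube: F1 = c/(4*L)
fprintf('Estimated vocal tract length: %.1f cm\n', 100*L)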

Homework (1)
a) Record a free vowel sound and estimate the size of your vocal tract based on the formant frequencies.
b) Compute the LPCs from a free vowel sound and use the LPCs to filter another segment of speech with -10 dB of white noise added (a noise-scaling sketch follows below). Use the LPCs as an all-zero filter and as an all-pole filter. Describe the sound of the filtered outputs and explain what is happening between the two filters.
Extra credit (1 point): move the poles and zeros further away from the unit circle and repeat part b). Describe the effect on the filtered sound when the poles and zeros are moved away from the unit circle.
Submit this description and the m-files used to process the data.
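One possible way to scale the white noise to -10 dB relative to the speech power for part (b) (a sketch, not the required method; yf and a are from the analysis script):
noise = randn(size(yf));                               % white noise
noise = noise*sqrt(10^(-10/10)*var(yf)/var(noise));    % set noise power 10 dB below the speech power
ynoisy = yf + noise;
allZeroOut = filter(a, 1, ynoisy);                     % LPCs as an all-zero (inverse/whitening) filter
allPoleOut = filter(1, a, ynoisy);                     % LPCs as an all-pole (synthesis) filter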