Text-Independent Speaker Identification System
Ala’a Spaih, Abeer Abu-Hantash
Directed by Dr. Allam Mousa
Outline for Today
1. Speaker Recognition Field
2. System Overview
3. MFCC & VQ
4. Experimental Results
5. Live Demo
Speaker Recognition Field
Speaker Verification: text dependent / text independent
Speaker Identification: text dependent / text independent
System Overview
Training mode: speech input -> feature extraction -> speaker modeling -> speaker model database.
Testing mode: speech input -> feature extraction -> feature matching against the speaker model database -> decision logic -> speaker ID.
Feature Extraction
Feature extraction is a special form of dimensionality reduction. The aim is to extract the formants.
Feature Extraction
The extracted features must have specific characteristics: they should be easily measurable, occur naturally and frequently in speech, remain stable over time, vary as much as possible among speakers while staying consistent for each speaker, and not be affected by speaker health or background noise.
There are many algorithms to extract them: LPC, LPCC, HFCC, MFCC. We used the Mel Frequency Cepstral Coefficients (MFCC) algorithm.
Feature Extraction Using MFCC
Input speech -> framing and windowing -> fast Fourier transform -> absolute value -> mel-scaled filter bank -> log -> discrete cosine transform -> feature vectors
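A minimal NumPy/SciPy sketch of this chain, assuming a 25 ms frame, 10 ms hop, Hamming window, 512-point FFT, 29 triangular mel filters, and 12 cepstral coefficients (illustrative values, not necessarily the project's exact settings):

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs, frame_len=0.025, frame_hop=0.010,
         n_filters=29, n_coeffs=12, n_fft=512):
    # 1. Framing and windowing (assumes len(signal) >= one frame)
    frame_size = int(frame_len * fs)
    hop = int(frame_hop * fs)
    n_frames = 1 + (len(signal) - frame_size) // hop
    window = np.hamming(frame_size)
    frames = np.stack([signal[i * hop:i * hop + frame_size] * window
                       for i in range(n_frames)])

    # 2. FFT and absolute value (magnitude spectrum)
    spectrum = np.abs(np.fft.rfft(frames, n_fft))

    # 3. Mel-scaled filter bank: triangular filters equally spaced on the mel axis
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(1, n_filters + 1):
        left, center, right = bins[j - 1], bins[j], bins[j + 1]
        fbank[j - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[j - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    mel_spectrum = spectrum @ fbank.T

    # 4. Log, then DCT; keep the first n_coeffs cepstral coefficients
    log_mel = np.log(mel_spectrum + 1e-10)
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```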
Framing and Windowing
[Figure: FFT spectrum of a windowed frame, showing the contributions of the glottal pulse and the vocal tract.]
Mel-Scaled Filter Bank
The spectrum is mapped to a mel spectrum using: mel(f) = 2595 * log10(1 + f/700)
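A quick check of the formula shows why this warping is used: low frequencies stay roughly linear (1 kHz maps to about 1000 mel) while high frequencies are compressed:

```python
import math

for f in (300.0, 1000.0, 4000.0):
    mel = 2595.0 * math.log10(1.0 + f / 700.0)   # mel(f) = 2595 * log10(1 + f/700)
    print(f"{f:6.0f} Hz -> {mel:6.0f} mel")       # ~402, ~1000, ~2146 mel
```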
Cepstrum
Taking the DCT of the logarithm of the mel spectrum yields the MFCC coefficients; in this cepstral domain the glottal pulse and the vocal-tract impulse response can be separated.
Classification
Classification means building a unique model for each speaker in the database. There are two major types of models for classification:
Stochastic models: GMM, HMM, ANN
Template models: VQ, DTW
We used the VQ algorithm.
VQ Algorithm
The VQ technique consists of extracting a small number of representative feature vectors (codewords). The first step is to build a speaker database consisting of N codebooks, one for each speaker: the speaker's feature vectors are clustered into codewords, which together form the speaker model (codebook). This clustering is done with the K-means algorithm.
K-means Clustering
Start -> choose the number of clusters k -> initialize centroids -> compute the distance from each object to each centroid -> group objects by minimum distance -> recompute centroids -> repeat until no object changes group -> end.
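A minimal sketch of this loop for building one speaker's codebook from an MFCC matrix (plain K-means with random initialization; the actual implementation may differ, e.g. LBG-style splitting):

```python
import numpy as np

def kmeans_codebook(features, k, n_iter=50, seed=0):
    """Cluster MFCC feature vectors (N x d) into a k-entry codebook (k x d)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids with k randomly chosen feature vectors
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors
        new_centroids = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # no change -> converged
            break
        centroids = new_centroids
    return centroids   # the speaker's codebook
```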
VQ Example
Given a set of data points, split them into 4 codebook vectors with initial centroids at (2,2), (4,6), (6,5), (8,8).
VQ Example
Once there is no more change, the feature space will be partitioned into 4 regions. Any input feature can be classified as belonging to one of the 4 regions, and the entire codebook can be specified by the 4 centroid points.
K-means Clustering
If we set the codebook size to 8, the output of the clustering is: MFCCs of a speaker (1000 x 12) -> VQ -> speaker codebook (8 x 12).
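For illustration only, the same shape transformation using scikit-learn's KMeans as a stand-in clusterer (the input here is random placeholder data, not real MFCCs, and the project uses its own K-means/VQ code):

```python
import numpy as np
from sklearn.cluster import KMeans

mfccs = np.random.randn(1000, 12)      # placeholder for a speaker's MFCC matrix (1000 frames x 12 coefficients)
codebook = KMeans(n_clusters=8, n_init=10).fit(mfccs).cluster_centers_
print(codebook.shape)                  # (8, 12): the speaker's codebook
```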
Feature Matching
For each codebook a distortion measure is computed, and the speaker with the lowest distortion is chosen. The distortion measure is defined using the Euclidean distance.
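A sketch of this matching rule, assuming codebooks is a dictionary mapping speaker IDs to their (codebook size x 12) codebooks and test_features is the MFCC matrix of the unknown utterance (names are illustrative):

```python
import numpy as np

def vq_distortion(test_features, codebook):
    # Distance from every test vector to every codeword; keep the nearest, average over frames
    dists = np.linalg.norm(test_features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def identify_speaker(test_features, codebooks):
    # Return the speaker ID whose codebook gives the lowest average distortion
    distortions = {spk: vq_distortion(test_features, cb) for spk, cb in codebooks.items()}
    return min(distortions, key=distortions.get), distortions
```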
System Operates in Two Modes
Offline (training).
Online: monitoring microphone inputs -> MFCC feature extraction -> calculate VQ distortion -> make decision & display.
Applications
Speaker recognition for authentication: banking applications.
Forensic speaker recognition: proving the identity of a recorded voice can help to convict a criminal or clear an innocent person in court.
Speaker recognition for surveillance: electronic eavesdropping of telephone and radio conversations.
Results
Setup: 12 MFCCs, 29 filter banks, codebook size 64, ELSDSR database. The table shows how the system identifies the speaker according to the Euclidean distance calculation; the lowest distance in each row picks the speaker.

          Sp 1      Sp 2      Sp 3      Sp 4      Sp 5
Test 1    10.7492   13.2712   17.8646   14.7885   13.2859
Test 2    13.2364   10.2740   13.2884   11.7941   14.0461
Test 3    17.5438   16.1177   11.9029   16.2916   17.7199
Test 4    16.1360   13.7095   15.5633   11.7528   16.7327
Test 5    14.9324   15.7028   17.2842   17.8917   12.3504
Results
Number of MFCCs vs. ID rate:
    No. of MFCC    ID rate
    5              76 %
    12             91 %
    20
Frame size vs. ID rate:
    Frame size 10-30 ms: good
    Above 30 ms: bad
Results The effect of the codebook size on the ID rate & VQ distortion.
Results Number of filter-banks Vs. ID rate & VQ distortion.
Results
The performance of the system on different test speech lengths:
    Test speech length    ID rate
    0.2 sec               60 %
    2 sec                 85 %
    6 sec                 90 %
    10 sec                95 %
Summary
We studied the effect of changing parameters of the MFCC and VQ algorithms. Our system identifies the speaker regardless of the language and the text. Results are satisfactory when the training and testing environments are the same and the test data is at least several seconds long.
Thank You