Component Score Weighting for GMM-based Text-Independent Speaker Verification
Liang Lu, SNLP Unit, France Telecom R&D Beijing
2008-01-21, luliang07@gmail.com
Outline
- Introduction
- Conventional LLR and motivation for detailed score processing
- Component Score Weighting
- Experimental results
- Conclusion
Introduction
State of the art:
- GMM-UBM framework
- GMM-based model construction
- Log-likelihood ratio (LLR) based decision making
- Score normalisation (Tnorm, Hnorm, etc.) for robustness
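For reference, the decision rule above is the standard GMM-UBM average frame-level log-likelihood ratio; a minimal sketch in our own notation (not reproduced from the slides), where X = {x_1, ..., x_T} is the test utterance, lambda_spk the speaker GMM and lambda_UBM the universal background model:

\Lambda(X) = \frac{1}{T} \sum_{t=1}^{T} \big[ \log p(x_t \mid \lambda_{\mathrm{spk}}) - \log p(x_t \mid \lambda_{\mathrm{UBM}}) \big],
\qquad \text{accept the claimed speaker if } \Lambda(X) \ge \theta .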
Introduction
Major challenges:
- Limited data for speaker model training
- Mismatch between training and testing data
Motivation for Component Score Weighting
Motivation:
- Insufficient training data and the mismatch between training and testing conditions make the mixtures in a GMM differ in discriminative capability.
- The LLR simply sums the score of each mixture without considering its reliability.
- Would it help if the LLR took the discriminative capability of each mixture into account?
Question:
- If it does, how do we measure the discriminative capabilities of the Gaussian mixture components?
Component Score Weighting
Our method:
- First, scatter the per-frame LLR over the individual Gaussian mixtures.
- For each frame, the k-th mixture is the dominant one; we call its contribution the dominant score and the remaining contribution the residual score (the slide's formulas are reconstructed below).
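The formula images on this slide are not preserved; the following is one plausible reconstruction, assuming a K-component GMM and that the dominant mixture for frame x_t is the top-scoring UBM component (all notation here is ours, not the authors'):

p(x_t \mid \lambda) = \sum_{k=1}^{K} w_k\, p_k(x_t \mid \lambda),
\qquad k_t = \arg\max_k\, w_k\, p_k(x_t \mid \lambda_{\mathrm{UBM}}),

s_d(t) = \log \frac{w_{k_t}\, p_{k_t}(x_t \mid \lambda_{\mathrm{spk}})}{w_{k_t}\, p_{k_t}(x_t \mid \lambda_{\mathrm{UBM}})} \quad \text{(dominant score)},
\qquad s_r(t) = \Lambda(x_t) - s_d(t) \quad \text{(residual score)},

where \Lambda(x_t) is the per-frame LLR defined earlier.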
Component Score Weighting
Extending the original LLR:
- After this split, the original LLR separates into two score series: the dominant score series and the residual score series.
- Original: the per-frame dominant and residual scores are simply summed, as in the conventional LLR.
- Extended: if we consider the discriminative capacity of each Gaussian mixture, each score series is weighted accordingly (see the sketch below).
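The slide's original and extended formulas are also missing; under the same assumptions as above they might be written as follows, where the weighting functions f and g are placeholders of ours rather than the authors' notation:

\text{Original:}\quad \Lambda(X) = \frac{1}{T} \sum_{t=1}^{T} \big[ s_d(t) + s_r(t) \big]

\text{Extended:}\quad \Lambda_{\mathrm{ext}}(X) = \frac{1}{T} \sum_{t=1}^{T} \big[ f\big(s_d(t)\big)\, s_d(t) + g\big(s_r(t)\big)\, s_r(t) \big]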
Component Score Weighting
Now the question is: how can we know the discriminative capability of each Gaussian mixture, and what should the weights be?
Our assumption: high dominant scores have better discriminative capability and should be emphasised.
Component Score Weighting
Why the high dominant scores?
- If the test utterance is from the target speaker, more components of the speaker GMM should score high compared with the UBM.
- If the utterance is from an impostor, high-valued components of the speaker GMM are hardly higher than those of the UBM.
- If the test utterance is from the target speaker, low-valued components arise because those mixtures are not well trained or because of mismatch between training and testing data.
Component Score Weighting
- We simply use an exponential function as the weighting function, so low dominant scores are restrained and high dominant scores are emphasised.
- The residual scores have little importance, and we finally ignore them.
- The final LLR score is computed from the weighted dominant scores (a sketch follows below).
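As an illustration only, the following minimal Python sketch implements this scoring scheme under the assumptions stated earlier: the dominant component is chosen per frame from the UBM, an exponential weight exp(ALPHA * s_d) emphasises high dominant scores, residual scores are dropped, and the weighted dominant scores are averaged. The exact exponential form, the constant ALPHA, and the final normalisation are not given on the slides and are assumptions here, not the authors' implementation.

# Illustrative sketch of component score weighting (CSW) for GMM-UBM scoring.
# Not the authors' code: the exponential form, ALPHA, and the final averaging
# are assumptions; the slides only state that an exponential weighting is used
# and that residual scores are ignored.
import numpy as np
from scipy.stats import multivariate_normal

ALPHA = 1.0  # assumed sharpness of the exponential weighting (hypothetical)

def component_loglikes(X, weights, means, covs):
    """Per-frame, per-component log( w_k * N(x_t; mu_k, Sigma_k) ) -> (T, K)."""
    T, K = X.shape[0], len(weights)
    ll = np.empty((T, K))
    for k in range(K):
        ll[:, k] = np.log(weights[k]) + multivariate_normal.logpdf(
            X, mean=means[k], cov=covs[k])
    return ll

def csw_score(X, spk_gmm, ubm_gmm):
    """Weighted LLR: keep only each frame's dominant-component score,
    emphasise high dominant scores exponentially, ignore residual scores."""
    ll_spk = component_loglikes(X, *spk_gmm)     # (T, K) speaker-model scores
    ll_ubm = component_loglikes(X, *ubm_gmm)     # (T, K) UBM scores
    k_dom = ll_ubm.argmax(axis=1)                # dominant UBM component per frame
    t = np.arange(X.shape[0])
    s_dom = ll_spk[t, k_dom] - ll_ubm[t, k_dom]  # per-frame dominant scores
    w = np.exp(ALPHA * s_dom)                    # emphasise high dominant scores
    return float(np.sum(w * s_dom) / np.sum(w))  # weighted average as final LLR

# Example usage with a hypothetical 2-component, 2-dimensional model:
# spk_gmm = (np.array([0.5, 0.5]), [np.zeros(2), np.ones(2)], [np.eye(2)] * 2)
# ubm_gmm = (np.array([0.5, 0.5]), [np.zeros(2), 2 * np.ones(2)], [np.eye(2)] * 2)
# print(csw_score(np.random.randn(100, 2), spk_gmm, ubm_gmm))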
Experimental Results
Experiments are performed on the 1conv4w-1conv4w task of the 2006 NIST SRE corpora.

Table: Results for the GMM baseline and for GMM with Component Score Weighting (CSW), with and without TNorm

System                 EER (%)   MinDCF (x100)
GMM baseline           7.64      4.16
GMM with CSW           7.45      3.66
GMM with TNorm         6.96      3.48
GMM with CSW & TNorm   7.14      3.10
Conclusion
- Splitting the LLR score and considering the discriminative capacity of the Gaussian mixtures helps to cope with insufficient training data and with mismatch between training and testing conditions.
- The score weighting function should be consistent with the component score distribution and with the discriminative capacity of the components.
- The exponential weighting function used in this investigation is not universal and may not be optimal; more work is needed to find an optimal weighting function.