Handling Uncertain Observations in Unsupervised Topic-Mixture Language Model Adaptation
Ekapol Chuangsuwanich 1, Shinji Watanabe 2, Takaaki Hori 2, Tomoharu Iwata 2, James Glass 1
Presenter: 郝柏翰, 2013/03/05
ICASSP 2012
1 MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA
2 NTT Communication Science Laboratories, NTT Corporation, Japan
Outline
Introduction
Topic Tracking Language Model (TTLM)
TTLM Using Confusion Network Inputs (TTLMCN)
Experiments
Conclusion
Introduction
In a real environment, acoustic and language features often vary with the speaker, speaking style, and topic changes. To accommodate these changes, speech recognition approaches that incrementally track the changing environment have attracted attention. This paper proposes a topic tracking language model that adaptively tracks changes in topics based on the current text information and previously estimated topic models, in an on-line manner.
TTLM
Tracking temporal changes in language environments.
TTLM
A long session of speech input is divided into chunks, and each chunk is modeled by its own topic distribution. The current topic distribution depends on the topic distributions of the past H chunks and precision parameters α, as follows:
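A plausible form of this dependency, assuming the Dirichlet prior used in topic tracking models (the number of topics K and the per-lag precisions α_{t,h} are assumed notation; the paper's exact parameterization may differ):

P(\theta_t \mid \theta_{t-1}, \ldots, \theta_{t-H}, \alpha) \propto \prod_{k=1}^{K} \theta_{t,k}^{\sum_{h=1}^{H} \alpha_{t,h} \theta_{t-h,k} - 1}

Under such a prior, the mean of θ_t is a weighted average of the past H topic distributions, and larger precisions tie the current chunk more tightly to its history.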
TTLM
Given the topic distribution, the unigram probability of a word w_m in the chunk can be recovered from the topic and word probabilities, where θ_k gives the unigram probability of word w_m under topic k. The adapted n-gram can then be used in a second recognition pass for better results.
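A sketch of this mixture, writing φ_t for the chunk's topic distribution (an assumed symbol) and θ_{k,w} for the unigram probability of word w under topic k:

P(w_m \mid \phi_t, \theta) = \sum_{k=1}^{K} \phi_{t,k} \, \theta_{k, w_m}

The adapted unigram is then used to adapt the n-gram applied in the second recognition pass.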
TTLMCN
Consider a confusion network with M word slots. Each word slot m can contain a different number of arcs A_m, with each arc a carrying a word w_ma and a corresponding arc posterior d_ma. s_m is a binary selection variable, where s_m = 1 indicates that the arc is selected.
(Figure: a confusion network spanning chunks 1-3 and word slots 1-3; e.g., slot 1 contains A_1 = 3 arcs.)
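A minimal sketch of a confusion network as a data structure (the words and posteriors below are illustrative, not from the paper):

# Each chunk is a list of word slots; each slot holds (word, posterior) arcs,
# and the arc posteriors within a slot sum to one.
chunk = [
    [("the", 0.7), ("a", 0.2), ("uh", 0.1)],   # slot 1: A_1 = 3 arcs
    [("topic", 0.9), ("topics", 0.1)],         # slot 2: A_2 = 2 arcs
    [("model", 1.0)],                          # slot 3: A_3 = 1 arc
]

# Selecting the highest-posterior arc in every slot recovers the 1-best hypothesis.
one_best = [max(slot, key=lambda arc: arc[1])[0] for slot in chunk]
print(one_best)  # ['the', 'topic', 'model']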
TTLMCN
For each chunk t, we can write the joint distribution of words, latent topics, and arc selections, conditioned on the topic probabilities, unigram probabilities, and arc posteriors, as follows:
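A sketch of one plausible factorization, assuming each slot selects exactly one arc and the selected word is generated from the topic mixture (this is an assumed form, not necessarily the paper's exact equation):

P(w_t, z_t, s_t \mid \phi_t, \theta, d_t) = \prod_{m=1}^{M} \prod_{a=1}^{A_m} \left[ d_{ma} \, \phi_{t, z_{ma}} \, \theta_{z_{ma}, w_{ma}} \right]^{s_{ma}}, \qquad \sum_{a=1}^{A_m} s_{ma} = 1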
TTLMCN
(Figure: graphical representation of TTLMCN.)
Experiments (MIT-OCW)
MIT-OCW is mainly composed of lectures given at MIT. Each lecture is typically two hours long. We segmented the lectures with a voice activity detector into utterances averaging two seconds each.
Comparison of TTLM and TTLMCN
The topic probability of TTLMCN is more similar to that of the oracle experiment than TTLM's, especially in the low-probability regions. The KL divergence between TTLM and the oracle was 3.3, while that between TTLMCN and the oracle was 1.3.
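A minimal sketch of how such a KL comparison could be computed; the topic distributions below are illustrative, not the paper's data:

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete distributions; eps avoids log(0).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical topic distributions over four topics for one chunk.
oracle = [0.60, 0.30, 0.05, 0.05]
ttlm   = [0.40, 0.20, 0.20, 0.20]
print(kl_divergence(ttlm, oracle))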
Conclusion
We described an extension of the TTLM that handles errors in speech recognition. The proposed model uses a confusion network as input instead of a single ASR hypothesis, which improved performance even in high-WER situations. The gain in word error rate was not very large, since the LM typically contributes little to the overall performance of LVCSR.
Significance Test (T-Test)
H0: the experimental group and the control group have the same (normal) distribution.
H1: the experimental group and the control group have different distributions.
Significance Test (T-Test)
Example:
X: 5 7 5 3 5 3 3 9
Y: 8 1 4 6 6 4 1 2
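A sketch of the corresponding two-sample test on the example data, using SciPy (assuming the digits above are individual sample values):

from scipy import stats

x = [5, 7, 5, 3, 5, 3, 3, 9]
y = [8, 1, 4, 6, 6, 4, 1, 2]

# Two-sample t-test: H0 says X and Y come from populations with equal means.
t_stat, p_value = stats.ttest_ind(x, y)
print(t_stat, p_value)  # fail to reject H0 when p_value exceeds the significance level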