1
Language Transfer of Audio Word2Vec:
Learning Audio Segment Representations without Target Language Data
Chia-Hao Shen, Janet Y. Sung, Hung-Yi Lee
Speaker: Hung-Yi Lee
2
Outline
- Introduction
- Training of Audio Word2Vec
- Language Transfer
- Application to Query-by-example Spoken Term Detection (STD)
- Concluding Remarks
3
Audio Word to Vector
[Figure: each word-level audio segment is fed into the model, which outputs one vector per segment.]
As its name implies, the model turns a word-level audio segment into a vector. It learns from lots of audio without annotation.
4
Audio Word to Vector
The audio segments corresponding to words with similar pronunciations are close to each other.
[Figure: embedding space in which instances of "dog" and "dogs" cluster together, and instances of "never" and "ever" cluster together.]
5
Language Transfer
[Figure: a model is trained on an audio collection without annotation, then applied to language X, which is not included in the training audio.]
Training is unsupervised: it uses only an audio collection without annotation.
Can we train a universal model that can be applied even to unknown languages?
6
Language Transfer
Why consider a universal model for all languages? If you want to apply the model to language X, why not simply train a model on audio of language X?
- Many audio files are code-switched across several different languages.
- We may want to apply the model to audio data on the Internet, which spans hundreds of languages.
- The audio collection used for training may not cover all the languages, so it would be beneficial to have a universal model.
Star Wars analogy: the 3PO-series protocol droids are equipped with a TranLang III communications module, which comes with up to six million galactic languages at purchase (common and obscure, organic and inorganic), but not the Ewok language. The module's phonetic pattern analyzers provide the capability to learn and translate new languages not in its existing database: C-3PO first communicated with the Ewoks using the closely associated Yuzzum language, and through observation gradually pieced together enough Ewok to be conversational.
7
Outline
- Introduction
- Training of Audio Word2Vec
- Language Transfer
- Application to Query-by-example Spoken Term Detection
- Concluding Remarks
8
Audio Word to Vector
[Figure: word-level audio segments fed into the model.]
There are lots of segmentation approaches. In the following discussion, we assume the segmentation is already given.
9
Sequence-to-sequence Auto-encoder
We use a sequence-to-sequence auto-encoder; the training is unsupervised.
[Figure: an RNN Encoder reads the acoustic features x1 x2 x3 x4 of an audio segment; its final state is the vector we want (similar to the model used in speech summarization).]
10
Sequence-to-sequence Auto-encoder
The RNN encoder and decoder are jointly trained.
[Figure: the RNN Encoder reads the input acoustic features x1 x2 x3 x4 of an audio segment; the RNN Decoder reconstructs them as y1 y2 y3 y4.]
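A minimal PyTorch sketch may make the structure concrete; the hidden size, the zero-input decoding scheme, and the name `SeqAutoencoder` are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Sequence-to-sequence auto-encoder: the encoder's final state is
    the fixed-length vector representing the audio segment."""
    def __init__(self, feat_dim=39, hidden_dim=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):
        # x: (batch, time, feat_dim), e.g. 39-dim MFCC frames
        _, z = self.encoder(x)                   # z: (1, batch, hidden_dim)
        # Reconstruct the sequence conditioned only on z (zeros as inputs).
        out, _ = self.decoder(torch.zeros_like(x), z)
        return self.output(out), z.squeeze(0)

model = SeqAutoencoder()
x = torch.randn(8, 50, 39)                       # a batch of dummy segments
recon, z = model(x)                              # z is the audio segment vector
loss = nn.functional.mse_loss(recon, x)          # reconstruction objective
```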
11
What does the machine learn?
Text word to vector:
V(Rome) − V(Italy) + V(Germany) ≈ V(Berlin)
V(king) − V(queen) + V(aunt) ≈ V(uncle)
Audio word to vector (phonetic information):
V(GIRLS) − V(GIRL) + V(PEARL) ≈ V(PEARLS)
V(CATS) − V(CAT) + V(IT) ≈ V(ITS)
[Chung, Wu, Lee, Lee, Interspeech 16]
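As a worked illustration of how such an analogy is tested, here is a toy numpy sketch; the vectors are random placeholders standing in for learned audio segment embeddings, so only the procedure (vector arithmetic plus nearest neighbor under cosine similarity) is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["girl", "girls", "pearl", "pearls", "cat", "cats", "it", "its"]
V = {w: rng.standard_normal(128) for w in vocab}   # stand-in embeddings

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# V(GIRLS) - V(GIRL) + V(PEARL) should land near V(PEARLS).
query = V["girls"] - V["girl"] + V["pearl"]
candidates = [w for w in vocab if w not in {"girls", "girl", "pearl"}]
answer = max(candidates, key=lambda w: cosine(query, V[w]))
print(answer)   # with real embeddings this comes out as "pearls"
```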
12
Outline
- Introduction
- Training of Audio Word2Vec
- Language Transfer
- Application to Query-by-example Spoken Term Detection
- Concluding Remarks
13
Language Transfer
Training: we train the sequence-to-sequence auto-encoder on a source language with a large amount of data.
Testing: we apply the RNN encoder to a new language.
[Figure: RNN Encoder and RNN Decoder trained on the source language; at test time, the RNN Encoder trained on the source language produces the vector representation z for the target language.]
14
Experimental Setup
- 1-layer GRU as encoder and decoder.
- Training with SGD; the initial learning rate was 1 and decayed by a factor of 0.95 every 500 batches (a sketch follows below).
- Acoustic features: 39-dim MFCC.
- Word boundaries were obtained by forced alignment with reference transcriptions, so the results are oracle in this respect; we address this issue in another ICASSP paper.
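This schedule maps directly onto a standard PyTorch optimizer plus StepLR scheduler; the loop below is a sketch reusing the `model` from the earlier auto-encoder sketch, with stand-in batches rather than real MFCC data:

```python
import torch

loader = [torch.randn(8, 50, 39) for _ in range(3)]  # stand-in feature batches

optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
# Multiply the learning rate by 0.95 every 500 optimizer steps,
# stepping the scheduler once per batch.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.95)

for x in loader:
    recon, _ = model(x)
    loss = torch.nn.functional.mse_loss(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```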
15
Experimental Setup - Corpus
English is our source language, while the other languages are target languages.
- English (LibriSpeech): training data of 2.2M word-level audio segments; testing data of 250K audio segments.
- French, German, Czech and Spanish (GlobalPhone): testing data of 20K audio segments.
16
Phonetic Information
To check whether the vectors capture phonetic information, we compare two measures for each pair of words (see the sketch below):
- Edit distance between their phoneme sequences, e.g. "ever" = EH V ER vs. "never" = N EH V ER gives distance 1.
- Cosine similarity between the vectors the RNN Encoder produces for the two audio segments.
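Both measures are straightforward to compute; a minimal numpy sketch (the phoneme transcriptions for "ever" and "never" are from the slide):

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i-1, j] + 1,                       # deletion
                          d[i, j-1] + 1,                       # insertion
                          d[i-1, j-1] + (a[i-1] != b[j-1]))    # substitution
    return d[len(a), len(b)]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "ever" = EH V ER, "never" = N EH V ER: one insertion, so distance 1.
assert edit_distance(["EH", "V", "ER"], ["N", "EH", "V", "ER"]) == 1
```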
17
Phonetic Information
Model trained on English, tested on English.
[Figure: mean and variance of cosine similarity plotted against phoneme sequence edit distance, from the same pronunciation (distance 0) to very different pronunciations. Larger phoneme sequence edit distance corresponds to smaller cosine similarity.]
18
Phonetic Information
Model trained on English, tested on other languages.
[Figure: cosine similarity vs. phoneme sequence edit distance for the target languages.]
Audio Word2Vec still captures phonetic information even though the model has never heard the language.
19
Visualization
To visualize the embedding vector of each word, we feed every audio instance of the word (e.g. "day") into the RNN Encoder, average the resulting vectors, and project the averages to 2-D.
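A sketch of that pipeline, assuming numpy and scikit-learn; the talk does not say which projection it uses, so PCA stands in here, and the per-instance vectors are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

# vectors_by_word maps each word to the encoder outputs of all of its
# spoken instances; here filled with random placeholders.
rng = np.random.default_rng(0)
vectors_by_word = {w: rng.standard_normal((20, 256)) for w in ["day", "way", "say"]}

words = list(vectors_by_word)
means = np.stack([vectors_by_word[w].mean(axis=0) for w in words])  # average per word
coords = PCA(n_components=2).fit_transform(means)                   # project to 2-D
for w, (x, y) in zip(words, coords):
    print(f"{w}: ({x:.2f}, {y:.2f})")
```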
20
Visualization
Learned on English, and applied to the other languages.
[Figure: 2-D projections; French on the left, German on the right.]
21
Outline
- Introduction
- Training of Audio Word2Vec
- Language Transfer
- Application to Query-by-example Spoken Term Detection (STD)
- Concluding Remarks
22
Query-by-example Spoken Term Detection
Also known as unsupervised spoken term detection, zero-resource spoken content retrieval, etc.
[Figure: a user speaks the query "ICASSP", and the system locates "ICASSP" in the spoken content.]
The task: compute the similarity between spoken queries and audio files at the acoustic level, and find where the query term occurs.
23
Query-by-example Spoken Term Detection
DTW for query-by-example:
- Segmental DTW [Zhang, ICASSP 10]
- Subsequence DTW [Anguera, ICME 13][Calvo, MediaEval 14]
- Adding slope constraints [Chan & Lee, Interspeech 10]
[Figure: alignment paths between a spoken query and an utterance; the blue path is better than the green one.]
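For reference, a minimal numpy sketch of subsequence DTW, without the slope constraints mentioned above; the Euclidean frame distance and the sequence sizes are illustrative assumptions:

```python
import numpy as np

def subsequence_dtw(query, utterance):
    """Subsequence DTW: the query may start and end anywhere in the
    utterance. Rows index query frames, columns utterance frames."""
    Q, U = len(query), len(utterance)
    dist = np.linalg.norm(query[:, None, :] - utterance[None, :, :], axis=-1)
    acc = np.empty((Q, U))
    acc[0, :] = dist[0, :]                           # free starting column
    acc[1:, 0] = np.cumsum(dist[1:, 0]) + dist[0, 0]
    for i in range(1, Q):
        for j in range(1, U):
            acc[i, j] = dist[i, j] + min(acc[i-1, j],    # advance query only
                                         acc[i, j-1],    # advance utterance only
                                         acc[i-1, j-1])  # advance both
    return acc[-1, :].min()                          # free ending column

query = np.random.randn(20, 39)                      # dummy MFCC sequences
utterance = np.random.randn(200, 39)
score = subsequence_dtw(query, utterance)            # lower cost = better match
```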
24
Query-by-example Spoken Term Detection
Vector-based retrieval is much faster than DTW:
- Off-line: the audio archive is divided into variable-length audio segments, and Audio Word to Vector maps each segment to a vector.
- On-line: Audio Word to Vector maps the spoken query to a vector; vector similarity then produces the search result (see the sketch below).
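The on-line step reduces to a nearest-neighbor search over pre-computed vectors, which is where the speedup over DTW comes from; a minimal numpy sketch with random stand-in vectors:

```python
import numpy as np

# Off-line: encode every archive segment once and pre-normalize.
rng = np.random.default_rng(0)
archive = rng.standard_normal((20_000, 256))           # stand-in segment vectors
archive /= np.linalg.norm(archive, axis=1, keepdims=True)

# On-line: encode the query and rank segments by cosine similarity.
query = rng.standard_normal(256)
query /= np.linalg.norm(query)

scores = archive @ query                               # one matrix-vector product
top10 = np.argsort(scores)[::-1][:10]                  # best-matching segments
```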
25
Query-by-Example STD
Baseline: Naïve Encoder [Tu & Lee, ASRU 11][I.-F. Chen, Interspeech 13]
[Figure: structure of the naïve encoder.]
26
Query-by-Example STD ─ English
1K queries to retrieve 250K audio segments. The evaluation measure is Mean Average Precision (MAP); the larger, the better.
[Figure: MAP of Audio Word2Vec vs. the Naïve Encoder. DTW is not tractable here because of the computational cost.]
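MAP is the mean, over all queries, of the average precision of each query's ranked result list; a small self-checking sketch (the segment ids and ranking are made up):

```python
def average_precision(relevant, ranked):
    """Average precision for one query: `ranked` is the retrieved list,
    `relevant` the set of correct segment ids."""
    hits, total = 0, 0.0
    for rank, seg in enumerate(ranked, start=1):
        if seg in relevant:
            hits += 1
            total += hits / rank                # precision at this hit
    return total / max(len(relevant), 1)

# MAP is the mean of average_precision over all queries.
ap = average_precision({2, 5}, [5, 1, 2, 7])    # hits at ranks 1 and 3
assert abs(ap - (1/1 + 2/3) / 2) < 1e-9
```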
27
Query-by-Example STD ─ Language Transfer
1K queries to retrieve 20K audio segments.
[Figure: MAP on FRE, GRE, CZE and ESP for the Naïve Encoder and for Audio Word2Vec trained on the target language (4K segments).]
The performance of Audio Word2Vec is poor with such limited training data.
28
Query-by-Example STD ─ Language Transfer
[Figure: MAP on FRE, GRE, CZE and ESP for the Naïve Encoder, Audio Word2Vec trained on the target language (4K segments), and Audio Word2Vec trained on English (2.2M segments).]
Audio Word2Vec learned on English can be directly applied to French and German.
29
Query-by-Example STD ─ Language Transfer
[Figure: MAP on FRE, GRE, CZE and ESP for the Naïve Encoder, Audio Word2Vec trained on the target language (4K segments), Audio Word2Vec trained on English (2.2M segments), and Audio Word2Vec trained on English then fine-tuned on the target language (2K segments).]
Fine-tuning the English model on the target language is helpful.
30
Outline
- Introduction
- Training of Audio Word2Vec
- Language Transfer
- Application to Query-by-example Spoken Term Detection
- Concluding Remarks
31
Concluding Remarks
- We verified the language-transfer capability of Audio Word2Vec.
- Audio Word2Vec learned from English captures the phonetic information of other languages.
- In query-by-example STD, Audio Word2Vec learned from English outperformed the baselines on French and German.
32
SEGMENTAL AUDIO WORD2VEC
Session: Spoken Language Acquisition and Retrieval Time: Wednesday, April 18, 16: :00