Context and Learning in Multilingual Tone and Pitch Accent Recognition
Gina-Anne Levow, University of Chicago
May 18, 2007


Roadmap
- Challenges for Tone and Pitch Accent
  - Contextual effects
  - Training demands
- Modeling Context for Tone and Pitch Accent
  - Data collections & processing
  - Integrating context
  - Context in recognition
- Asides: more tones and features
- Reducing Training Demands
  - Data collections & structure
  - Semi-supervised learning
  - Unsupervised clustering
- Conclusion

Challenges: Context
- Tone and pitch accent recognition
  - Key component of language understanding
    - Lexical tone carries word meaning
    - Pitch accent carries semantic, pragmatic, and discourse meaning
- Non-canonical form (Shen 90, Shih 00, Xu 01)
  - Tonal coarticulation modifies surface realization
    - In extreme cases, a fall becomes a rise
- Tone is relative
  - To speaker range: high for a male speaker may be low for a female speaker
  - To phrase range and other tones, e.g. downstep

Challenges: Training Demands
- Tone and pitch accent recognition
  - Exploits data-intensive machine learning
    - SVMs (Thubthong 01, Levow 05, SLX05)
    - Boosted and bagged decision trees (X. Sun 02)
    - HMMs (Wang & Seneff 00, Zhou et al 04, Hasegawa-Johnson et al 04, ...)
  - Can achieve good results with huge sample sets
    - SLX05: ~10K lab syllable samples -> >90% accuracy
- Training data is expensive to acquire
  - Time: pitch accent labeling takes tens of times real time
  - Money: requires skilled labelers
  - Limits investigation across domains, styles, etc.
- Human language acquisition doesn't use labels

Strategy: Overall
- Common model across languages
  - Common machine learning classifiers
  - Acoustic-prosodic model
    - No word label, POS, or lexical stress information
    - No explicit tone label sequence model
  - English, Mandarin Chinese, isiZulu (also Cantonese)

Strategy: Context
- Exploit contextual information
  - Features from adjacent syllables
    - Height, shape: direct and relative
  - Compensate for phrase contour
  - Analyze the impact of context position, context encoding, and context type
- >12.5% reduction in error over no context

Data Collections: I
- English (Ostendorf et al 95)
  - Boston University Radio News Corpus, speaker f2b
  - Manually ToBI-annotated, aligned, syllabified
  - Pitch accent aligned to syllables
    - Classes: Unaccented, High, Downstepped High, Low (Sun 02, Ross & Ostendorf 95)

Data Collections: II
- Mandarin
  - TDT2 Voice of America Mandarin Broadcast News
  - Automatically force-aligned to anchor scripts
    - Automatically segmented; pinyin pronunciation lexicon
    - Manually constructed pinyin-ARPABET mapping
    - CU Sonic, via language porting
  - Classes: High, Mid-rising, Low, High-falling, Neutral

Data Collections: III
- isiZulu (Govender et al 2005)
  - Sentence text collected from the Web, selected for grapheme bigram variation
  - Read by a male native speaker
  - Manually aligned and syllabified
  - Tone labels assigned by a second native speaker, based only on the utterance text
  - Classes: High, Low

Local Feature Extraction
- Uniform representation for tone and pitch accent
  - Motivated by the Pitch Target Approximation Model
    - Tone/pitch accent target exponentially approached
    - Linear target: height, slope (Xu et al 99)
- Base features:
  - Pitch and intensity max, mean, min, range (Praat, speaker-normalized)
  - Pitch at 5 points across the voiced region
  - Duration
  - Initial/final position in phrase
- Slope: linear fit to the last half of the pitch contour
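As a concrete illustration, the base features above might be computed as follows. This is a minimal sketch assuming the syllable's speaker-normalized pitch and intensity contours arrive as NumPy arrays (the actual extraction used Praat); the function name and feature keys are hypothetical.

```python
import numpy as np

def local_features(pitch, intensity, duration):
    """Sketch of per-syllable base features: pitch/intensity statistics,
    five evenly spaced pitch samples, and a slope fit to the last half
    of the contour (per the pitch target approximation model)."""
    feats = {
        "pitch_max": pitch.max(), "pitch_mean": pitch.mean(),
        "pitch_min": pitch.min(), "pitch_range": pitch.max() - pitch.min(),
        "int_max": intensity.max(), "int_mean": intensity.mean(),
        "int_min": intensity.min(), "int_range": intensity.max() - intensity.min(),
        "duration": duration,
    }
    # Pitch at 5 evenly spaced points across the voiced region
    idx = np.linspace(0, len(pitch) - 1, 5).round().astype(int)
    for i, j in enumerate(idx):
        feats[f"pitch_{i}"] = pitch[j]
    # Slope: linear fit to the last half of the pitch contour
    half = pitch[len(pitch) // 2:]
    feats["slope"] = np.polyfit(np.arange(len(half)), half, 1)[0]
    return feats
```

A rising contour would yield a positive slope, a falling one a negative slope; initial/final-in-phrase flags would be appended from the alignment.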

Context Features
- Local context:
  - Extended features: pitch max, mean, and adjacent points of the preceding and following syllables
  - Difference features: differences in pitch max, mean, mid, and slope, and in intensity max and mean, between the preceding/following syllable and the current one
- Phrasal context:
  - Compute the collection-average phrase slope
  - Compute scalar pitch values, adjusted for that slope
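The two local-context encodings can be sketched as below: each syllable's vector is augmented with differences from, and direct copies of, its neighbors' statistics. The feature keys are hypothetical stand-ins for the paper's features.

```python
def context_features(prev, cur, nxt):
    """Sketch of local context features. Each argument is a dict of
    per-syllable base features for the preceding, current, and
    following syllable."""
    feats = {}
    # Difference encoding: current minus neighboring syllable values
    diff_keys = ["pitch_max", "pitch_mean", "pitch_mid", "slope",
                 "int_max", "int_mean"]
    for k in diff_keys:
        feats[f"d_prev_{k}"] = cur[k] - prev[k]
        feats[f"d_next_{k}"] = cur[k] - nxt[k]
    # Extended encoding: copy selected neighbor values directly
    for k in ["pitch_max", "pitch_mean"]:
        feats[f"prev_{k}"] = prev[k]
        feats[f"next_{k}"] = nxt[k]
    return feats
```

The experiments below vary which of these groups (preceding only, following only, both; extended vs. difference) are appended to the base vector.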

Classification Experiments
- Classifier: support vector machine
  - Linear kernel, multiclass formulation
  - SVMlight (Joachims), LibSVM (Chang & Lin 01)
  - 4:1 training/test splits
- Experiments: effects of
  - Context position: preceding, following, none, both
  - Context encoding: extended vs. difference
  - Context type: local, phrasal
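The setup can be sketched with scikit-learn's linear-kernel SVC (a LibSVM wrapper) standing in for the SVMlight/LibSVM tools named above; the synthetic feature vectors below are placeholders for the acoustic features, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for syllable feature vectors over 4 tone classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 8)) for c in range(4)])
y = np.repeat(np.arange(4), 100)

# 4:1 training/test split, as in the experiments
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Linear-kernel SVM; LibSVM handles the multiclass formulation
# via one-vs-one pairwise classifiers internally
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Swapping in the context feature variants and comparing `accuracy` across them mirrors the experimental design.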

Results: Local Context

Context        | Mandarin Tone | English Pitch Accent | isiZulu Tone
---------------|---------------|----------------------|-------------
Full           | 74.5%         | 81.3%                | 75.9%
Extend PrePost | 74%           | 80.7%                | 73.8%
Extend Pre     | 74%           | 79.9%                | 73.6%
Extend Post    | 70.5%         | 76.7%                | 72.3%
Diffs PrePost  | 75.5%         | 80.7%                | 75.8%
Diffs Pre      | 76.5%         | 79.5%                | 75.5%
Diffs Post     | 69%           | 77.3%                | 72.8%
Both Pre       | 76.5%         | 79.7%                | 75.5%
Both Post      | 71.5%         | 77.6%                | 72.5%
No context     | 68.5%         | 75.9%                | 72.2%

Discussion: Local Context
- Any context information improves over none
  - Preceding context consistently improves over none or following context
- English/isiZulu: more context features are generally better
- Mandarin: following context can degrade performance
- Little difference between encodings (extended vs. difference)
- Consistent with phonetic analysis (Xu) showing that carryover coarticulation is greater than anticipatory coarticulation

Results & Discussion: Phrasal Context

Phrase Context | Mandarin Tone | English Pitch Accent
---------------|---------------|---------------------
Phrase         | 75.5%         | 81.3%
No Phrase      | 72%           | 79.9%

- Phrase contour compensation enhances recognition
- The strategy is simple; non-linear slope compensation may improve results further
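The compensation step can be sketched as a simple detrending: subtract the collection-average phrase slope from each syllable's pitch value, scaled by its position in the phrase. The units and representation (time offsets from phrase start, a single scalar slope) are assumptions for illustration.

```python
import numpy as np

def compensate_phrase_slope(pitches, times, avg_slope):
    """Sketch of phrase-contour compensation: remove the collection-average
    declination slope (assumed units, e.g. Hz per second) from per-syllable
    pitch values, given each syllable's time offset from the phrase start."""
    pitches = np.asarray(pitches, dtype=float)
    times = np.asarray(times, dtype=float)
    return pitches - avg_slope * times
```

Under a typical declining phrase contour (negative slope), later syllables are raised back toward the phrase-initial level before classification.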

Context: Summary
- Employ a common acoustic representation
  - Tone (Mandarin, isiZulu), pitch accent (English)
  - SVM classifiers, linear kernel: 76%, 76%, 81%
  - Cantonese aside: ~64%; 68% with an RBF kernel
- Local context effects:
  - Up to >20% relative reduction in error
  - Preceding context makes the greatest contribution (carryover vs. anticipatory coarticulation)
- Phrasal context effects:
  - Compensation for phrasal contour improves recognition

Aside: More Tones
- Cantonese: CUSENT corpus of read broadcast news text
  - Same feature extraction and representation
  - 6 tones: high level, high rise, mid level, low fall, low rise, low level
  - SVM classification: linear kernel 64%, Gaussian kernel 68%
    - Tones 3 and 6: mutually indistinguishable (50% pairwise)
      - Human listeners: 50% without context; 68% with context
- Augmenting with the syllable's phone sequence: 86% accuracy
  - For 90% of syllables with tone 3 or 6, one of the two tones dominates

Aside: Voice Quality & Energy (with Dinoj Surendran)
- Assess local voice quality and energy features for tone
  - Not typically associated with tones in Mandarin/isiZulu
- Considered: voice quality (NAQ, AQ, etc.), spectral balance, spectral tilt, band energy
- Useful: band energy significantly improves results
  - Mandarin: helps the neutral tone by supporting identification of unstressed syllables
    - Spectral balance predicts stress in Dutch
  - isiZulu: band energy alone outperforms pitch; in conjunction with pitch -> ~78%
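Band energy features of the kind found useful here can be sketched with an FFT over the syllable's waveform; the band edges below are illustrative choices, not those used in the study.

```python
import numpy as np

def band_energies(signal, sr, bands=((0, 500), (500, 2000), (2000, 4000))):
    """Sketch of band-energy features: total spectral energy in a few
    frequency bands (band edges here are illustrative assumptions)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
```

For a 1 kHz tone, essentially all the energy lands in the middle band, illustrating how the features separate spectral regions.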

Roadmap
- Challenges for Tone and Pitch Accent
  - Contextual effects
  - Training demands
- Modeling Context for Tone and Pitch Accent
  - Data collections & processing
  - Integrating context
  - Context in recognition
- Reducing Training Demands
  - Data collections & structure
  - Semi-supervised learning
  - Unsupervised clustering
- Conclusion

Strategy: Training
- Challenge: can we use the underlying acoustic structure of the language, through unlabeled examples, to reduce the need for expensive labeled training data?
- Exploit semi-supervised and unsupervised learning
  - Semi-supervised Laplacian SVM
  - K-means and asymmetric k-lines clustering
- Both substantially outperform baselines and can approach supervised levels

Data Collections & Processing
- English (as before): Boston University Radio News Corpus, f2b
  - Binary: unaccented vs. accented
  - 4-way: Unaccented, High, Downstepped High, Low
- Mandarin:
  - Lab speech data (Xu 1999): 5-syllable utterances varying tone and focus position (in-focus, pre-focus, post-focus)
  - TDT2 Voice of America Mandarin Broadcast News
  - 4-way: High, Mid-rising, Low, High-falling
- isiZulu (as before): read Web sentences
  - 2-way: High vs. Low

Semi-supervised Learning
- Approach: employ a small amount of labeled data, and exploit information from additional, presumably more available, unlabeled data
  - Few prior examples; several weakly supervised (Wong et al 05)
- Classifier: Laplacian SVM (Sindhwani, Belkin & Niyogi 05)
  - Semi-supervised variant of the SVM that exploits unlabeled examples
  - RBF kernel, typically 6 nearest neighbors, transductive
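Laplacian SVM implementations are not part of standard toolkits; as a rough, hypothetical stand-in, scikit-learn's graph-based LabelSpreading classifier (RBF kernel) illustrates the same idea of letting unlabeled points shape the decision boundary. The data below is synthetic, not the paper's.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Two well-separated clusters standing in for accented/unaccented syllables
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (100, 4)),
               rng.normal(3.0, 0.3, (100, 4))])
y_true = np.repeat([0, 1], 100)

# Keep only a handful of labels; -1 marks unlabeled examples
y = np.full(200, -1)
y[:5] = 0
y[100:105] = 1

# Transductive graph-based learner: labels propagate over the RBF affinity graph
model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, y)
accuracy = (model.transduction_ == y_true).mean()
```

With only 10 labels out of 200 points, the unlabeled structure still recovers both classes, which is the effect the Laplacian SVM exploits.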

Experiments
- Pitch accent recognition:
  - Binary classification: unaccented/accented
  - 1000 instances, proportionally sampled; labeled training: 200 unaccented, 100 accented
  - 80% accuracy (cf. 84% for a supervised SVM with 15x the labeled data)
- Mandarin tone recognition:
  - 4-way classification via n(n-1)/2 binary classifiers
  - 400 instances, balanced; 160 labeled
  - Clean lab speech, in-focus: 94% (cf. 99% for an SVM trained on 1000s of samples; 85% for an SVM with 160 training samples)
  - Broadcast news: 70% (cf. <50% for an SVM with 160 training samples)

Unsupervised Learning
- Question: can we identify the tone structure of a language from the acoustic space without training?
  - Analogous to language acquisition
- Significant recent research in unsupervised clustering
  - Established approaches: k-means; spectral clustering (Shi & Malik 97, Fischer & Poland 2004), e.g. asymmetric k-lines
- Little prior research for tone
  - Self-organizing maps (Gauthier et al 2005): tones identified in lab speech using f0 velocities
  - Cluster-based bootstrapping (Narayanan et al 2006)
  - Prominence clustering (Tamburini 05)

Clustering
- Pitch accent clustering:
  - 4-way distinction; 1000 samples, proportional
  - 2-16 clusters constructed; assign the most frequent class label to each cluster
- Clusterer: asymmetric k-lines
  - Context-dependent kernel radii, non-spherical clusters
  - >78% accuracy; with 2 clusters, asymmetric k-lines is best
- Context effects: a vector with preceding context and a vector with no context perform comparably
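The evaluation step above — labeling each cluster with its most frequent class and scoring against the true labels — can be sketched as follows. Asymmetric k-lines is not available in scikit-learn, so plain k-means (also one of the methods compared here) stands in; the function is a hypothetical sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_accuracy(X, labels, n_clusters):
    """Sketch of the unsupervised evaluation: cluster the data, assign
    each cluster the most frequent true label among its members, score."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    pred = np.empty_like(labels)
    for c in range(n_clusters):
        members = km.labels_ == c
        # Most frequent true label within this cluster
        counts = np.bincount(labels[members], minlength=labels.max() + 1)
        pred[members] = counts.argmax()
    return (pred == labels).mean()
```

On data whose classes form separable clusters, this recovers the labels almost perfectly; the tone experiments measure how far real acoustic structure falls short of that.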

Contrasting Clustering
- Contrasts:
  - Clustering: 3 spectral approaches, which perform spectral decomposition of the affinity matrix
    - Asymmetric k-lines (Fischer & Poland 2004)
    - Symmetric k-lines (Fischer & Poland 2004)
    - Laplacian Eigenmaps (Belkin, Niyogi & Sindhwani 2004) with binary weights and k-lines clustering
  - K-means: standard Euclidean distance
  - Number of clusters: 2-16
- Best results: >78%
  - 2 clusters: asymmetric k-lines; >2 clusters: k-means
  - With larger numbers of clusters, all perform similarly

Contrasting Learners

Tone Clustering: I
- Mandarin four tones; 400 samples, balanced
- 2-phase clustering, 2-5 clusters in each phase
- Asymmetric k-lines and k-means clustering
  - Clean read speech:
    - In-focus syllables: 87% (cf. 99% supervised)
    - In-focus and pre-focus: 77% (cf. 93% supervised)
  - Broadcast news: 57% (cf. 74% supervised)
- K-means requires more clusters to reach the k-lines level

Tone Structure
- The first phase of clustering splits high/rising tones from low/falling tones by slope
- The second phase separates tones by pitch height

Tone Clustering: II
- isiZulu High/Low tones
- 3225 samples, no labels; proportions: ~62% low, 38% high
- K-means clustering, 2 clusters
  - Read speech, Web-based sentences: 70% accuracy (vs. 76% fully supervised)

Conclusions
- Common prosodic framework for tone and pitch accent recognition
- Contextual modeling enhances recognition
  - Local context and broad phrase contour
  - Carryover coarticulation has the larger effect for Mandarin
- Exploiting unlabeled examples for recognition
  - Semi- and unsupervised approaches
  - Best cases approach supervised levels with far less labeled training data
  - Exploits the acoustic structure of the tone and accent space

Current and Future Work
- Interactions of tone and intonation
  - Recognition of topic and turn boundaries
  - Effects of topic and turn cues on tone realization
- Child-directed speech and tone learning
- Support for computer-assisted tone learning
- Structured sequence models for tone
  - Sub-syllable segmentation and modeling
- Feature assessment
  - Band energy and intensity in tone recognition

Thanks
- Dinoj Surendran, Siwei Wang, Yi Xu
- Natasha Govender and Etienne Barnard
- V. Sindhwani, M. Belkin & P. Niyogi; I. Fischer & J. Poland; T. Joachims; C.-C. Chang & C.-J. Lin
- This work was supported by NSF Grant #