Presentation on theme: "Human Language Technologies – Text-to-Speech © 2007 IBM Corporation Sixth Speech Synthesis Workshop, Bonn, Germany.August 22-24, 2007 Automatic Exploration."— Presentation transcript:

1

2 Automatic Exploration of Corpus-Specific Properties for Expressive Text-to-Speech (A Case Study in Emphasis)
Raul Fernandez and Bhuvana Ramabhadran, I.B.M. T.J. Watson Research Center
Human Language Technologies – Text-to-Speech © 2007 IBM Corporation. Sixth Speech Synthesis Workshop, Bonn, Germany, August 22-24, 2007.

3 Outline
- Motivation
- Review of Expressive TTS Architecture
- Expression Mining: Emphasis
- Evaluation

4 Expressive TTS
- We have shown that corpus-based approaches to expressive CTTS can convey expressiveness, provided the corpus is designed to contain the desired expression(s).
- There are, however, shortcomings to this approach:
  - Adding new expressions, or increasing the size of the repository for an existing one, is expensive and time-consuming.
  - The footprint of the system grows as we add new expressions.
- Without abandoning this framework, we propose to partially address these limitations with an approach that exploits the properties of the existing databases to maximize the expressive range of the TTS system.

5 Some observations about data and listeners…
- Production variability: speakers produce subtle expressive variations, even when they are asked to speak in a mostly neutral style.

6 Some observations about data and listeners…
- Production variability: speakers produce subtle expressive variations, even when they are asked to speak in a mostly neutral style. [Figure: Anger, Fear, Sad, Neutral]
- Perceptual confusability/redundancy: several studies have shown that there is overlap in the way listeners interpret the prosodic-acoustic realizations of different expressions.

8 Expression Mining
- Goals: exploit the variability present in a given dataset to increase the expressive range of the TTS engine; augment the corpus-based approach with an expression-mining approach to expressive synthesis.
- Challenge: automatic annotation of instances in the corpus where an expression of interest occurs. (The approach may still require collecting a smaller expression-specific corpus to bootstrap data-driven learning algorithms.)
- Case study: emphasis.

9 Outline
- Motivation
- Review of Expressive TTS Architecture
- Expression Mining: Emphasis
- Evaluation

10 The Expressive Framework of the IBM TTS System
- The IBM Expressive Text-to-Speech system consists of:
  - a rule-based front end for text analysis
  - acoustic models (decision trees) for generating candidate synthesis units
  - prosody models (decision trees) for generating pitch and duration targets
  - a module that carries out a Viterbi search
  - a waveform-generation module that concatenates the selected units
- Expressiveness is achieved in this framework by associating symbolic attribute vectors with the synthesis units. These attribute values influence target prosody generation and unit-search selection (see the sketch below).
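
To make the attribute-vector idea concrete, here is a minimal Python sketch; the class and function names (WordMarkup, SynthesisUnit, propagate_to_units) and their fields are illustrative assumptions, not the IBM engine's internal API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WordMarkup:
    """Word-level annotation supplied with the input text (illustrative)."""
    text: str
    style: str = "default"   # e.g. "good-news", "apologetic", "uncertain"
    emphasis: int = 0        # 0 = not emphasized, 1 = emphasized

@dataclass
class SynthesisUnit:
    """A candidate unit carrying the symbolic attribute vector."""
    phone: str
    style: str = "default"
    emphasis: int = 0

def propagate_to_units(word: WordMarkup, phones: List[str]) -> List[SynthesisUnit]:
    """Copy the word-level attribute annotations down to every unit of the word."""
    return [SynthesisUnit(phone=p, style=word.style, emphasis=word.emphasis)
            for p in phones]

# Example: an emphasized word passes its attributes to each of its phones.
units = propagate_to_units(WordMarkup("love", emphasis=1), ["L", "AH", "V"])
```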

11 Attributes
- Style (the default attribute): Good News, Apologetic, Uncertain, …

12 Attributes
- Style: Good News, Apologetic, Uncertain, …
- Emphasis: 0, 1

13 Attributes
- Style: Good News, Apologetic, Uncertain, …
- Emphasis: 0, 1
- ? (e.g., voice quality = {breathy, …}, etc.)

14 How do attributes influence the search?
- The corpus is tagged a priori.
- At run time:
  - The input is tagged at the word level (e.g., via user-provided mark-up) with annotations indicating the desired attribute.
  - The annotations are propagated down to the unit level.
- A component of the target cost function penalizes label substitutions (a code sketch follows below):

              Neutral   Good news   Bad news
  Neutral       0.0        0.5        0.6
  Good news     0.3        0.0        1.0
  Bad news      0.5        1.0        0.0
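
As a concrete reading of the substitution table, here is a hedged sketch; only the penalty values come from the slide, while the function name and the assumption that rows index the requested (target) label and columns the candidate label are mine.

```python
# Penalty values from the slide; row = requested label, column = candidate label
# (an assumption; the slide does not say which axis is which).
SUBSTITUTION_COST = {
    "neutral":   {"neutral": 0.0, "good news": 0.5, "bad news": 0.6},
    "good news": {"neutral": 0.3, "good news": 0.0, "bad news": 1.0},
    "bad news":  {"neutral": 0.5, "good news": 1.0, "bad news": 0.0},
}

def label_substitution_cost(target_label: str, candidate_label: str) -> float:
    """Target-cost component that penalizes a candidate unit whose expressive
    label differs from the label requested for the target unit."""
    return SUBSTITUTION_COST[target_label][candidate_label]

# Example: requesting a "good news" unit but considering a "bad news" one costs 1.0,
# while keeping the requested label costs nothing.
assert label_substitution_cost("good news", "bad news") == 1.0
assert label_substitution_cost("neutral", "neutral") == 0.0
```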

15 How do attributes influence the search?
- Additionally, the style attribute has style-specific prosody models (for pitch and duration) associated with it, so prosody targets are produced according to the requested style.
- [Diagram: Normalized Text and Target Style enter the model output generation step, which selects among the style-specific prosody models (Style 1, Style 2, Style 3) to produce the Prosody Targets.]
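
A minimal sketch of the routing in the diagram, assuming a plain dictionary of per-style models; the ProsodyModel class, the dictionary, and the fallback-to-neutral behaviour are illustrative assumptions rather than the engine's actual interface.

```python
class ProsodyModel:
    """Stand-in for a style-specific (decision-tree) prosody model."""
    def __init__(self, style: str):
        self.style = style

    def predict(self, normalized_text: str) -> dict:
        # A real model would return per-unit pitch and duration targets;
        # placeholder values are returned here.
        return {"style": self.style, "pitch_targets": [], "duration_targets": []}

PROSODY_MODELS = {s: ProsodyModel(s) for s in ("neutral", "good-news", "apologetic")}

def prosody_targets(normalized_text: str, target_style: str) -> dict:
    """Generate prosody targets with the model matching the requested style,
    falling back to the neutral model if no style-specific model exists."""
    model = PROSODY_MODELS.get(target_style, PROSODY_MODELS["neutral"])
    return model.predict(normalized_text)
```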

16 Outline
- Motivation
- Review of Expressive TTS Architecture
- Expression Mining: Emphasis
- Evaluation

17 Mining Emphasis
- [Diagram: an Emphasis Corpus (~1K sentences) is fed to a Statistical Learner, producing a Trained Emphasis Classifier; the classifier is applied to the Baseline Corpus (~10K sentences) to obtain a Baseline Corpus with Emphasis Labels, which is then used to build a TTS system with emphasis.]
- A code sketch of this pipeline is given below.
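
The pipeline in the diagram can be sketched as follows, assuming word-level prosodic feature matrices have already been extracted; the array names, the function name, and the SVC stand-in are assumptions (the classifier actually used is the stacked scheme described on slide 19).

```python
import numpy as np
from sklearn.svm import SVC

def mine_emphasis(X_emphasis: np.ndarray, y_emphasis: np.ndarray,
                  X_baseline: np.ndarray) -> np.ndarray:
    """Train on the hand-labeled emphasis corpus (~1K sentences), then
    auto-label the words of the baseline corpus (~10K sentences)."""
    classifier = SVC(probability=True)       # stand-in for the stacked classifier
    classifier.fit(X_emphasis, y_emphasis)   # word-level prosodic features + labels
    return classifier.predict(X_baseline)    # emphasis / not-emphasis tags per word
```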

18 Training Materials
- Two sets of recordings, one from a female and one from a male speaker of US English.
- Approximately 1K sentences in the script; approximately 20% of the words in the script carry emphasis.
- Recordings are single channel, 22.05 kHz.
- Examples (emphasized words shown in capitals; see the helper sketch below):
  - To hear DIRECTIONS to this destination say YES.
  - I'd LOVE to hear how it SOUNDS.
  - It is BASED on the information that the company gathers, but not DEPENDENT on it.
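
Assuming the convention visible in the example script, in which emphasized words are written in capitals, a small helper like the one below could recover word-level emphasis labels; both the convention and the helper itself are assumptions for illustration.

```python
def emphasis_labels(script_sentence: str):
    """Return (word, label) pairs; fully upper-case alphabetic tokens are treated
    as emphasized (a bare 'I' would be a false positive under this heuristic)."""
    return [(w, int(w.isupper() and any(c.isalpha() for c in w)))
            for w in script_sentence.split()]

print(emphasis_labels("I'd LOVE to hear how it SOUNDS."))
# [("I'd", 0), ('LOVE', 1), ('to', 0), ('hear', 0), ('how', 0), ('it', 0), ('SOUNDS.', 1)]
```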

19 Modeling Emphasis – Classification Scheme
- Emphasis is modeled at the word level.
- Feature set: prosodic features derived from (i) pitch (absolute and speaker-normalized), (ii) duration, and (iii) energy measures.
- Individual classifiers are trained and their results stacked; this marginally improves the generalization performance estimated through 10-fold cross-validation (a sketch follows below).
- [Diagram: prosodic features feed a k-nearest-neighbor, an SVM, and a naïve Bayes classifier; their intermediate output probabilities are combined into the final output probabilities.]
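
The scheme can be sketched with scikit-learn as below; the three base learners come from the slide, while the meta-learner (logistic regression) and all hyperparameters are assumptions, since the slide does not specify how the intermediate probabilities are combined.

```python
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Word-level emphasis classifier: stack the three learners named on the slide.
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True)),   # probabilities feed the meta-learner
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
    cv=10,
)

# X: word-level prosodic features (pitch, duration, energy); y: emphasis labels.
# scores = cross_val_score(stack, X, y, cv=10)   # 10-fold CV as on the slide
```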

20 Modeling Emphasis – Classification Results

  Male speaker (91.2% correctly classified instances):
    Class          TP Rate   FP Rate   Prec.   F-Meas.
    emphasis        0.82      0.06     0.78     0.80
    not emphasis    0.94      0.18     0.95     0.94

  Female speaker (89.9% correctly classified instances):
    Class          TP Rate   FP Rate   Prec.   F-Meas.
    emphasis        0.80      0.06     0.75     0.77
    not emphasis    0.93      0.18     0.94     0.94

21 What does it find in the corpus?

22 What does it find in the corpus? "I think they will diverge from bonds, and they may even go up."

23 What does it find in the corpus?

24 What does it find in the corpus? "Please say the full name of the person you want to call."

25 What does it find in the corpus?

26 What does it find in the corpus? "There's a long fly ball to deep center field. Going, going. It's gone, a home run."

27 Outline
- Motivation
- Review of Expressive TTS Architecture
- Expression Mining: Emphasis
- Evaluation

28 Listening Tests – Stimuli and Conditions

  Sent. Type   Emphasis in Text?   Synthesis Source
  A            N                   Baseline neutral units
  B            Y                   Training corpus w/ explicit emphasis
  C            Y                   Baseline corpus w/ mined emphasis

- Condition 1 pair: one Type-A sentence vs. one Type-B sentence (in random order).
- Condition 2 pair: one Type-A sentence vs. one Type-C sentence (in random order).

29 Listening Tests – Setup
- Condition 1 (12 pairs): B1 vs A1, A2 vs B2, A3 vs B3, …, B12 vs A12.
- Condition 2 (12 pairs): A1 vs C1, A2 vs C2, C3 vs A3, …, C12 vs A12.
- The 24 pairs are shuffled together to form List 1 (e.g., A2 vs C2, B1 vs A1, A3 vs B3, …, B12 vs A12).
- List 2 contains the same pairs with the order within each pair reversed (e.g., C2 vs A2, A1 vs B1, B3 vs A3, …, A12 vs B12); a construction sketch follows below.
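
A sketch of how such playlists could be generated; the stimulus naming and the use of Python's random module are assumptions, not the authors' actual procedure.

```python
import random

# 12 pairs per condition: A = neutral baseline, B/C = emphatic versions.
cond1 = [(f"A{i}", f"B{i}") for i in range(1, 13)]
cond2 = [(f"A{i}", f"C{i}") for i in range(1, 13)]

pairs = cond1 + cond2
random.shuffle(pairs)                                  # randomize pair order
# Randomize the within-pair presentation order for List 1.
list1 = [(x, y) if random.random() < 0.5 else (y, x) for x, y in pairs]
# List 2 presents the same pairs with the within-pair order reversed.
list2 = [(y, x) for x, y in list1]
```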

30 Listening Tests – Task Description
- A total of 31 participants listen to a playlist (16 to List 1; 15 to List 2).
- For each pair of stimuli, the listeners are asked to select which member of the pair contains emphasis-bearing words.
- No information is given about which words may be emphasized.
- Listeners may opt to listen to a pair repeatedly.

31 Listening Tests – Results

  Condition   Neutral (A)   Emphatic (B/C)
  1             61.6%          38.4%
  2             48.7%          51.3%

32 Conclusions
- When only the limited expressive corpus is considered, listeners actually prefer the neutral baseline. A possible explanation is that biasing the search heavily toward a small corpus introduces artifacts that interfere with the perception of emphasis.
- However, when the small expressive corpus is augmented with automatic annotations, the perception of intended emphasis increases significantly, by 13% (p < 0.001).
- Although further work is needed to reliably convey emphasis, we have demonstrated the advantages of automatically mining the dataset to augment the search space of expressive synthesis units.
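
To illustrate the kind of comparison behind the reported significance, here is a rough two-proportion check. The assumption that all 31 listeners judged 12 pairs in each condition (and hence the vote counts reconstructed from the percentages) is mine, and the slides do not state which statistical test the authors actually used.

```python
from statsmodels.stats.proportion import proportions_ztest

n = 31 * 12                                              # assumed judgments per condition
emphatic_votes = [round(0.513 * n), round(0.384 * n)]    # Condition 2 vs. Condition 1
stat, p_value = proportions_ztest(emphatic_votes, [n, n])
print(f"z = {stat:.2f}, p = {p_value:.5f}")              # small p, consistent with p < 0.001
```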

33 Future Work
- Explore alternative feature sets to improve automatic emphasis classification.
- Extend the proposed framework to automatically detect more complex expressions in a "neutral" database and augment the search space for our expressive systems (e.g., good news, apologies, uncertainty).
- Explore how the perceptual confusion between different labels can be exploited to increase the range of expressiveness of the TTS system.

34 Automatic Exploration of Corpus-Specific Properties for Expressive Text-to-Speech (A Case Study in Emphasis)
Raul Fernandez and Bhuvana Ramabhadran, I.B.M. T.J. Watson Research Center
Human Language Technologies – Text-to-Speech © 2007 IBM Corporation. Sixth Speech Synthesis Workshop, Bonn, Germany, August 22-24, 2007.

