Statistical NLP, Spring 2010
Lecture 8: Speech Signal
Dan Klein – UC Berkeley

Speech in a Slide
Frequency gives pitch; amplitude gives volume. The frequencies at each time slice are processed into observation vectors.
[Figure: waveform and spectrogram of “speech lab”, segmented into the phones s p ee ch l a b, with one observation vector (… a12 a13 a12 a14 a14 …) per time slice]

Articulatory System
Sagittal section of the vocal tract (Techmer 1880): nasal cavity, oral cavity, pharynx, vocal folds (in the larynx), trachea, lungs.
Text from Ohala, Sept 2001, from Sharon Rose slide

Places of Articulation
From front to back: labial, dental, alveolar, post-alveolar/palatal, velar, uvular, pharyngeal, laryngeal/glottal.
Figure thanks to Jennifer Venditti

Labial place
Bilabial: p, b, m. Labiodental: f, v.
Figure thanks to Jennifer Venditti

Coronal place
Dental: th/dh. Alveolar: t/d/s/z/l/n. Post-alveolar/palatal: sh/zh/y.
Figure thanks to Jennifer Venditti

Dorsal Place
Velar: k/g/ng. (Further back: uvular, pharyngeal.)
Figure thanks to Jennifer Venditti

Space of Phonemes
Standard International Phonetic Alphabet (IPA) chart of consonants

Manner of Articulation
In addition to varying by place, sounds vary by manner:
Stop: complete closure of the articulators; no air escapes via the mouth.
Oral stop: the velum (soft palate) is raised (p, t, k, b, d, g).
Nasal stop: oral closure, but the velum is lowered (m, n, ng).
Fricatives: substantial closure, turbulent airflow (f, v, s, z).
Approximants: slight closure, sonorant (l, r, w).
Vowels: no closure, sonorant (i, e, a).

Oral vs. Nasal Sounds Figure from Jong-bok Kim

Vowels
[Figure: spectral shapes of the vowels IY, AA, UW. From Eric Keller]

Vowel Space

“She just had a baby”
What can we learn from a wavefile?
No gaps between words (!)
Vowels are voiced, long, loud. Length in time = length in space in the waveform picture.
Voicing: regular peaks in amplitude. When stops are closed: no peaks, silence.
Peaks = voicing: .46 to .58 (vowel [iy]), from .65 to .74 (vowel [ax]), and so on.
Silence of stop closure: 1.06 to 1.08 for the first [b], 1.26 to 1.28 for the second [b].
Fricatives like [sh]: intense irregular pattern; see .33 to .46.

Non-Local Cues pat pad bad spat Example from Ladefoged

Simple Periodic Waves of Sound
Y axis: amplitude = amount of air pressure at that point in time. Zero is normal air pressure; negative is rarefaction.
X axis: time.
Frequency = number of cycles per second: 20 cycles in .02 seconds = 1000 cycles/second = 1000 Hz.

Complex Waves: 100Hz+1000Hz
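As a small illustration (not part of the original slides), the complex wave in this figure can be synthesized by summing two sinusoids; the sample rate, duration, and relative amplitudes below are assumptions:

```python
import numpy as np

sr = 16000                         # assumed sample rate, in Hz
t = np.arange(0, 0.02, 1.0 / sr)   # 20 ms of time points

# Sum of a 100 Hz and a 1000 Hz sinusoid (relative amplitudes assumed).
wave = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
```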

Spectrum
Frequency components (100 and 1000 Hz) on the x-axis (in Hz), amplitude on the y-axis.

Spectrum of an Actual Soundwave

Waveforms for Speech
Waveform of the vowel [iy].
Frequency: repetitions/second of a wave. The vowel above has 28 repetitions in .11 seconds, so its frequency is 28/.11 ≈ 255 Hz.
This is the speed at which the vocal folds vibrate, hence voicing.
Amplitude (y-axis): amount of air pressure at that point in time. Zero is normal air pressure; negative is rarefaction.

Part of the [ae] waveform from “had”
Note the complex wave repeating nine times in the figure, plus smaller waves that repeat 4 times for every large pattern.
The large wave has a frequency of 250 Hz (9 times in .036 seconds).
The small wave is roughly 4 times this, or roughly 1000 Hz.
There are two tiny waves on top of each peak of the 1000 Hz waves.

Back to Spectra
A spectrum represents these frequency components. It is computed by the Fourier transform, an algorithm that separates out each frequency component of a wave.
The x-axis shows frequency; the y-axis shows magnitude (in decibels, a log measure of amplitude).
Peaks at 930 Hz, 1860 Hz, and 3020 Hz.
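A minimal sketch of computing such a spectrum with NumPy’s FFT, reusing the synthetic 100 Hz + 1000 Hz wave from above (the window choice and sample rate are assumptions):

```python
import numpy as np

sr = 16000                                   # assumed sample rate, in Hz
t = np.arange(0, 0.05, 1.0 / sr)
wave = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

windowed = wave * np.hamming(len(wave))      # taper edges before the FFT
spectrum = np.fft.rfft(windowed)             # complex frequency components
freqs = np.fft.rfftfreq(len(wave), d=1.0 / sr)            # x-axis: Hz
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-10)    # y-axis: dB
```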

Why these Peaks?
Articulatory process: the vocal fold vibrations create harmonics, and the mouth is an amplifier. Depending on the shape of the mouth, some harmonics are amplified more than others.

Vowel [i] sung at successively higher pitches F#2 A2 C3 F#3 A3 C4 (middle C) A4 Figures from Ratree Wayland

Deriving Schwa
Reminder of basic facts about sound waves: f = c/λ, where c = speed of sound (approx. 35,000 cm/sec).
A sound with λ = 10 meters: f = 35 Hz (35,000/1,000).
A sound with λ = 2 centimeters: f = 17,500 Hz (35,000/2).

Resonances of the Vocal Tract
The human vocal tract as an open tube: closed at the glottal end, open at the lip end, length 17.5 cm.
Air in a tube of a given length will tend to vibrate at the resonance frequency of the tube.
Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end.
Figure from W. Barry

From Sundberg

Computing the 3 Formants of Schwa
Let the length of the tube be L.
F1 = c/λ1 = c/(4L) = 35,000/(4 × 17.5) = 500 Hz
F2 = c/λ2 = c/(4L/3) = 3c/(4L) = 3 × 35,000/(4 × 17.5) = 1500 Hz
F3 = c/λ3 = c/(4L/5) = 5c/(4L) = 5 × 35,000/(4 × 17.5) = 2500 Hz
So we expect a neutral vowel to have 3 resonances, at 500, 1500, and 2500 Hz. These vowel resonances are called formants.
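These numbers fall out of the closed-open tube model directly; a quick check in Python, using the slide’s values for c and L:

```python
# Closed-open tube resonances: wavelength_n = 4L / (2n - 1),
# so F_n = (2n - 1) * c / (4L).
c = 35000.0   # speed of sound, cm/sec (value from the slide)
L = 17.5      # vocal tract length, cm (value from the slide)

for n in (1, 2, 3):
    print(f"F{n} = {(2 * n - 1) * c / (4 * L):.0f} Hz")   # 500, 1500, 2500
```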

From Mark Liberman’s Web site

Seeing Formants: the Spectrogram

Vowel Space

American English Vowel Space
[Figure: American English vowels (iy, ih, ey, eh, ae, aa, ao, uw, uh, ah, ax, ix, ux, ow, aw, oy, ay) arranged on FRONT–BACK and HIGH–LOW axes.]
Figures from Jennifer Venditti, H. T. Bunnell

Dialect Issues
Speech varies from dialect to dialect (examples here: American vs. British English):
Syntactic (“I could” vs. “I could do”)
Lexical (“elevator” vs. “lift”)
Phonological
Phonetic
A mismatch between training and testing dialects can cause a large increase in error rate.
[Figure: American vs. British spectrograms of “all old”]

How to Read Spectrograms
bab: closure of the lips lowers all formants, so there is a rapid increase in all formants at the beginning of “bab”.
dad: the first formant increases, but F2 and F3 fall slightly.
gag: F2 and F3 come together; this is a characteristic of velars. Formant transitions take longer in velars than in alveolars or labials.
From Ladefoged, “A Course in Phonetics”

“She came back and started again”
1. lots of high-frequency energy
3. closure for k
4. burst of aspiration for k
5. ey vowel; the faint 1100 Hz formant is nasalization
6. bilabial nasal
7. short b closure; voicing barely visible
8. ae; note the upward transitions after the bilabial stop at the beginning
9. note F2 and F3 coming together for “k”
From Ladefoged, “A Course in Phonetics”

The Noisy Channel Model
Search through the space of all possible sentences; pick the one that is most probable given the waveform.
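The transcript drops the slide’s equations; in the standard noisy-channel formulation, this search is

\hat{W} = \arg\max_W P(W \mid O) = \arg\max_W P(O \mid W)\, P(W)

where O is the sequence of acoustic observations, P(O | W) is the acoustic model, and P(W) is the language model (P(O) is constant across candidate sentences, so it drops out of the argmax).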

Speech Recognition Architecture

Digitizing Speech

Frame Extraction
A frame (25 ms wide) is extracted every 10 ms, giving a sequence of overlapping frames a1, a2, a3, …
Figure from Simon Arnfield
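A minimal framing sketch in NumPy (the function name and defaults are illustrative, not from the slides):

```python
import numpy as np

def extract_frames(signal, sr, width_ms=25, step_ms=10):
    """Slice a waveform into overlapping frames: 25 ms wide, one every 10 ms."""
    width = int(sr * width_ms / 1000)
    step = int(sr * step_ms / 1000)
    return np.stack([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])
```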

Mel Freq. Cepstral Coefficients
Do an FFT to get spectral information (like the spectrogram/spectrum we saw earlier).
Apply Mel scaling: linear below 1 kHz, log above, with equal numbers of samples above and below 1 kHz. This models the human ear, which is more sensitive at lower frequencies.
Plus a Discrete Cosine Transformation.
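For reference, a library such as librosa bundles this whole pipeline; a sketch (the file path is a placeholder, and the coefficient count is an assumption: 13 coefficients ≈ 12 cepstral features plus an energy-like c0, matching the next slide):

```python
import librosa

# "speech.wav" is a placeholder path, not a file from the slides.
y, sr = librosa.load("speech.wav", sr=16000)

# FFT -> mel filterbank -> log -> DCT, as described above.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
```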

Final Feature Vector
39 (real) features per 10 ms frame:
12 MFCC features
12 Delta MFCC features
12 Delta-Delta MFCC features
1 (log) frame energy
1 Delta (log) frame energy
1 Delta-Delta (log) frame energy
So each frame is represented by a 39-dimensional vector.
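Continuing the hypothetical librosa sketch above, the deltas and delta-deltas can be stacked to form the 39-dimensional vector:

```python
import numpy as np
import librosa

delta = librosa.feature.delta(mfcc)             # first-order differences
delta2 = librosa.feature.delta(mfcc, order=2)   # second-order differences
features = np.vstack([mfcc, delta, delta2])     # shape: (39, n_frames)
```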

HMMs for Speech

Phones Aren’t Homogeneous

Need to Use Subphones

A Word with Subphones

Viterbi Decoding

ASR Lexicon: Markov Models

HMMs for Continuous Observations?
Before: a discrete, finite set of observations. Now: spectral feature vectors are real-valued!
Solution 1: discretization.
Solution 2: continuous emission models: Gaussians, multivariate Gaussians, mixtures of multivariate Gaussians.
A state is, progressively:
a context-independent subphone (~3 per phone),
a context-dependent phone (= triphone),
a state-tying of CD phones.

Vector Quantization
Idea: discretization. Map MFCC vectors onto discrete symbols, then compute probabilities just by counting. This is called Vector Quantization, or VQ.
Not used for ASR any more; too simple. But useful to consider as a starting point.
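A toy VQ sketch using SciPy’s k-means clustering (the codebook size of 256 is an assumption, and `features` is the hypothetical MFCC matrix from the earlier sketch, frames as columns):

```python
from scipy.cluster.vq import kmeans, vq, whiten

obs = whiten(features.T)           # one row per frame, unit variance per dim
codebook, _ = kmeans(obs, 256)     # learn 256 codewords (assumed size)
symbols, _ = vq(obs, codebook)     # one discrete symbol per frame
# Emission probabilities can now be estimated by counting symbols per state.
```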

Gaussian Emissions
VQ is insufficient for real ASR. Instead: assume the possible values of the observation vectors are normally distributed, and represent the observation likelihood function as a Gaussian with mean μj and variance σj².

Gaussians for Acoustic Modeling
A Gaussian is parameterized by a mean and a variance.
[Figure: P(o|q) as a function of o for different means; P(o|q) is highest at the mean and low far from the mean.]
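Written out (the standard univariate form; the slide’s equation did not survive the transcript):

b_j(o_t) = \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\!\left( -\frac{(o_t - \mu_j)^2}{2\sigma_j^2} \right)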

Multivariate Gaussians
Instead of a single mean μ and variance σ²: a vector of means μ and a covariance matrix Σ.
Usually we assume diagonal covariance. This isn’t very true for FFT features, but is fine for MFCC features.
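In D dimensions the density becomes (standard form, restored since the slide’s equation is not in the transcript):

N(o; \mu, \Sigma) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (o - \mu)^\top \Sigma^{-1} (o - \mu) \right)

With a diagonal Σ, this reduces to a product of D univariate Gaussians, one per feature dimension.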

Gaussian Intuitions: Size of Σ
μ = [0 0] in all three plots; Σ = I, Σ = 0.6I, Σ = 2I.
As Σ becomes larger, the Gaussian becomes more spread out; as Σ becomes smaller, the Gaussian becomes more compressed.
Text and figures from Andrew Ng’s lecture notes for CS229

Gaussians: Off-Diagonal
As we increase the off-diagonal entries, there is more correlation between the value of x and the value of y.
Text and figures from Andrew Ng’s lecture notes for CS229

In two dimensions
From Chen, Picheny et al lecture slides

But we’re not there yet
A single Gaussian may do a bad job of modeling the distribution in any dimension. Solution: mixtures of Gaussians.
Figure from Chen, Picheny et al slides

Mixtures of Gaussians
With M Gaussian mixture components per state, the output likelihood of state j is (standard form, restored since the slide’s equations are not in the transcript):

b_j(o_t) = \sum_{m=1}^{M} c_{jm} \, N(o_t; \mu_{jm}, \Sigma_{jm})

For diagonal covariance, this factors over the D feature dimensions:

b_j(o_t) = \sum_{m=1}^{M} c_{jm} \prod_{d=1}^{D} \frac{1}{\sqrt{2\pi\sigma_{jmd}^2}} \exp\!\left( -\frac{(o_{td} - \mu_{jmd})^2}{2\sigma_{jmd}^2} \right)

GMMs
Summary: each state has a likelihood function parameterized by:
M mixture weights
M mean vectors of dimensionality D
Either M covariance matrices of size D×D, or (more likely) M diagonal covariance matrices, which are equivalent to M variance vectors of dimensionality D
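A minimal sketch of evaluating a diagonal-covariance GMM state likelihood in NumPy (parameter shapes as on this slide; working in the log domain for numerical stability):

```python
import numpy as np

def gmm_log_likelihood(o, weights, means, variances):
    """log b_j(o) for one D-dim frame o under an M-component diagonal GMM.

    weights: (M,) mixture weights; means, variances: (M, D) arrays.
    """
    # Per-component log N(o; mu_m, diag(var_m)), summed over dimensions.
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
    log_exp = -0.5 * np.sum((o - means) ** 2 / variances, axis=1)
    log_terms = np.log(weights) + log_norm + log_exp
    # log-sum-exp over the M components.
    m = log_terms.max()
    return m + np.log(np.sum(np.exp(log_terms - m)))
```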

Training Mixture Models: Forced Alignment
Computing the “Viterbi path” over the training data is called “forced alignment”.
We know which word string to assign to each observation sequence; we just don’t know the state sequence.
So we constrain the path to go through the correct words, and otherwise do normal Viterbi.
Result: a state sequence!
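A compact Viterbi sketch in NumPy; for forced alignment, the transition matrix would be restricted to the states of the known word sequence (all names here are illustrative):

```python
import numpy as np

def viterbi(log_A, log_B):
    """Most likely state path. log_A: (S, S) transition log-probs;
    log_B: (T, S) per-frame emission log-probs (e.g. from a GMM)."""
    T, S = log_B.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_B[0]                          # uniform start assumed
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # prev state -> cur state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                            # one state index per frame
```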

Modeling phonetic context
The same phone [iy] looks different in different contexts: w iy, r iy, m iy, n iy.

“Need” with triphone models

Implications of Cross-Word Triphones
Possible triphones: 50 × 50 × 50 = 125,000.
How many triphone types actually occur? In a 20K-word WSJ task (from Bryan Pellom):
Word-internal models: 14,300 triphones needed.
Cross-word models: 54,400 triphones needed.
But only 22,800 triphones occur in the training data, so we need to generalize models.

State Tying / Clustering [Young, Odell, Woodland 1994]
How do we decide which triphones to cluster together? Use phonetic features (or “broad phonetic classes”): stop, nasal, fricative, sibilant, vowel, lateral.

State Tying
Creating CD phones:
Start with monophones and do EM training.
Clone Gaussians into triphones.
Build a decision tree and cluster Gaussians.
Clone and train mixtures (GMMs).

Simple Periodic Waves
Characterized by: period T, amplitude A, and phase φ.
Fundamental frequency: F0 = 1/T (one cycle per period T).