Julia Hirschberg and Sarah Ita Levitan CS 6998


Acoustics of Speech Julia Hirschberg and Sarah Ita Levitan CS 6998 5/7/2019

Goal 1: Distinguishing One Phoneme from Another, Automatically ASR: Did the caller say ‘I want to fly to Newark’ or ‘I want to fly to New York’? Forensic Linguistics: Did the accused say ‘Kill him’ or ‘Bill him’? What evidence is there in the speech signal? How accurately and reliably can we extract it?

Goal 2: Determining How Things Are Said How something is said is sometimes critical to understanding it: intonation. Forensic Linguistics: ‘Kill him!’ or ‘Kill him?’ TTS: should ‘Are you leaving tomorrow’ be generated as a statement (‘.’) or a question (‘?’)? What information do we need to extract from/generate in the speech signal? What tools do we have to do this?

Today and Next Class How do we define cues to segments and intonation? Fundamental frequency (pitch) Amplitude/energy (loudness) Spectral features Timing (pauses, rate) Voice quality How do we extract them? Praat openSMILE libROSA

Outline Sound production Capturing speech for analysis Feature extraction Spectrograms

Sound Production Pressure fluctuations in the air caused by a musical instrument, a car horn, a voice… Sound waves propagate through a medium such as air (like ripples from a stone dropped in a lake) Cause eardrum (tympanum) to vibrate Auditory system translates into neural impulses Brain interprets as sound We plot sounds as change in air pressure over time From a speech-centric point of view, sound not produced by the human voice is noise Ratio of speech-generated sound to other simultaneous sound: signal-to-noise ratio

Simple periodic waves Characterized by frequency and amplitude Frequency (f) – cycles per second (Hz) Amplitude – maximum value on the y axis Period (T) – time it takes for one cycle to complete; f = 1/T
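These three quantities can be illustrated with a short NumPy sketch (the 100 Hz frequency, 8 kHz sampling rate, and 0.5 amplitude are arbitrary choices for illustration):

```python
import numpy as np

# A simple periodic wave: a sine tone.
fs = 8000          # sampling rate (samples per second)
f = 100.0          # frequency: cycles per second (Hz)
A = 0.5            # amplitude: maximum value on the y axis
T = 1.0 / f        # period: time for one cycle (here 0.01 s)

t = np.arange(fs) / fs              # one second of sample times
wave = A * np.sin(2 * np.pi * f * t)

print(T)           # 0.01
print(wave.max())  # ~0.5, the amplitude
```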

Speech sound waves: + compression, 0 normal air pressure, − rarefaction (decompression)

How do we capture speech for analysis? Recording conditions Quiet office, sound booth, anechoic chamber

How do we capture speech for analysis? Recording conditions Microphones convert sounds into electrical current: oscillations of air pressure become oscillations of voltage in an electric circuit Analog devices (e.g. tape recorders) store these as a continuous signal Digital devices (e.g. computers, DAT) first convert continuous signals into discrete signals (digitizing)

Sampling Sampling rate: how often do we need to sample? At least 2 samples per cycle to capture the periodicity of a waveform component at a given frequency A 100 Hz waveform needs 200 samples per sec Nyquist frequency: highest-frequency component captured with a given sampling rate (half the sampling rate) – e.g. 8K sampling rate (telephone speech) captures frequencies up to 4K

Sampling

Sampling/storage tradeoff Human hearing: ~20K Hz top frequency Do we really need to store 40K samples per second of speech? Telephone speech: 300–4K Hz (8K sampling) But some speech sounds (e.g. fricatives, stops) have energy above 4K… 44K (CD-quality audio) vs. 16–22K (usually good enough to study pitch, amplitude, duration, …) Golden Ears…

What your phone is not telling you High frequency samples

The Mosquito Listen (at your own risk)

Sampling Errors Aliasing: signal frequencies above the Nyquist frequency are misrepresented as lower frequencies Solutions: Increase the sampling rate Filter out frequencies above half the sampling rate (anti-aliasing filter)
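Aliasing can be demonstrated directly: sampling a tone above the Nyquist frequency folds it back into the representable range (the 5 kHz tone and 8 kHz sampling rate are arbitrary illustration values):

```python
import numpy as np

fs = 8000                           # sampling rate; Nyquist frequency = 4000 Hz
t = np.arange(fs) / fs              # one second of sample times
x = np.sin(2 * np.pi * 5000 * t)    # a 5 kHz tone: above Nyquist

# Find the dominant frequency in the sampled signal.
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1/fs)
peak_hz = freqs[np.argmax(mag)]
print(peak_hz)   # ~3000: the 5 kHz tone aliases to 8000 - 5000 = 3000 Hz
```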

Quantization Measuring the amplitude at sampling points: what resolution to choose? Integer representation 8, 12 or 16 bits per sample Noise due to quantization steps is reduced by higher resolution – but requires more storage How many different amplitude levels do we need to distinguish? Choice depends on data and application (44K 16-bit stereo requires ~10MB of storage per minute)
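A rough sketch of both points, storage cost and quantization noise (the uniform rounding quantizer here is a simplification for illustration, not how any particular codec works):

```python
import numpy as np

# Storage estimate: CD-quality stereo audio.
fs, bits, channels = 44100, 16, 2
bytes_per_minute = fs * (bits // 8) * channels * 60
print(bytes_per_minute / 1e6)   # ~10.6 MB per minute

# Uniform quantization of a [-1, 1] signal to n bits.
def quantize(x, n_bits):
    levels = 2 ** (n_bits - 1)              # signed integer range
    return np.round(x * (levels - 1)) / (levels - 1)

x = np.sin(2 * np.pi * np.linspace(0, 1, 100))
err8 = np.max(np.abs(x - quantize(x, 8)))
err16 = np.max(np.abs(x - quantize(x, 16)))
print(err8 > err16)   # more bits -> smaller quantization error
```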

Clipping occurs when input volume (i.e. amplitude of signal) is greater than the range that can be represented. Watch for this when you are recording for TTS! Solutions: Increase the resolution Decrease the amplitude

Clipping

WAV format Many formats, with trade-offs in compression and quality SoX can be used to convert between speech formats
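A minimal example with Python's standard-library wave module, writing and re-reading a WAV file (the file name and tone parameters are invented for illustration; SoX is the command-line tool for converting between formats and sample rates):

```python
import math
import os
import struct
import tempfile
import wave

# Write one second of a 440 Hz tone as a 16-bit mono WAV at 16 kHz.
fs = 16000
path = os.path.join(tempfile.mkdtemp(), "tone.wav")

with wave.open(path, "wb") as w:
    w.setnchannels(1)     # mono
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(fs)
    frames = b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * 440 * n / fs)))
        for n in range(fs))
    w.writeframes(frames)

# Read the header back: these are the parameters a converter manipulates.
with wave.open(path, "rb") as r:
    rate, width, nframes = r.getframerate(), r.getsampwidth(), r.getnframes()
print(rate, width, nframes)   # 16000 2 16000
```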

Frequency How fast are the vocal folds vibrating? Vocal folds vibrate for voiced sounds, causing regular peaks in amplitude F0 – fundamental frequency of the waveform Pitch track – plot of F0 over time

Estimating pitch Frequency = cycles per second Simple approach: zero crossings
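A sketch of the zero-crossing approach (it only works well for clean, roughly sinusoidal signals; the 200 Hz test tone and 16 kHz rate are arbitrary):

```python
import numpy as np

def f0_zero_crossings(x, fs):
    """Estimate F0 by counting positive-going zero crossings per second."""
    crossings = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    return crossings * fs / len(x)

fs = 16000
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 200 * t)     # clean 200 Hz "voice"
est = f0_zero_crossings(x, fs)
print(est)                          # within a cycle or two of 200
```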

Autocorrelation Idea: figure out the period between successive cycles of the wave F0 = 1/period Where does one cycle end and another begin? Find the similarity between the signal and a shifted version of itself Period is the chunk of the segment that matches best with the next chunk
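A minimal autocorrelation pitch estimator along these lines (windowing is simplified; the 75-500 Hz search range and the 220 Hz test signal are arbitrary choices, not Praat's algorithm):

```python
import numpy as np

def f0_autocorr(x, fs, fmin=75, fmax=500):
    """Pick the lag (within the allowed pitch range) where the signal
    best matches a shifted copy of itself; F0 = fs / best_lag."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag search range
    best_lag = lo + np.argmax(ac[lo:hi])
    return fs / best_lag

fs = 16000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
est = f0_autocorr(x, fs)
print(est)   # close to the true 220 Hz fundamental
```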

Parameters Window size (chunk size) Step size Frequency range

Errors to watch for in reading pitch tracks: Microprosody: effects of consonants (e.g. /v/) Creaky voice → no pitch track Halving: shortest lag calculated is too long → estimated cycle too long, too few cycles per sec (underestimates pitch) Doubling: shortest lag too short and second half of cycle similar to first → cycle too short, too many cycles per sec (overestimates pitch)

Pitch Halving: the tracked pitch is half the true F0

Pitch Doubling: the tracked pitch is double the true F0

Pitch track

Pitch vs. F0 Pitch is the perceptual correlate of F0 The relationship between pitch and F0 is not linear; human pitch perception is most accurate between 100 Hz and 1000 Hz Mel scale: frequency in mels = 1127 ln(1 + f/700)

Mel-scale Human hearing is not equally sensitive to all frequency bands Less sensitive at higher frequencies, roughly > 1000 Hz I.e. human perception of frequency is non-linear.
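The mel formula above is easy to check numerically; equal steps in Hz shrink in mels as frequency rises, reflecting the reduced sensitivity at high frequencies:

```python
import math

def hz_to_mel(f):
    # Mel scale as given on the slide: mel = 1127 * ln(1 + f/700)
    return 1127.0 * math.log(1.0 + f / 700.0)

# 1000 Hz maps to ~1000 mels; 2000 Hz is far less than 2000 mels.
for f in (100, 1000, 2000, 4000):
    print(f, round(hz_to_mel(f)))
```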

Amplitude Maximum value on y axis of a waveform Amount of air pressure variation: + compression, 0 normal, - rarefaction

Intensity Perceived as loudness Measured in decibels (dB) relative to P0 = auditory threshold pressure = 2 x 10^-5 Pa (20 µPa): dB = 20 log10(P/P0)

Plot of Intensity

How ‘Loud’ are Common Sounds – How Much Pressure Generated?
Event | Pressure (µPa) | dB
Absolute threshold | 20 | 0
Whisper | 200 | 20
Quiet office | 2K | 40
Conversation | 20K | 60
Bus | 200K | 80
Subway | 2M | 100
Thunder | 20M | 120
*DAMAGE* | 200M | 140
(Pa = pascal, dB = decibel; pressures are in micropascals)
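The pattern in the table, each tenfold increase in pressure adding 20 dB, follows from the standard sound-pressure-level formula dB = 20 log10(P/P0) with P0 = 20 µPa; a quick check:

```python
import math

P0 = 20e-6   # auditory threshold pressure: 20 µPa = 2 x 10^-5 Pa

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to the auditory threshold."""
    return 20 * math.log10(pressure_pa / P0)

print(spl_db(20e-6))    # 0 dB   (absolute threshold)
print(spl_db(200e-6))   # 20 dB  (whisper)
print(spl_db(0.02))     # 60 dB  (conversation: 20K µPa)
print(spl_db(200.0))    # 140 dB (damage: 200M µPa)
```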


Voice quality Jitter Shimmer HNR

Voice quality Jitter: random cycle-to-cycle variability in F0 (pitch perturbation) Shimmer: random cycle-to-cycle variability in amplitude (amplitude perturbation)
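Minimal sketches of the two perturbation measures, applied to hypothetical cycle-by-cycle measurements (the period and amplitude values are invented for illustration, and Praat's exact definitions differ in detail):

```python
import numpy as np

def jitter_pct(periods):
    """Local jitter: mean absolute difference between consecutive
    cycle periods, as a percentage of the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_db(amps):
    """Local shimmer: mean absolute dB difference between the peak
    amplitudes of consecutive cycles."""
    amps = np.asarray(amps, dtype=float)
    return np.mean(np.abs(20 * np.log10(amps[1:] / amps[:-1])))

# Hypothetical measurements: periods in ms, peak amplitudes per cycle.
periods = [5.0, 5.1, 4.9, 5.05, 4.95]
amps = [0.80, 0.82, 0.78, 0.81, 0.79]
j = jitter_pct(periods)
s = shimmer_db(amps)
print(j)   # percent jitter
print(s)   # shimmer in dB
```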

Jitter https://homepages.wmich.edu/~hillenbr/501.html

Shimmer https://homepages.wmich.edu/~hillenbr/501.html

Synthetic Continuum Varying in Jitter 0.0% 2.0% 0.2% 2.5% 0.4% 3.0% 0.6% 4.0% 0.8% 5.0% 1.0% 6.0% 1.5% https://homepages.wmich.edu/~hillenbr/501.html

Synthetic Continuum Varying in Shimmer 0.00 dB 1.60 dB 0.20 dB 1.80 dB 0.40 dB 2.00 dB 0.60 dB 2.25 dB 0.80 dB 2.50 dB 1.00 dB 2.75 dB 1.20 dB 3.00 dB 1.40 dB https://homepages.wmich.edu/~hillenbr/501.html

HNR Harmonics-to-Noise Ratio Ratio between the periodic and aperiodic components of a voiced speech segment Periodic: vocal fold vibration Aperiodic: glottal noise Lower HNR → more noise in the signal, perceived as hoarseness, breathiness, or roughness
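One rough way to estimate HNR is from the normalized autocorrelation r at the pitch period, treating r as the periodic energy fraction and 1 − r as the noise fraction, so HNR = 10 log10(r/(1 − r)). A sketch under those assumptions (test signals arbitrary; Praat's estimator is more careful):

```python
import numpy as np

def hnr_db(x, fs, f0):
    """Rough HNR estimate from the normalized autocorrelation
    at the pitch period lag."""
    x = x - x.mean()
    lag = int(round(fs / f0))
    r = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
    r = min(max(r, 1e-6), 1 - 1e-6)      # keep the log well-defined
    return 10 * np.log10(r / (1 - r))

fs = 16000
t = np.arange(8000) / fs
clean = np.sin(2 * np.pi * 200 * t)              # purely periodic voice
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(len(t))  # added glottal-like noise

h_clean = hnr_db(clean, fs, 200)
h_noisy = hnr_db(noisy, fs, 200)
print(h_clean > h_noisy)   # more noise -> lower HNR
```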

Reading waveforms

Fricative

Spectrum Representation of the different frequency components of a wave Computed by a Fourier transform Linear Predictive Coding (LPC) spectrum – smoothed version
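A spectrum computed by a Fourier transform, on a synthetic wave built from three components (frequencies borrowed loosely from the [ae] example on the next slide; amplitudes are arbitrary):

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
# Three frequency components of decreasing strength.
x = (1.0 * np.sin(2 * np.pi * 234 * t)
     + 0.6 * np.sin(2 * np.pi * 936 * t)
     + 0.3 * np.sin(2 * np.pi * 1872 * t))

mag = np.abs(np.fft.rfft(x))              # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1/fs)
top3 = sorted(freqs[np.argsort(mag)[-3:]])
print(top3)   # peaks near 234, 936, and 1872 Hz
```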

Part of [ae] waveform from “had” Complex wave repeated 10 times (~234 Hz) Smaller waves repeated 4 times per large (~936 Hz) Two tiny waves on the peak of 936 Hz waves (~1872 Hz)

Spectrum Represents frequency components Computed by Fourier transform Peaks at 930 Hz, 1860 Hz, and 3020 Hz

Spectrogram

Spectrogram

Source-filter model Why do different vowels have different spectral signatures? Source = glottis Filter = vocal tract When we produce vowels, we change the shape of the vocal tract cavity by placing articulators in particular positions

Reading spectrograms

Mystery Spectrogram

MFCC Sounds that we generate are filtered by the shape of the vocal tract (tongue, teeth, etc.) If we can determine the shape, we can identify the phoneme being produced Mel Frequency Cepstral Coefficients are widely used features in speech recognition

MFCC calculation Frame the signal into short frames Take the Fourier transform of each frame Apply the mel filterbank to the power spectra and sum the energy in each filter Take the log of all filterbank energies Take the DCT of the log filterbank energies Keep DCT coefficients 2–13
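The six steps can be sketched in plain NumPy (a teaching sketch, not a substitute for libROSA or openSMILE; the frame length, hop, and filter counts are conventional but arbitrary choices):

```python
import numpy as np

def mfcc(x, fs, n_fft=512, hop=160, n_mels=26, n_ceps=12):
    """Minimal MFCC following the six steps above."""
    mel = lambda f: 1127.0 * np.log(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (np.exp(m / 1127.0) - 1.0)

    # 1. Frame the signal into short overlapping frames (windowed).
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i*hop : i*hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)

    # 2. Fourier transform of each frame -> power spectrum.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # 3. Mel filterbank: triangular filters equally spaced in mels.
    pts = imel(np.linspace(0, mel(fs / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # 4. Sum energy in each filter; 5. take logs.
    logE = np.log(power @ fbank.T + 1e-10)

    # 6. DCT-II of the log energies; keep coefficients 2-13.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mels), 2 * n + 1) / (2 * n_mels))
    return (logE @ dct.T)[:, 1 : 1 + n_ceps]

fs = 16000
t = np.arange(fs) / fs
feats = mfcc(np.sin(2 * np.pi * 220 * t), fs)
print(feats.shape)   # (number of frames, 12 coefficients)
```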

Next class Download Praat from the link on the course syllabus page Read the Praat tutorial linked from the syllabus Bring a laptop and headphones to class