Speech Processing
AEGIS RET All-Hands Meeting
University of Central Florida
July 20, 2012
Applications of Images and Signals in High Schools

Contributors
– Dr. Veton Këpuska, Faculty Mentor, FIT
– Jacob Zurasky, Graduate Student Mentor, FIT
– Becky Dowell, RET Teacher, BPS Titusville High

Speech Processing Project
Speech recognition requires speech to first be characterized by a set of "features."
Features are used to determine what words are spoken.
Our project implements the feature extraction stage of a speech processing application.

Timeline
– 1874: Alexander Graham Bell demonstrates that frequency harmonics in an electrical signal can be separated
– 1952: Bell Labs develops the first effective speech recognizer
– DARPA sets the goal that speech should be understood, not just recognized
– 1980s: Call center and text-to-speech products become commercially available
– 1990s: PC processing power allows ordinary users to run speech recognition software
Source: Timeline of Speech Recognition

Applications
– Call center speech recognition
– Speech-to-text applications: dictation software, visual voice mail
– Hands-free user interfaces: Siri, OnStar, XBOX Kinect
– Medical applications: Parkinson's Voice Initiative, detection of sleep disorders

Difficulties
– Continuous speech (word boundaries)
– Noise: background, other speakers
– Differences between speakers: dialects/accents, male/female

Speech Recognition
Block diagram: Speech → Front End (pre-processing) → Features → Back End (recognition) → Recognized speech
Front End – reduces the amount of data passed to the back end while keeping enough information to accurately describe the signal; the output is a feature vector (e.g., a 256-sample frame becomes 13 features).
Back End – statistical models classify each feature vector as a particular sound in speech.

Front-End Processing of Speech Recognizer
Stages: Pre-emphasis → Window → FFT → Mel-Scale → log → IFFT
– Pre-emphasis: high-pass filter to compensate for the higher-frequency roll-off in human speech
– Window: separate the speech signal into frames and apply a window to smooth the edges of each framed segment
– FFT: transform the signal from the time domain to the frequency domain; the human ear perceives sound based on frequency content
– Mel-Scale: convert linear-scale frequency (Hz) to a logarithmic scale (the mel scale)
– log: take the log of the magnitudes (multiplication becomes addition) to allow separation of signals
– IFFT: inverse FFT to transform into the cepstral domain; the result is the set of "features"
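To make the pipeline concrete, here is a minimal MATLAB sketch of the six stages, assuming an 8 kHz sample rate, a 256-sample frame, a 0.97 pre-emphasis coefficient, and a simple 13-band triangular mel filter bank; these values and the stand-in signal are illustrative, not the SASE Lab implementation.

% Minimal sketch of the front-end stages (assumed parameters, not the SASE Lab code)
fs = 8000;                              % sample rate (Hz), assumed
x  = randn(1, fs);                      % stand-in for one second of recorded speech

% Pre-emphasis: high-pass filter y[n] = x[n] - 0.97*x[n-1]
y = filter([1 -0.97], 1, x);

% Window: take one 256-sample frame and apply a Hamming window
n = 0:255;
frame = y(1:256) .* (0.54 - 0.46*cos(2*pi*n/255));

% FFT: time domain to frequency domain (keep the non-negative frequencies)
spectrum = abs(fft(frame));
spectrum = spectrum(1:129);

% Mel-scale: warp the linear frequency axis with 13 triangular filters
numBands = 13;
edgesMel = linspace(0, 2595*log10(1 + (fs/2)/700), numBands + 2);
edgesHz  = 700*(10.^(edgesMel/2595) - 1);
binFreqs = (0:128) * fs / 256;
melEnergies = zeros(1, numBands);
for b = 1:numBands
    lo = edgesHz(b); mid = edgesHz(b+1); hi = edgesHz(b+2);
    w = max(0, min((binFreqs - lo)/(mid - lo), (hi - binFreqs)/(hi - mid)));
    melEnergies(b) = sum(w .* spectrum);
end

% log: compress magnitudes so that multiplication becomes addition
logEnergies = log(melEnergies + eps);

% IFFT: transform into the cepstral domain; these values are the "features"
features = real(ifft(logEnergies));
disp(features);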

Speech Analysis and Sound Effects (SASE) Project
– Implements front-end pre-processing (feature extraction)
– Graphical User Interface (GUI)
– Speech input: record and save audio; read sound files (*.wav, *.ulaw, *.au)
– Graphs the entire audio signal
– Processes a user-selected speech frame and displays graphs of the output of each stage
– Displays a spectrogram of the entire signal and of a user-selected 3-second sample
– Modifies speech with user-configurable audio effects

MATLAB Code
– Graphical User Interface (GUI): GUIDE (GUI Development Environment), callback functions
– Front-end speech processing: modular functions for reusability, graphs of the output of each stage
– Sound effects: echo, reverb, flange, chorus, vibrato, tremolo, voice changer
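As a rough illustration of the GUIDE callback pattern listed above, a minimal sketch follows; the button tag and the handles fields (audioData, fs) are hypothetical, not the actual SASE Lab code.

% Hypothetical GUIDE-style callback (assumed tag and handles fields)
function playButton_Callback(hObject, eventdata, handles)
    % Play whatever audio the GUI currently stores in its handles structure
    soundsc(handles.audioData, handles.fs);
end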

GUI Components: plotting axes, buttons

SASE Lab Demo
– Record, play, and save audio to file; open existing audio files
– Select and process a speech frame; display graphs of the stages of front-end processing
– Display the spectrogram for the entire speech signal or a user-selectable 3-second sample
– Play speech – the entire signal or the selected 3-second sample
– Show how certain sounds differ in the spectrogram and in the features (e.g., "a e i o u") so the audience understands what these graphs tell us about the sounds
– Apply sound effects and show the user-configurable parameters
– Graph the spectrogram and speech-processing output for the sound effects (e.g., show the echo effect in the spectrogram)
– Use as a teaching tool
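For a sense of what the spectrogram display computes, here is a plain short-time FFT sketch; the 256-sample frame, 128-sample hop, and input file name are assumed values, and the SASE Lab GUI uses its own plotting code.

% Plain STFT spectrogram sketch (assumed frame/hop sizes and file name)
[x, fs] = audioread('becky.wav');        % any recorded speech file
x = x(:, 1);                             % use the first channel
frameLen = 256; hop = 128;
numFrames = floor((length(x) - frameLen)/hop) + 1;
win = 0.54 - 0.46*cos(2*pi*(0:frameLen-1)'/(frameLen-1));   % Hamming window
S = zeros(frameLen/2 + 1, numFrames);
for m = 1:numFrames
    seg = x((m-1)*hop + (1:frameLen)) .* win;   % windowed frame
    F = abs(fft(seg));
    S(:, m) = F(1:frameLen/2 + 1);              % keep 0 .. fs/2
end
imagesc((0:numFrames-1)*hop/fs, (0:frameLen/2)*fs/frameLen, 20*log10(S + eps));
axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');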

Future Work on SASE Lab
– Audio effect: pitch extraction
– Noise filtering

Applications of Signal Processing in High Schools
– Convey the relevance and importance of math to high school students
– Bring knowledge of technological innovation and academic research into high school classrooms
– Provide opportunities for students to acquire technical knowledge and analytical skills through hands-on exploration of real-world applications in the field of signal processing
– Encourage students to pursue higher education and careers in STEM fields

Unit Plan: Speech Processing
A collection of lesson plans introduces high school students to the fundamentals of speech and sound processing.
Connections to the Pre-Calculus course, NGSSS, and Common Core Mathematics Standards:
– Mathematical modeling
– Trigonometric functions
– Complex numbers in rectangular and polar form
– Function operations
– Logarithmic functions
– Sequences and series
– Matrices

Unit Plan: Speech Processing
A cohesive unit of four lessons:
1. The Sound of a Sine Wave
2. Frequency Analysis
3. Sound Effects
4. SASE Lab
Hands-on lessons with teacher notes and MATLAB projects.

Unit Introduction
Students research, explore, and discuss current applications of speech and audio processing.

Lesson 1: The Sound of a Sine Wave
Modeling sound as a sinusoidal function
Concepts covered:
– Continuous vs. discrete functions
– Frequency of a sine wave
– Composite signals
Connections to real-world applications:
– Synthesis of digital speech and music

Lesson 1: The Sound of a Sine Wave
Student MATLAB Project
– Create discrete sine waves with given frequencies
– Create a composite signal from the sine waves
– Plot graphs and play the sounds of the sine waves
– Analyze the effect of frequency and amplitude on the graphs and the sounds of the sine functions
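A short sketch of what this project could look like, assuming 262 Hz and 523 Hz tones and an 8 kHz sampling rate; the frequencies, amplitudes, and plotting choices are illustrative, not the distributed project files.

% Lesson 1 sketch: two discrete sine waves and their composite (assumed values)
fs = 8000;                         % sampling rate in Hz
t  = 0:1/fs:1;                     % one second of discrete time samples
s1 = sin(2*pi*262*t);              % roughly C4
s2 = 0.5*sin(2*pi*523*t);          % roughly C5, at half the amplitude
composite = s1 + s2;               % composite signal

subplot(3,1,1); plot(t(1:200), s1(1:200));        title('262 Hz sine');
subplot(3,1,2); plot(t(1:200), s2(1:200));        title('523 Hz sine');
subplot(3,1,3); plot(t(1:200), composite(1:200)); title('Composite signal');

soundsc(s1, fs); pause(1.2);       % listen to each signal in turn
soundsc(s2, fs); pause(1.2);
soundsc(composite, fs);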

Lesson 1: The Sound of a Sine Wave
% plays C4, C5, C6 - frequencies double between octaves (approx. 262, 523, 1046 Hz)
% sine_sound_sample(8000, 262, 523, 1046, 1);

Lesson 1: The Sound of a Sine Wave
Project Extension – Music Notes
% twinkle twinkle little star
% music = 'C4Q C4Q G4Q G4Q A4Q A4Q G4H ';
% super mario bros
% music = 'FS4+EN5,Q E4,Q E4,Q RR,Q E4,Q RR,Q C4,Q E4,Q RR,Q G4,Q';
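As a rough illustration, here is a sketch of how the simpler note syntax above (note name + octave digit + Q/H duration) could be parsed and played; the quarter-note length and duration mapping are assumptions, and the lesson's real music-note code (which also handles rests and chords, as in the Mario example) may differ.

% Hypothetical parser for simple note tokens such as 'C4Q' or 'G4H'
fs = 8000;
music = 'C4Q C4Q G4Q G4Q A4Q A4Q G4H';      % twinkle twinkle little star
noteNames = {'C','CS','D','DS','E','F','FS','G','GS','A','AS','B'};
beat = 0.5;                                  % quarter-note length in seconds (assumed)
song = [];
for tok = strsplit(strtrim(music))
    s = tok{1};                              % e.g. 'C4Q'
    name   = s(1:end-2);                     % note name, e.g. 'C'
    octave = str2double(s(end-1));           % octave digit, e.g. 4
    dur    = beat * (1 + (s(end) == 'H'));   % 'Q' = 1 beat, 'H' = 2 beats
    semitone = find(strcmp(noteNames, name)) - 10;    % A4 = 440 Hz reference
    f = 440 * 2^((semitone + 12*(octave - 4)) / 12);  % equal-tempered pitch
    t = 0:1/fs:dur;
    song = [song, sin(2*pi*f*t)];            %#ok<AGROW>
end
soundsc(song, fs);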

Lesson 1: The Sound of a Sine Wave
Project Extension – Vowel Sounds
– Vowel sounds are characterized by their lower three formants
– aa as in "Bob":
aa_m = struct('F1', 750, 'F2', 1150, 'F3', 2400, 'Duration', 215, 'W1', 1, 'W2', 1, 'W3', 1);
– iy as in "Beat":
iy_m = struct('F1', 340, 'F2', 2250, 'F3', 3000, 'Duration', 196, 'W1', 1, 'W2', 30, 'W3', 30);
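A minimal sketch of how such a formant struct might be turned into sound by summing weighted sinusoids at F1, F2, and F3; the synthesis method and sample rate are assumptions, since the lesson's actual vowel code is not shown here.

% Crude vowel approximation: weighted sum of sinusoids at the three formants
% (assumed method; the lesson's real synthesis code may differ)
fs = 8000;
aa_m = struct('F1', 750, 'F2', 1150, 'F3', 2400, 'Duration', 215, ...
              'W1', 1, 'W2', 1, 'W3', 1);
t = 0:1/fs:aa_m.Duration/1000;               % Duration is in milliseconds
v = aa_m.W1*sin(2*pi*aa_m.F1*t) + ...
    aa_m.W2*sin(2*pi*aa_m.F2*t) + ...
    aa_m.W3*sin(2*pi*aa_m.F3*t);
soundsc(v, fs);                              % should sound roughly like "aa"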

Lesson 2: Frequency Analysis
Use of the Fourier transform to transform functions from the time domain to the frequency domain
Concepts covered:
– Modeling harmonic signals as a series of sinusoids
– Sine wave decomposition
– Fourier transform
– Euler's formula
– Frequency spectrum
Connections to real-world applications:
– Speech processing and recognition

Lesson 2: Frequency Analysis
Student MATLAB Project
– Create a composite signal from the sum of harmonic sine waves
– Plot graphs and play the sounds of the sine waves
– Compute the FFT of the composite signal
– Plot and analyze the frequency spectrum

Lesson 2: Frequency Analysis
% create five harmonic signals with fundamental frequency 262
% square_wave(8000, 262, 1, 1024);
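A hedged sketch of what a helper like square_wave could do internally, assuming the arguments are sample rate, fundamental frequency, duration in seconds, and FFT length, and that "five harmonic signals" means the first five odd harmonics of a square-wave partial sum; the real lesson code may differ.

% Hypothetical stand-in for square_wave(8000, 262, 1, 1024) (assumed behavior)
fs = 8000; f0 = 262; N = 1024;
t = 0:1/fs:1;                               % one second of samples
x = zeros(size(t));
for k = 1:2:9                               % first five odd harmonics
    x = x + (1/k)*sin(2*pi*k*f0*t);         % square-wave partial sum
end

X = abs(fft(x(1:N)));                       % magnitude spectrum of one block
f = (0:N-1)*fs/N;                           % frequency axis in Hz
plot(f(1:N/2), X(1:N/2));                   % peaks appear at 262, 786, 1310, ... Hz
xlabel('Frequency (Hz)'); ylabel('|X(f)|');
soundsc(x, fs);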

Lesson 3: Sound Effects
Time-delay based sound effects
Concepts covered:
– Discrete functions
– Time-delay functions
– Function operations
Connections to real-world applications:
– Digital music effects and speech sound effects

Lesson 3: Sound Effects
Student MATLAB Project
– Read a *.wav file
– Use a delay function to modify the signal with an echo sound effect
– Plot graphs and play the sounds of the signals
– Analyze the effect of changing parameters on the graphs and the sounds of the functions

Lesson 3: Sound Effects
% echo at 50 m with reflection coefficient = 0.5
% echo_effect('becky.wav', 50, 0.5);
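A hedged sketch of how an echo_effect-style function could work; the distance-to-delay conversion (speed of sound of about 343 m/s, out-and-back path) and the function body are assumptions, not the lesson's actual implementation.

% Hypothetical stand-in for the lesson's echo_effect helper (assumed behavior)
function y = echo_effect_sketch(filename, distanceMeters, reflection)
    [x, fs] = audioread(filename);                   % read the *.wav file
    delaySamples = round(2*distanceMeters/343 * fs); % out-and-back delay at ~343 m/s

    % y[n] = x[n] + reflection * x[n - delay]
    y = [x; zeros(delaySamples, size(x, 2))];
    y(delaySamples+1:end, :) = y(delaySamples+1:end, :) + reflection * x;

    soundsc(y, fs);                                  % listen to the echoed signal
    plot(y(:, 1)); title('Signal with echo');
end

% Example call mirroring the slide:
% echo_effect_sketch('becky.wav', 50, 0.5);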

Lesson 4: SASE Lab
Guided inquiry with the SASE Lab program
– Experiment with different sound inputs
– Analyze the spectrogram
– Make connections to previous lessons

Unit Conclusion
Students summarize and reflect on the lessons in a presentation and a report/poster.

References
– Ingle, Vinay K., and John G. Proakis. Digital Signal Processing Using MATLAB. 2nd ed. Toronto, Ont.: Nelson.
– Oppenheim, Alan V., and Ronald W. Schafer. Discrete-Time Signal Processing. 3rd ed. Upper Saddle River: Pearson.
– Weeks, Michael. Digital Signal Processing Using MATLAB and Wavelets. Hingham, Mass.: Infinity Science Press.
– Timeline of Speech Recognition.

AEGIS website:
Contacts:
– Becky Dowell
– Dr. Veton Këpuska
– Jacob Zurasky
AEGIS Project

Thank you! Questions?