Computational Spectro-temporal Auditory Model. Taishih Chi, June 29, 2003.

Outline

- Auditory model overview: two-stage processing
- Model description and formulation
- Examples of representations
- Reconstruction from model output representations
- Discussions

[Block diagram: Sound -> Early Auditory stage (Spectral Estimation) -> Auditory Spectrum -> Primary Cortex (A1) stage (Spectral Analysis) -> Cortical Representation]

Auditory Model Overview

- Temporal dynamics reduction
- Monaural model
- Two-stage functional model:
  - Early stage (spectrum estimation)
  - Cortical stage (spectrum analysis)

Early Stage: Mathematical Formulation
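The early stage is typically formulated as four operations: cochlear band-pass filtering, hair-cell transduction (a temporal derivative followed by a compressive nonlinearity), lateral inhibition across channels (half-wave rectified), and leaky integration into an auditory spectrogram. A minimal Python sketch under those assumptions; the Gaussian filterbank and all parameter values here are hypothetical stand-ins, not the toolbox's:

```python
import numpy as np

def early_stage_sketch(s, fs, cfs, tau=0.008, frame=0.008):
    """Sketch of the early auditory stage: cochlear band-pass filtering,
    hair-cell transduction (derivative + compressive nonlinearity),
    lateral inhibition (difference across channels, half-wave rectified),
    and leaky integration, framed into an auditory spectrogram (time x freq)."""
    S = np.fft.rfft(s)
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
    # hypothetical Gaussian filterbank in place of the cochlear filters
    bands = [np.fft.irfft(S * np.exp(-0.5 * ((freqs - cf) / (0.2 * cf)) ** 2), len(s))
             for cf in cfs]
    y = np.stack(bands, axis=1)                            # time x channels
    y = np.tanh(np.gradient(y, axis=0))                    # hair-cell stage
    y = np.maximum(np.diff(y, axis=1, prepend=0.0), 0.0)   # lateral inhibition
    # leaky integration (exponential window), then downsample to frames
    w = np.exp(-np.arange(int(5 * fs * tau)) / (fs * tau))
    y = np.apply_along_axis(lambda c: np.convolve(c, w)[:len(c)], 0, y)
    return y[::int(fs * frame)]
```

Feeding in a pure tone produces a spectrogram whose energy concentrates in the channel whose center frequency matches the tone.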

Early Stage: MATLAB Implementation

MATLAB toolbox usage:

  yfinal = wav2aud(s, [frmlen, tc, fac, shft], filt);

  s: acoustic input signal
  yfinal: auditory spectrogram, N (time) x M (freq.)

Channel center frequencies: CF = 440 * 2.^((-31:97)/24 + shft);
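The center-frequency line maps 129 channels at 24 channels per octave, with the channel at index 0 of the range sitting at 440·2^shft Hz. A quick Python equivalent of that one-liner (the function name is illustrative):

```python
import numpy as np

# Python equivalent of the MATLAB line CF = 440 * 2.^((-31:97)/24 + shft):
# 129 cochlear channels, 24 channels per octave, anchored at 440 Hz
# when shft = 0.
def center_frequencies(shft=0.0):
    k = np.arange(-31, 98)  # channel indices -31 .. 97 (129 channels)
    return 440.0 * 2.0 ** (k / 24.0 + shft)

cf = center_frequencies()
```

With shft = 0 the bank spans roughly 180 Hz to 7.2 kHz; shft = -1 shifts the whole bank down one octave.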

Cortical Stage: Spectrotemporal Receptive Field (STRF)
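A simple way to picture an STRF is as a separable product of a temporal cross-section tuned to a rate (Hz) and a spectral cross-section tuned to a scale (cyc/oct). A hypothetical Python sketch; these seed functions are illustrative, and the toolbox's actual seed functions differ in detail:

```python
import numpy as np

def strf_sketch(rate, scale, dur=0.25, fs_t=1000.0, span_oct=2.0, ch_per_oct=24.0):
    """Hypothetical separable STRF (time x freq): a damped sinusoid in time,
    dilated by `rate` (Hz), times a Gabor-like profile in log-frequency,
    dilated by `scale` (cyc/oct)."""
    t = np.arange(0.0, dur, 1.0 / fs_t)                   # seconds
    x = np.arange(-span_oct, span_oct, 1.0 / ch_per_oct)  # octaves
    ht = rate * t * np.exp(-3.0 * rate * t) * np.sin(2 * np.pi * rate * t)
    u = scale * x
    hx = (1.0 - u ** 2) * np.exp(-(u ** 2) / 2.0)
    return np.outer(ht, hx)
```

Raising `rate` compresses the temporal cross-section; raising `scale` narrows the spectral one, so a single pair of seed functions generates the whole filter bank by dilation.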

Cortical Stage: Model Implementation

Cortical Stage: Mathematical Formulation

The cortical stage analyzes the auditory spectrogram y(t, f) with a bank of STRFs tuned to different rates (temporal modulation, Hz) and scales (spectral modulation, cyc/oct); the spectrotemporal cortical response is then the two-dimensional convolution

  r(t, f; rate, scale) = y(t, f) *tf STRF(t, f; rate, scale)
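The convolution above can be sketched directly with zero-padded FFTs; a minimal Python version producing a same-size output, where `y` and `strf` are both time x freq arrays:

```python
import numpy as np

def cortical_response(y, strf):
    """2-D convolution of an auditory spectrogram y (time x freq) with an
    STRF (time x freq) via zero-padded FFTs; output cropped to y's size."""
    nt = y.shape[0] + strf.shape[0] - 1
    nf = y.shape[1] + strf.shape[1] - 1
    full = np.fft.ifft2(np.fft.fft2(y, (nt, nf)) * np.fft.fft2(strf, (nt, nf))).real
    t0 = (strf.shape[0] - 1) // 2   # crop offsets for a centered
    f0 = (strf.shape[1] - 1) // 2   # 'same'-size result
    return full[t0:t0 + y.shape[0], f0:f0 + y.shape[1]]
```

Convolving an impulse spectrogram with an STRF returns the (shifted) STRF itself, which is a convenient sanity check.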

Cortical Stage: Mathematical Formulation (cont'd)

Consider the complex wavelet transform of the auditory spectrogram, in which each STRF is replaced by its analytic (complex-valued) counterpart. The magnitude of the resulting complex response gives a phase-insensitive cortical representation, while its phase captures the local symmetry of the response.

Cortical Stage: Cortical Representation of Speech

Cortical Magnitude Representation of Speech

Cortical Stage: MATLAB Implementation

MATLAB toolbox usage:

  cr = aud2cor(y, para1, rv, sv, fname, DISP);

  cr: 4-D cortical representation, scale x rate (up/down) x time x freq.
  y: auditory spectrogram, N (time) x M (freq.)
  para1 = [paras FULLT FULLX BP]; paras: see WAV2AUD
    FULLT (FULLX): fullness of the temporal (spectral) margin
    BP: pure band-pass indicator
  rv: rate vector in Hz, e.g., 2.^(1:.5:5)
  sv: scale vector in cyc/oct, e.g., 2.^(-2:.5:3)
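The 4-D output of aud2cor can be approximated outside MATLAB by band-pass filtering the spectrogram's 2-D Fourier transform along the temporal (rate) and spectral (scale) modulation axes. A hypothetical Python sketch; the filter shapes and parameter names here are illustrative, not the toolbox's:

```python
import numpy as np

def cortical_sketch(y, rates, scales, frame_rate=125.0, ch_per_oct=24.0):
    """Hypothetical sketch of cortical analysis: band-pass the auditory
    spectrogram y (time x freq) around each rate (Hz) and scale (cyc/oct)
    in the 2-D FFT domain. Returns |response| as scale x rate x time x freq."""
    n_t, n_f = y.shape
    Y = np.fft.fft2(y)
    ft = np.fft.fftfreq(n_t, d=1.0 / frame_rate)   # temporal modulation (Hz)
    fx = np.fft.fftfreq(n_f, d=1.0 / ch_per_oct)   # spectral modulation (cyc/oct)
    out = np.empty((len(scales), len(rates), n_t, n_f))
    for i, s in enumerate(scales):
        hs = np.abs(fx) / s * np.exp(1 - np.abs(fx) / s)      # peak at |fx| = s
        for j, r in enumerate(rates):
            hr = np.abs(ft) / r * np.exp(1 - np.abs(ft) / r)  # peak at |ft| = r
            out[i, j] = np.abs(np.fft.ifft2(Y * np.outer(hr, hs)))
    return out
```

Unlike aud2cor, this sketch does not separate upward- from downward-moving modulations; doing so would require keeping the quadrant structure of the 2-D spectrum rather than the symmetric |ft|, |fx| filters used here.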