The Role of Temporal Fine Structure Processing

Presentation transcript:

The Role of Temporal Fine Structure Processing Presented by: Hwang se mi

Contents
ABSTRACT
Introduction
THE ROLE OF TFS IN PITCH PERCEPTION
MASKING AND THE ROLE OF TFS IN DIP LISTENING
THE ROLE OF TFS IN SPEECH PERCEPTION
THE EFFECT OF HEARING LOSS ON THE ABILITY TO USE TFS INFORMATION
POSSIBLE REASONS FOR THE EFFECT OF COCHLEAR HEARING LOSS ON SENSITIVITY TO TFS

ABSTRACT
Complex broadband sounds are decomposed by the auditory filters into relatively narrowband signals, each of which can be described by an envelope (E) and temporal fine structure (TFS).
What is TFS? It is represented by phase locking in individual auditory-nerve fibers.
We look for the role of TFS in masking, pitch perception, and speech perception.

Introduction
Following Moore (2002), the tonotopic analysis of the cochlea can be simulated by a short-term Fourier analysis.
What does the magnitude mean? How do we obtain the phase? Via the Hilbert transform (Bracewell 1986): the length of the rotating vector gives the envelope at that moment, and its rate of rotation gives the instantaneous frequency. A small numerical sketch follows below.
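To make the Hilbert-transform picture concrete, here is a minimal Python sketch (not part of the original slides; the toy signal and sampling rate are made up) showing how the analytic signal yields E as the magnitude and TFS as the cosine of the phase, with the instantaneous frequency given by the rate of rotation.

import numpy as np
from scipy.signal import hilbert

fs = 16000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1.0 / fs)
# Toy narrowband signal: a 1 kHz carrier with a slow amplitude modulation
band = (1 + 0.5 * np.sin(2 * np.pi * 50 * t)) * np.sin(2 * np.pi * 1000 * t)

analytic = hilbert(band)                    # analytic signal x(t) + j*H{x}(t)
envelope = np.abs(analytic)                 # E: length of the rotating vector
tfs = np.cos(np.angle(analytic))            # TFS: unit-amplitude carrier
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)  # rate of rotation in Hz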

* Hilbert transform
Why the Hilbert transform is needed: the causality condition; causality in the time domain; causality in the frequency domain.

Glasberg and Moore 1990; Moore 2003 (Hilbert transform application)

The range over which TFS is usable in humans
In mammals, phase locking weakens above about 4-5 kHz. Heinz et al. (2001) suggested that phase locking could still convey useful information up to about 10 kHz. In humans, the upper limit has not been confirmed.

THE ROLE OF TFS IN PITCH PERCEPTION
- Pure tones vs. complex tones and the role of TFS (Moore 2003; Plack and Oxenham 2005)
- Effect of duration: steady pure tones vs. short tones (Heinz et al. 2001)
- Steady complex tones: resolved vs. unresolved harmonics; for harmonics below about the 14th, TFS cues can be used, whereas above the 14th envelope cues dominate (TFS < E) and pitch discrimination weakens (Moore et al. 2006a)

FM detection and TFS
- FM rates of 5 Hz and below: place cues fail to predict performance; detection appears to depend on phase locking to the carrier.
- FM rates of 10 Hz and above: the FM is mixed with AM.
- At high carrier frequencies (e.g., 4,000 Hz) the binaural system is "sluggish" and less effective at following interaural phase differences or interaural correlation (Blauert 1972; Grantham and Wightman 1978, 1979).
- Certain complex stimuli (Siveke et al. 2008); modulation above the detection threshold.

MASKING AND THE ROLE OF TFS IN DIP LISTENING
1. What does it mean to "listen in the dips"?
2. Moore and Glasberg's experiment:
- Masker level: 80 dB SPL
- Masker frequency (Fm): 250, 1,000, 3,000, or 5,275 Hz
- Signal frequency: 1.8 * Fm
- Beat rate: 4, 8, 16, 32, or 64 Hz (masking release)
- Result: average release of about 25 dB at a 4-Hz beat rate and about 10 dB at 64 Hz
Conclusion: effective dip listening is possible when phase locking to the masker and sinusoid frequencies is precise. At high frequencies, where phase locking is imprecise, the release depends only a little on the masker beat rate.

THE ROLE OF TFS IN SPEECH PERCEPTION
- A vocoder is used to separate E and TFS (Dudley 1939).
- What is "E-speech"? E is extracted from each band and used to modulate a carrier: a noise vocoder modulates a noise band, while a tone vocoder modulates a sinusoid centered at the frequency of the band.
- E-speech gives good intelligibility for speech in quiet (Shannon et al. 1995). A minimal vocoder sketch follows below.
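As an illustration of E-speech processing, here is a minimal tone-vocoder sketch in Python, assuming a speech array named speech sampled at fs; the band edges, filter order, and number of bands are illustrative assumptions, not the parameters of the studies cited above. A noise vocoder would replace the sinusoidal carrier with bandpass-filtered noise.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocode(speech, fs, edges=(100, 392, 1005, 2294, 5000)):
    # Split the input into contiguous bands, keep only each band's envelope (E),
    # and use it to modulate a tone at the band's centre frequency.
    t = np.arange(len(speech)) / fs
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))              # envelope (E) of this band
        fc = np.sqrt(lo * hi)                    # geometric centre frequency of the band
        out += env * np.sin(2 * np.pi * fc * t)  # E applied to a tone carrier
    return out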

What is TFS-speech? Each bandpass signal is divided by its envelope (E), leaving an FM sinusoidal carrier. Each band is then scaled by its long-term amplitude so that it has the same RMS amplitude as the original band, and the bands are recombined. A matching sketch follows below.
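A matching TFS-speech sketch, under the same illustrative assumptions as the vocoder above: each bandpass signal is divided by its Hilbert envelope to leave an FM carrier, scaled back to the band's long-term RMS, and the bands are summed.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tfs_speech(speech, fs, edges=(100, 392, 1005, 2294, 5000)):
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))
        carrier = band / np.maximum(env, 1e-12)   # divide out E, leaving the FM carrier (TFS)
        carrier *= np.sqrt(np.mean(band ** 2))    # restore the band's long-term RMS amplitude
        out += carrier                            # recombine the bands
    return out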

TFS-speech
- TFS-speech sounds distorted; training is needed before performance can be compared with non-processed speech.
- Even when E is removed, envelope cues are partly reconstructed at the outputs of the cochlear filters (Ghitza); this reconstructed E contributes to the intelligibility of TFS-speech, though envelope cues alone are not sufficient.
- Speech intelligibility is minimal when the bandwidth is 4 ERB or less, over the range 0.08-8.02 kHz, with 8 or more bands.
- With learning, intelligibility is high for TFS-speech in conjunction with E.

Hopkins et al. (2008): TFS in speech perception
- J: the number of ERBn-wide bands containing both TFS and E, varied from 0 to 32 in steps of 4
- The other, higher-frequency bands were noise or tone vocoded and conveyed E only
- The speech reception threshold (SRT) for 50% correct was measured; the non-processed signal was also measured
- Situation: a competing talker; subjects: 9 with normal hearing; error bars: standard deviation
Results:
- As J went from 0 to 32 (TFS added), speech identification increased: the SRT decreased by about 15 dB.
- The benefit may relate to speaker identification and to tonal languages.
A rough sketch of the band manipulation follows below.
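A rough sketch of the kind of band manipulation described above, as I read it (band edges, defaults, and the function name are my own illustrative choices): the lowest J bands keep the intact signal (E and TFS), and the remaining bands are tone vocoded so that they convey E only.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def keep_tfs_in_lowest_bands(speech, fs, J, edges=(100, 392, 1005, 2294, 5000)):
    t = np.arange(len(speech)) / fs
    out = np.zeros(len(speech))
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        if i < J:
            out += band                            # intact band: carries both E and TFS
        else:
            env = np.abs(hilbert(band))            # vocoded band: carries E only
            fc = np.sqrt(lo * hi)
            out += env * np.sin(2 * np.pi * fc * t)
    return out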

THE EFFECT OF HEARING LOSS ON THE ABILITY TO USE TFS INFORMATION
1) FM detection at low rates (Lacher-Fougère and Demany 1998; Moore and Skrodzka 2002)
2) Lateralization based on interaural phase differences (Lacher-Fougère and Demany 2005)
3) Discrimination of complex tones with and without the fundamental frequency (Moore and Moore 2003a)

Hopkins & Moore (2007) experiments in moderate hearing loss discriminate a harmonic complex tone (F0=100, 200, or 400 Hz) the tones contained many components Passed though a fixed bandpass filter centered on the upper (unresolved) harmonics bandpass filter was centered on the 11th harmonic harmonic and frequency-shifted is small. (without hearing loss)

- For normal-hearing listeners, E is the same for the two tones, and the shifted tone is heard as having a higher pitch than the harmonic tone (de Boer 1956; Moore and Moore 2003b).
- The smallest detectable frequency shift (at d′ = 1) is about 0.05 F0; even untrained normally hearing listeners achieve 0.2 F0 or better (Moore and Sek 2008).
- Listeners with moderate cochlear hearing loss performed very poorly; the lowest center frequency tested by Hopkins and Moore was 1,100 Hz.
- People with hearing loss below 1,000 Hz generally have difficulty understanding speech when the background noise is fluctuating.

Speech perception studies
- Lorenzi et al. (2006) measured identification scores for unprocessed, E-, and TFS-speech in quiet for three groups: normal hearing, young with moderate hearing loss, and elderly with moderate hearing loss.
- Normal-hearing listeners, after training, scored about 90% correct with both E- and TFS-speech.
- Listeners with moderate hearing loss performed poorly with TFS-speech; for the younger hearing-impaired group the correlation was r = 0.83.

Lorenzi et al. (2008) measured the identification of E- and TFS-speech in quiet in listeners with mild-to-severe high-frequency hearing loss and normal (≤ 20 dB HL) audiometric thresholds below 2 kHz.
- Scores: hearing-impaired 6.25%; normal-hearing 20-50%.

In the experiment of Hopkins et al. (2008), listeners with moderate cochlear hearing loss improved by only about 5 dB as J went from 0 to 32, compared with about 15 dB for normal-hearing listeners.
The reasons for the individual differences are not at present clear; the ability to use TFS information was not correlated with audiometric thresholds over the range 250 to 4,000 Hz.

Cochlear implant systems: Nie et al. examined the contribution of TFS information to speech recognition in cochlear implant processing; adding this FM signal improved performance (71%).

POSSIBLE REASONS FOR THE EFFECT OF COCHLEAR HEARING LOSS ON SENSITIVITY TO TFS
Why does cochlear hearing loss reduce the ability to process TFS information?
1. Reduced precision of phase locking; for complex sounds, phase locking is affected by two-tone suppression (Miller et al. 1997).
2. Changes in the response at different points along the basilar membrane, which may affect the correlation of TFS across points.

3. The TFS at the filter outputs may be more complex and more rapidly varying.
4. Hearing loss may produce a shift in the frequency-place mapping (Liberman and Dodds 1984; Sellick et al. 1982).
5. There may be central changes, such as a loss of inhibition.

Thank you for listening. The end