INTRODUCTION
Human hearing and speech cover a wide frequency range, from 20 to 20,000 Hz, but only the 300 to 3,400 Hz range is typically used by speech communication devices, including telephones (1). Traditionally, sound frequencies below 300 Hz have been considered redundant, but recent studies have shown that low-frequency sounds can significantly improve cochlear implant performance in noise, particularly when the noise is a competing voice (2-4). Here we used cochlear implant (CI) simulations in normal-hearing listeners to study the size and mechanisms of this improvement. We found that largely unintelligible low-frequency sounds considerably enhanced speech recognition in noise.

METHODS
Sixteen normal-hearing subjects participated in the study (8 for the low-frequency EAS simulation test, 8 for the high-frequency EAS test). Each subject listened to a 4-channel CI simulation (5) with simulated residual acoustic hearing at either low frequencies (low-pass cutoffs of 250, 500, or 1,000 Hz) or high frequencies (high-pass cutoffs of 2,000, 4,000, or 6,000 Hz). The acoustic sound was combined with the CI simulation either monaurally, to simulate standard electro-acoustic stimulation (EAS), or binaurally (both diotic and dichotic methods included), to simulate combined CI and hearing-aid stimulation. HINT sentences (6), presented against a competing voice, were used to measure the speech reception threshold (SRT) in terms of signal-to-noise ratio (SNR). An additional control simulating a 5-channel CI was used to assess whether any improvement associated with acoustic hearing was equivalent to simply giving the CI simulation more channels.

SRT: the SNR at which subjects score 50% correct on sentences.

FIGURE 1: Schematic view of all the processing strategies and frequency divisions. (1a) Low-frequency acoustic stimulation; (1b) high-frequency acoustic stimulation (4000 Hz). Row 1: EAS, CI simulation + acoustic cues. Row 2: Control 1, 5-channel CI simulation (can improved performance be attributed to an effect of more channels?). Row 3: Control 2, 4-channel CI simulation.

FIGURE 2: Speech spectrograms of original and processed speech-in-noise stimuli. A (male voice, target) + B (female voice, masker) = C (combined original signal); D (acoustic) + E (electric, 4-channel CI) = F (EAS). (2a) Low-frequency acoustic stimulation; (2b) high-frequency acoustic stimulation (4000 Hz).

RESULTS
Although extreme low-frequency and high-frequency (above 4000 Hz) acoustic sounds had negligible intelligibility when presented alone, both improved CI performance, including the bilateral-implant simulation, when combined. While the method of combination (monaural, diotic, or dichotic) had no significant effect, the degree of improvement differed between the low- and high-frequency sounds. Performance in the low-frequency conditions significantly surpassed that of the 5-channel CI control, whereas the high-frequency sound produced performance similar to the 5-channel CI. These results suggest a distinct mechanism underlying the improvement of cochlear implant performance by added low-frequency information.

FIGURE 3: SRT of EAS with low-frequency acoustic sounds.

A control study of the perception of the residual low- and high-frequency sounds alone showed that intelligibility was 0, 10, and 11% correct for sentences and 5, 51, and 75% correct for keywords with the low-pass cutoff frequency at 250, 500, and 1,000 Hz, respectively, and 56, 3, and 0% correct for sentences and 85, 18, and 14% for keywords with the high-pass cutoff frequency at 2,000, 4,000, and 6,000 Hz. We found the improvement in SRT of the EAS simulation (a 7-15 dB gain in all EAS conditions over the single- or bilateral-implant controls) similar to previous studies of real CI+HA subjects (see FIGURE 4).

FIGURE 4: Sentence recognition score as a function of SNR (3). This real-CI-subject study (Kong et al.) used a male masker and female target speech (3). The figure shows that the hearing aid alone provides very limited perception, but performance is very high when the hearing aid is combined with the CI.
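The 4-channel CI simulation described in METHODS is a noise-band vocoder in the style of Shannon et al. (5). A minimal sketch in Python follows; the band edges, filter order, and sampling rate here are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, edges=(500, 1000, 2000, 4000, 6000), order=4):
    """Noise-band vocoder: keep each band's temporal envelope but
    replace its fine structure with band-limited noise."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)              # analysis band
        envelope = np.abs(hilbert(band))        # Hilbert envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += envelope * noise                 # envelope-modulated noise band
    return out
```

In the EAS conditions, only the portion of the mixture on the implant side of the crossover would be vocoded this way, with the remaining acoustic band presented unprocessed.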
Notice the similarity between this real-subject data (Y. Y. Kong et al.) and the simulation data in FIGURE 3.

FIGURE 5: SRT of low- and high-frequency EAS. The 500 Hz (low-frequency acoustic) EAS simulation gave the best SRT, outscoring the 5-channel CI control by 8 dB. In contrast, the 4000 Hz (high-frequency acoustic) EAS was 4 dB worse than the 5-channel CI. The low-frequency acoustic sound thus shows more benefit in the speech-in-noise test, in terms of its relative contribution to combined speech intelligibility.

DISCUSSION
The low-frequency sound improved the SRT by 7-15 dB over the one-implant and bilateral-implant controls (p < 0.05); this improvement cannot be explained by current theories. The high-frequency sound did not improve the SRT as much as the low-frequency sound did; the 500 Hz low-pass EAS simulation gave a 9 dB benefit over the 4000 Hz high-pass EAS simulation.

CONCLUSIONS
In simulation, EAS achieves better performance than a single cochlear implant or bilateral implants in speech-in-noise tests. Low-frequency acoustic sound provides a greater benefit than high-frequency acoustic sound (above 4000 Hz) does. The present results suggest a synergistic interaction between the low- and high-frequency sounds, not in the ear but in the brain. We hypothesize that the low-frequency sound helps the brain segregate and group the high-frequency temporal envelopes into different sound sources.

FUTURE DIRECTIONS
Similar experiments are being conducted to see whether these simulation results extend to actual bimodal and bilateral cochlear implant users.

REFERENCES
1. H. Fletcher, Speech and Hearing in Communication, Bell Laboratories Series (D. Van Nostrand Company, New York, 2nd ed., 1953).
2. C. W. Turner, B. J. Gantz, C. Vidal, A. Behrens, B. A. Henry, J Acoust Soc Am 115 (Apr 2004).
3. Y. Y. Kong, G. S. Stickney, F. G. Zeng, J Acoust Soc Am 117 (2005).
4. C. von Ilberg et al., ORL J Otorhinolaryngol Relat Spec 61 (Nov-Dec 1999).
5. R. V. Shannon, F. G. Zeng, V. Kamath, J. Wygonski, M. Ekelid, Science 270 (Oct 13, 1995).
6. M. Nilsson, S. D. Soli, J. A. Sullivan, J Acoust Soc Am 95 (Feb 1994).

ACKNOWLEDGEMENTS
We would like to thank Ms. Abby Copeland and Dr. Ed Rubel for commenting on our manuscript. This experiment was supported by NIH Grant 2R01 DC.

LOWS ARE THE NEW HIGHS: IMPROVING SPEECH INTELLIGIBILITY WITH UNINTELLIGIBLE LOW-FREQUENCY SOUNDS
Janice E. Chang 1, John Y. Bai 2, Martin Marsala 2, Helen E. Cullington 2, and Fan-Gang Zeng 2
1 Department of Bioengineering, University of California, Berkeley, CA, USA
2 Hearing and Speech Research Laboratory, University of California, Irvine, CA, USA

FIGURE 2 stimuli, high-frequency EAS condition:
A. Original HINT target: "A boy fell from the window."
B. Competing voice: "A pot of tea helps to pass the evening."
C. Original signal + competing voice (SNR = 0 dB).
D. High-passed original mixed sound.
E. Four-channel implant simulation of the mixed sound below 4000 Hz.
F. Combined high-pass and implant simulation.

FIGURE 2 stimuli, low-frequency EAS condition:
A. Original HINT target: "A boy fell from the window."
B. Competing voice: "A large size in stockings is hard to sell."
C. Original signal + competing voice (SNR = 0 dB).
D. Low-passed original mixed sound.
E. Four-channel implant simulation of the mixed sound above 500 Hz.
F. Combined low-pass and implant simulation.
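The stimulus construction shown in the FIGURE 2 panels, mixing target and masker at a set SNR and then splitting the mixture at the crossover into an unprocessed acoustic part and a part destined for the implant simulation, can be sketched as below. The function names and filter settings are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker power ratio equals snr_db."""
    gain = np.sqrt(np.mean(target**2) / np.mean(masker**2)) * 10 ** (-snr_db / 20)
    return target + gain * masker

def split_at_crossover(x, fs, crossover_hz=500.0, order=4):
    """Return (low-pass 'acoustic' part, high-pass part for vocoding)."""
    lp = butter(order, crossover_hz, btype="lowpass", fs=fs, output="sos")
    hp = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(lp, x), sosfiltfilt(hp, x)
```

For the low-frequency EAS condition the low-pass part is presented as-is and the high-pass part is vocoded; for the high-frequency condition the roles are swapped, with the crossover moved to 4000 Hz.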