3pSC9. Effect of reduced audibility on masking release for normal- and hard-of-hearing listeners. Peggy Nelson, Yingjiu Nie, Elizabeth Anderson, Bhagyashree Katare.

Presentation transcript:

3pSC9. Effect of reduced audibility on masking release for normal- and hard-of-hearing listeners
Peggy Nelson, Yingjiu Nie, Elizabeth Anderson, Bhagyashree Katare
Department of Speech-Language-Hearing Sciences, University of Minnesota

Abstract

Listeners with sensorineural hearing loss show reduced benefit from fluctuating maskers compared to stationary maskers. Our hypothesis is that the representation of the signal in the dips of the masker is the most important predictor of performance (Jin and Nelson, 2006). Because the audibility of the signal is the most critical factor, we tested normal-hearing (NH) and hearing-impaired (HI) listeners at similar reduced audibility levels. Listeners with normal hearing and with hearing loss were presented IEEE sentences at a range of overall levels (from 30 to 80 dB SPL), signal-to-noise ratios, and low-pass filter settings, resulting in signal AIs ranging upward from 0.1. For the NH listeners there was a very systematic relationship between AI and performance, but the slope of the function for the higher-level stimuli was shallower than that for the low-level stimuli, suggesting that the active cochlear mechanism is at work at low levels both in quiet and in noise. The relationship between AI and performance was less systematic for listeners with hearing loss. (Work supported by NIDCD R01-DC008306.)

Acknowledgements

We gratefully acknowledge U of M laboratory members Adam Svec, Tricia Nechodom, Molly Bergsbaken, Bartlomiej Plichta, and Edward Carney for their outstanding assistance. We also thank Judy Dubno for assistance with the AI calculations and Dianne VanTasell for thoughtful consultation. Portions of the work reported here under "Pilot Results" were supported by Starkey, Inc.

Introduction and Background

Jin and Nelson (2006; accepted) showed that listeners with hearing impairment (HI) exhibited reduced masking release (MR; the difference between performance in gated and in steady noise) even when listening to amplified speech. Figure 1 shows MR for HI listeners from Jin and Nelson. Although the performance of HI listeners is similar to that of NH listeners in quiet and in steady noise, MR for HI listeners is only about half that of NH listeners. Large variability in performance among HI listeners was noted when the noise was interrupted, indicating individual differences.

[Figure 1. Masking release for HI listeners, from Jin and Nelson (2006); performance at -10 dB SNR.]

The speech in that 2006 study was amplified based on a half-gain rule. Estimated AI values for the quiet amplified speech ranged from about .4 to .6. This was sufficient for near-100% performance in quiet, but it was less than the AI of the speech for NH listeners.
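The half-gain rule referred to above is a simple linear prescription: insertion gain at each audiometric frequency is set to roughly half of the hearing threshold at that frequency. The following is a minimal sketch of that idea only; the frequency list and the example audiogram are hypothetical and are not the fitting procedure or thresholds used in Jin and Nelson (2006).

```python
# Minimal sketch of a half-gain prescription: gain (dB) at each audiometric
# frequency is approximately half of the hearing threshold (dB HL) there.
# The frequency list and example audiogram below are hypothetical.

AUDIOMETRIC_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def half_gain_prescription(thresholds_db_hl):
    """Return prescribed insertion gain (dB) per frequency: 0.5 * threshold."""
    return {f: 0.5 * hl for f, hl in zip(AUDIOMETRIC_FREQS_HZ, thresholds_db_hl)}

if __name__ == "__main__":
    example_audiogram = [20, 30, 40, 50, 60, 65]   # dB HL, hypothetical values
    for freq, gain in half_gain_prescription(example_audiogram).items():
        print(f"{freq:>5} Hz: {gain:4.1f} dB gain")
```

Under such a prescription, amplified speech in quiet can remain short of full audibility, which is consistent with the estimated quiet AIs of about .4 to .6 noted above.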
Previous results (ASA 2009, Portland; re-plotted below) indicated that for NH listeners the level and low-pass filtering of the speech stimuli affect MR. NH listeners experienced significantly more masking release when speech levels were 52 dBA or higher. Little MR was seen for 32-dB stimuli (approximately AI = .5), even though listeners identified nearly 100% of those stimuli in quiet. As the speech was low-pass filtered more severely, MR also was reduced. In many conditions there appears to be "room" for MR, yet none is seen. To a first approximation, this is similar to what was seen for the better HI listeners in Jin and Nelson (2006), suggesting that the AI of the quiet stimuli (the stimuli in the dips of the noise) to some extent predicted MR. Note, however, that steady-noise performance is near the floor for these NH listeners, which was not the case in Jin and Nelson (2006).

Hypothesis

Two possible explanations have been proposed for reduced masking release in HI listeners: (1) overall reduced AI of the speech stimuli (in quiet), and (2) effects of a reduced cochlear active mechanism (lack of compression, wider auditory filters, increased forward masking). Neither has been well confirmed to date. To begin testing these hypotheses, we tested a large group of young NH listeners, looking for effects of reduced AI and reduced cochlear function. We previously reported the effects of reduced AI on MR (ASA poster, Portland): to a first approximation, reducing AI for NH listeners can reduce masking release to about the same performance as that of the better HI listeners from Jin and Nelson (2006). Expanding on that result, we tested more NH listeners across a wider range of AIs. We fit their performance by AI to functions proposed by Pavlovic for the steady-noise conditions (a sketch of this kind of fit follows below), and we then examined AI by performance for gated noise, using Rhebergen's method for calculating AI in gated noise. We hypothesize that the role of the cochlear active mechanism might be more apparent in gated noise than in steady noise: the cochlea must recover from prior stimulation to listen in the dips of the noise (abnormal forward masking), and reduced spectral resolution would reduce the information available in the dips for integrating audible segments of speech (Jin and Nelson, accepted for publication).
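The poster does not reproduce the specific transfer function that was fitted. A commonly used articulation-index transfer function has the form P(AI) = (1 - 10^(-AI/Q))^N, with Q and N estimated for the speech material. The sketch below fits that generic form to (AI, proportion-correct) pairs; the functional form, starting values, and placeholder data are illustrative assumptions, not the study's actual fit or data.

```python
# Hedged sketch: fit a generic AI-to-intelligibility transfer function,
# P(AI) = (1 - 10**(-AI / Q)) ** N, to (AI, proportion-correct) data.
# The functional form and the placeholder data are illustrative assumptions,
# not the fit or data reported on the poster.
import numpy as np
from scipy.optimize import curve_fit

def transfer_function(ai, q, n):
    """Map articulation index (0..1) to predicted proportion correct."""
    return (1.0 - 10.0 ** (-ai / q)) ** n

# Placeholder data: AI values and observed proportion correct (hypothetical).
ai_values = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8])
prop_correct = np.array([0.05, 0.20, 0.45, 0.65, 0.80, 0.90, 0.97])

params, _ = curve_fit(transfer_function, ai_values, prop_correct,
                      p0=[0.3, 2.0], bounds=([0.01, 0.5], [1.0, 10.0]))
q_fit, n_fit = params
print(f"Fitted Q = {q_fit:.2f}, N = {n_fit:.2f}")
print("Predicted % correct at AI = 0.5:",
      round(100 * transfer_function(0.5, q_fit, n_fit), 1))
```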
Methods

Participants. Approximately 45 young adults with normal hearing sensitivity participated in the study. Each participant heard two lists in each of 35 conditions; for each condition shown in the results here, each data point represents approximately 4 to 8 listeners. In addition, 8 adults aged 45 to 60 years with mild to moderate sensorineural hearing loss participated in the pilot results shown in Figures 4 and 5. They were part of a companion study; their audiograms are shown in the table below. To date, 11 additional adults in the same age range are enrolled in the current study and will complete all of the conditions described below for the NH listeners.

Stimuli. Stimuli were IEEE sentences spoken by one male and one female talker, a subset of the stimuli used in Jin and Nelson (2006). Stimuli were presented in quiet, in steady noise, and in gated noise, using square-wave gated noise at 4, 8, and 16 Hz. The noise had the same long-term spectrum as the speech. Noise was presented at -10, 0, and +10 dB SNR. (The -10 dB SNR condition was not tested at 92 dB.)

Filtering. Three filter conditions were used: full spectrum, low-pass filtered at 1 kHz (8th-order Butterworth), and low-pass filtered at 2 kHz.

Levels. For the NH listeners, speech was presented at a wide range of levels from threshold to discomfort, including 32, 52, 72, and 92 dBA.

Amplification conditions for pilot data: [table not reproduced in this transcript.]

Hearing losses for pilot data: [audiogram table not reproduced in this transcript.]

AI estimates. Estimates of AI were made based on the previous work of Dubno, Dirks, and Schaefer (1989) and Pavlovic (1987). Stimuli were divided into third-octave bands and the audibility of each band was computed. The extended version proposed by Rhebergen et al. (2006) was used to calculate the AI for the gated-noise conditions.
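As a rough illustration of the band-audibility idea described above, an articulation-index-style value can be computed as an importance-weighted sum of per-band audibilities, where each band's audibility is its SNR clipped into a 30-dB range ((SNR + 15)/30, limited to 0..1); the extended approach of Rhebergen et al. (2006) applies this frame by frame and averages, so that speech glimpsed in the dips of a gated masker contributes. The sketch below is a simplified, assumption-laden version (flat band-importance weights, idealized band levels, no level or hearing-loss corrections) and is not the calculation procedure of Dubno et al. (1989), Pavlovic (1987), or Rhebergen et al. (2006).

```python
# Simplified sketch of an articulation-index-style calculation and of a
# frame-averaged "extended" version for fluctuating maskers.
# Band-importance weights and band levels are illustrative assumptions.
import numpy as np

def band_audibility(speech_band_db, noise_band_db):
    """Clip the per-band SNR into a 30-dB range: 0 at -15 dB SNR, 1 at +15 dB SNR."""
    snr = np.asarray(speech_band_db) - np.asarray(noise_band_db)
    return np.clip((snr + 15.0) / 30.0, 0.0, 1.0)

def articulation_index(speech_band_db, noise_band_db, importance):
    """Importance-weighted sum of band audibilities (weights normalized to 1)."""
    importance = np.asarray(importance) / np.sum(importance)
    return float(np.sum(importance * band_audibility(speech_band_db, noise_band_db)))

def extended_index(speech_frames_db, noise_frames_db, importance):
    """Rhebergen-style idea: compute the index in short time frames, then average.
    speech_frames_db, noise_frames_db: arrays of shape (n_frames, n_bands)."""
    per_frame = [articulation_index(s, n, importance)
                 for s, n in zip(speech_frames_db, noise_frames_db)]
    return float(np.mean(per_frame))

if __name__ == "__main__":
    # Hypothetical third-octave band levels (dB) for one condition.
    importance = np.ones(18)           # flat importance weights, for illustration
    speech = np.full(18, 60.0)         # speech band levels
    steady_noise = np.full(18, 60.0)   # steady masker at 0 dB SNR in every band
    print("steady-noise AI:", articulation_index(speech, steady_noise, importance))

    # Gated noise: alternate frames of masker on (0 dB SNR) and masker off
    # (represented here as a very low band level).
    frames_speech = np.tile(speech, (10, 1))
    frames_noise = np.vstack([steady_noise if i % 2 == 0 else np.full(18, 0.0)
                              for i in range(10)])
    print("gated-noise extended AI:",
          extended_index(frames_speech, frames_noise, importance))
```

In this toy example the steady masker at 0 dB SNR yields an index of 0.5, while gating the masker off in half of the frames raises the frame-averaged index toward 0.75, mirroring how audible glimpses in the dips raise the extended AI.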
AI/Performance: NH Steady Noise

For the mid-level stimuli, an orderly fit to the data is seen, but with a wide range of variability in performance near .4 AI. We hypothesize that we see some signs of the role of the cochlear active mechanism overall. Performance at high levels was significantly different from performance at low levels: analysis of variance showed significantly poorer performance for the high-level stimuli than for the moderate-level stimuli. This is consistent with the results of Summers and Cord (2007) and others who have shown decreased performance at high levels for NH listeners.

[Figures 3a and 3b. NH listeners' data plotted as percent correct as a function of AI for all stimuli in quiet and in steady noise. Only moderate levels (52-82 dBA) are shown at left; high-level stimuli are included (shown in purple) at right.]

AI/Performance: NH Gated Noise

Performance of NH listeners in gated noise follows the same function as that for steady noise. Surprisingly, there appears to be no decrement in performance for high-level stimuli, in contrast to Figure 3b (an ANOVA showed no effect of level). We have been investigating indications of possible effects of forward masking, which would appear if faster gate rates produced poorer performance than slower rates (short gaps may be perceptually filled). Statistical analysis is inconclusive to date, indicating only limited evidence of forward masking on MR.

[Figure 4. NH listeners' data plotted as percent correct as a function of AI for all stimuli in gated noise, following the Rhebergen extended SII. In contrast to Figure 3b, data from moderate levels (red) and high levels (green) are completely overlapping.]

Pilot Results: Aided HI Listeners

Pilot data from 9 amplified HI listeners show a shift of the functions to the right (higher AIs are needed for equivalent performance), with large variability. The variability may not be surprising in the mid-range of audibility (see the NH data in Figure 3 above).

[Figures 5a and 5b. HI listeners' data plotted as percent correct as a function of AI for all stimuli, for steady noise (left) and gated noise (right).]

Results and Conclusions

Orderly AI-by-performance functions are seen for IEEE sentences presented at moderate levels. This is the baseline against which data from gated-noise conditions, as well as data from HI listeners, will be compared. High-level stimuli (82-92 dBA) resulted in poorer performance than moderate-level stimuli, in agreement with previous findings from Summers and Cord (2007) and others. NH performance functions for gated noise followed the same function as that for steady noise, and no negative effect of high-level stimuli was seen; this could be at least partially explained if the quiet glimpses of speech drive performance in gated noise. Variation among NH listeners appears to be smaller than among the HI listeners from Jin and Nelson (2006) and from the pilot data obtained here. Evidence of the role of the cochlear mechanism is seen in two ways: high-level stimuli produce poorer performance than low-level stimuli for the same AI, and at high levels, when ceiling effects are avoided, performance in steady noise is reduced compared to mid-level performance; this is true for steady noise but not for gated noise. Faster gating rates may produce poorer performance than slower rates, suggesting a possible role for forward masking, but our results were equivocal (unequal variances and borderline significance levels). Functions obtained from 9 pilot amplified HI listeners were shifted to the right (requiring higher AIs to obtain equivalent performance). Most amplified HI listeners function in the region of .4 to .6 AI, where variability is greatest. Additional data will be needed to determine whether the performance of HI listeners is significantly poorer than that of NH listeners across a wider range of AI values.

Future Directions

We are continuing to study the change in MR with level and AI of the stimuli for HI listeners under the same conditions as for the NH listeners tested here. Our focus will be not so much on masking release per se (the difference between performance in gated and steady noise) but directly on listeners' performance in gated noise. We will quantify the extent to which AI can explain MR across a wide range of levels for HI listeners. Will increasing the AI for HI listeners result in a systematic improvement in performance in both steady and gated noise? Will HI listeners show a significant decrement in performance with higher-level stimuli (as in Figure 3)? Unexplained variance in the performance of HI listeners may be due to individual differences in cochlear function (spectral resolution, recovery from forward masking). Will listeners with greater hearing loss deviate most from the NH functions? Additional tests of peripheral function will confirm or refute the role of the active cochlear mechanism in masking release. Results will provide additional knowledge regarding individual variability among HI listeners, their performance in realistic noise, and the potential for improvement in noise from increasing audibility through amplification.

References

Dubno JR, Dirks DD, Schaefer AB (1989). Stop-consonant recognition for normal-hearing listeners and listeners with high-frequency hearing loss. II: Articulation index predictions. JASA 85.
Jin SH, Nelson PB (2006). Speech perception in gated noise: the effects of temporal resolution. JASA 119.
Jin SH, Nelson PB (accepted for publication, 2010). Interrupted speech perception: the effects of hearing sensitivity and frequency resolution. JASA.
Nelson PB, et al. (2009). Masking release at low sensation levels. Poster presented at the ASA meeting, Portland.
Pavlovic CV (1987). Derivation of primary parameters and procedures for use in speech intelligibility predictions. JASA.
Rhebergen K, Versfeld N, Dreschler W (2006). Extended speech intelligibility index for the prediction of the speech reception threshold in fluctuating noise. JASA 120.
Summers V, Cord M (2007). Intelligibility of speech in noise at high presentation levels: effects of hearing loss and frequency region. JASA 122(2).