Effects of Mispronunciation on Spoken Word Recognition in Cochlear Implant Users & Typically Hearing Listeners

T. Ellis (1), K. Apfelbaum (2), H. L. Rigler (1), M. Seedorff (3), B. McMurray (1,4)
1 Department of Psychological and Brain Sciences, University of Iowa
2 Department of Psychology, Ohio State University
3 Department of Biostatistics, University of Iowa
4 Delta Center

Background

Speech information arrives sequentially, so at early points in time the signal is temporarily ambiguous and listeners immediately begin activating many lexical candidates. In the Visual World Paradigm (VWP), for example, hearing "ba..." temporarily activates candidates such as bakery, basic, barrier, barricade, bait, and baby.
[Figure: schematic VWP fixation curves (% fixations over time) for the target, cohort, rhyme, and unrelated items.]

Speech is also highly variable within and between talkers (e.g., rate, prosody, coarticulation, production errors). Mispronunciations influence the dynamics of lexical competition and are a useful experimental tool for measuring how people respond to variable input. Swingley (2009) used the VWP to study adults' and children's sensitivity to consonant mispronunciations.

People who use cochlear implants (CIs) can perceive speech accurately despite degraded input, and they develop compensatory listening strategies: altered weighting of acoustic cues (Moberly et al., 2014) and a modified time course of speech processing (Farris-Trimble et al., 2014). Adult CI users are slower to fixate the target and make fewer looks overall, and they sustain looks to competitors longer than typically hearing (TH) listeners, suggesting a "wait and hedge bets" compensatory strategy.

Question

Do CI users' compensatory listening strategies simultaneously help them overcome surface variation (mispronunciations) in the speech signal?

Methods

Participants:
- 53 post-lingually deafened CI users (mean age 52.2 y)
- 36 typically hearing, age-matched controls (mean age 52.1 y)
- Subjects were excluded if accuracy fell below 90% on correctly pronounced trials (N = 3)

Auditory stimuli:
- Presented over loudspeakers in a sound-attenuated booth
- 40 CVC English words, plus 4 mispronounced tokens of each word
- 2 degrees of mispronunciation (single feature or multi-feature) x 2 places of mispronunciation (onset or offset)
- 320 randomized trials

Example stimuli:
Target   Onset, single feature   Onset, multi-feature   Offset, single feature   Offset, multi-feature
beach    deach                   heach                  beaj                     beag
fish     sish                    nish                   fiss                     fid
jet      chet                    thet                   jek                      jech

Procedure:
- 4AFC visual world paradigm: the screen showed the target image plus 3 phonetically unrelated images
- Task: hear a token, click on its visual referent
- Fixations to the target were measured for trials on which the correct referent was selected

Statistical Approach:
- Looks to the target as a function of time, for correct trials only
- How much does mispronunciation impede word recognition? Mispronunciation effect: the difference in target fixations between correctly pronounced and mispronounced tokens
- Bootstrapped Difference of Time Series (bdots): estimates the time window over which two time series differ, with appropriate correction for multiple comparisons (Oleson et al., in press); a simplified illustration is sketched below
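To make this pipeline concrete, the following is a minimal sketch in Python. It is not the authors' code and only stands in for, rather than reproduces, the published bdots method. It assumes a long-format fixation table with illustrative column names (subject, group, condition, time_bin, fix_target); it computes per-subject target-fixation curves, the mispronunciation effect, and a subject-level bootstrap that flags time bins where two fixation curves differ, with a Bonferroni-style correction across bins.

    # Minimal sketch of a VWP fixation analysis, assuming a long-format table
    # with one row per subject x trial x time bin:
    #   subject, group ("CI"/"TH"), condition ("correct"/"mispronounced"),
    #   time_bin (msec), fix_target (1 if fixating the target picture, else 0).
    # These column names are illustrative assumptions, not the authors' format.
    import numpy as np
    import pandas as pd

    def fixation_curve(df: pd.DataFrame) -> pd.DataFrame:
        """Proportion of target fixations per subject and time bin."""
        return (df.groupby(["subject", "time_bin"])["fix_target"]
                  .mean()
                  .unstack("time_bin"))      # rows: subjects, cols: time bins

    def mispronunciation_effect(df: pd.DataFrame) -> pd.Series:
        """Correct-minus-mispronounced target fixations, averaged over subjects."""
        correct = fixation_curve(df[df.condition == "correct"]).mean()
        mispron = fixation_curve(df[df.condition == "mispronounced"]).mean()
        return correct - mispron             # indexed by time bin

    def bootstrap_difference_window(curves_a: pd.DataFrame,
                                    curves_b: pd.DataFrame,
                                    n_boot: int = 2000,
                                    alpha: float = 0.05,
                                    seed: int = 1) -> np.ndarray:
        """Flag time bins where two sets of per-subject curves differ.

        A simplified stand-in for bdots: resample subjects with replacement,
        build a bootstrap distribution of the mean difference at each time bin,
        and flag bins whose Bonferroni-corrected confidence interval excludes 0.
        """
        rng = np.random.default_rng(seed)
        a, b = curves_a.to_numpy(), curves_b.to_numpy()
        n_bins = a.shape[1]
        diffs = np.empty((n_boot, n_bins))
        for i in range(n_boot):
            ia = rng.integers(0, len(a), len(a))   # resample subjects in group A
            ib = rng.integers(0, len(b), len(b))   # resample subjects in group B
            diffs[i] = a[ia].mean(axis=0) - b[ib].mean(axis=0)
        lo_q, hi_q = alpha / (2 * n_bins), 1 - alpha / (2 * n_bins)
        lo = np.quantile(diffs, lo_q, axis=0)
        hi = np.quantile(diffs, hi_q, axis=0)
        return (lo > 0) | (hi < 0)           # True where the interval excludes 0

    # Hypothetical usage: where do CI and TH correct-trial curves differ?
    # data = pd.read_csv("fixations.csv")    # hypothetical file
    # ci = fixation_curve(data[(data.group == "CI") & (data.condition == "correct")])
    # th = fixation_curve(data[(data.group == "TH") & (data.condition == "correct")])
    # significant_bins = bootstrap_difference_window(ci, th)

The published bdots approach additionally fits nonlinear curves to each subject's fixations and uses a more refined multiple-comparisons correction, so this sketch only shows the general shape of the computation.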
Results

Effect of CI use:
[Figure: Correctly pronounced condition. Proportion of fixations to the target and cohort over time (msec) for adult CI users vs. adult TH controls.]
- Target fixations are delayed and reduced in CI users relative to TH listeners across conditions, with significant group differences between 252 and 3000 msec.
- CI users are slower than TH listeners to activate the correct target, but both groups show a similar time course of processing for onset and offset mispronunciations, so CI users show incremental processing similar to that of TH listeners.

Effect of mispronunciation:
[Figure: Target fixations by location of mispronunciation. Target fixations over time (msec), time-locked to the location of the mispronunciation, for onset and offset mispronunciations (single feature vs. multi-feature), plotted separately for CI and TH listeners; p values shown: 0.0018 and 0.0021 (onset panels), 0.0008 (offset panel).]

Time windows over which a mispronunciation effect was detected:
Mispronunciation type    Time windows detected (msec)
Single-feature onset     432-896; > 1136
Multi-feature onset      452-976; > 1496
Single-feature offset    > 1804
Multi-feature offset     Not analyzed due to poor fit of the data

Early in processing:
- Onset: CI users show less interference from mispronunciation.
- Offset: CI users show similar interference effects.
Late in processing:
- Onset: CI users show more interference.
- Offset: CI users show less interference.

Discussion & Future Directions

CI users' processing delay allows more information to accumulate before they make strong commitments, a more flexible strategy for dealing with uncertain input. As a result, they show less effect of mispronunciation and can recover better in some circumstances.

Future analyses will examine within-group differences in the time course of processing. Preliminary results suggest an unexpected advantage for CI users without acoustic hearing, especially later during processing (p = 0.001).

References

Farris-Trimble, A., McMurray, B., Cigrand, N., & Tomblin, J. B. (2014). The process of spoken word recognition in the face of signal degradation. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 308–327. doi:10.1037/a0034353
Moberly, A. C., Lowenstein, J. H., Tarr, E., Caldwell-Tarr, A., Welling, D. B., Shahin, A. J., & Nittrouer, S. (2014). Do adults with cochlear implants rely on different acoustic cues for phoneme perception than adults with normal hearing? Journal of Speech, Language, and Hearing Research, 57(3), 566–582. doi:10.1044/2014
Oleson, J. J., Cavanaugh, J. E., McMurray, B., & Brown, G. (in press). Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm. Statistical Methods in Medical Research.
Swingley, D. (2009). Onsets and codas in 1.5-year-olds' word recognition. Journal of Memory and Language, 60, 252–269. doi:10.1016/j.jml.2008.11.003

E-mail: tyler-p-ellis@uiowa.edu
2015 ASHA Convention