Speech and Language Processing Differences Between Impaired and Unimpaired Populations
Julia R. Drouin, SURF Award Recipient, Summer 2013
Dr. Emily Myers, PhD, Department of Speech, Language, and Hearing Sciences; Dr. Rachel Theodore, PhD, Department of Speech, Language, and Hearing Sciences
College of Liberal Arts and Sciences

INTRODUCTION
In a dynamic world where relating to others is an important part of life, being able to speak to and understand other people is essential for effective communication. The speech signal we encounter, however, is highly variable.

Examples of speech signal variation:
- Boston accent 'r'-dropping ("car" -> "cah")
- Southern accent vowel breaking ("cat" -> "cayut")
- A speaker with a lisp ("sing" -> "thing")
- Speaking rate (people tend to talk faster on the East Coast)

Given all of this variation in the speech signal, how do we maintain a stable perception of incoming speech and continue to understand our talker? Kraljic and Samuel (2005) proposed two accounts of how this occurs.

Two theories of learning new talkers:
1. Listeners hold multiple representations, one for every speaker they encounter.
2. Listeners shift their phonemic boundaries after exposure to an ambiguous phoneme.

[Poster schematic: a listener's speech sound category before and after exposure to a new talker (Speakers 1-3).]

Previous studies have shown that listeners exposed to a new talker adjust their speech sound categories to match that speaker (Kraljic & Samuel, 2005). While this is readily observed in a non-disordered population, it is unknown whether the same process occurs in a disordered population, such as persons with aphasia.

What is aphasia? Aphasia is a loss of language that usually follows damage to the left hemisphere of the brain. People with aphasia show a variety of symptoms and different rates of recovery depending on the type and severity of the aphasia. Dunton et al. (2011) tested sentence comprehension in participants with aphasia who heard speech in familiar and unfamiliar accents; the individuals with aphasia were less accurate than the non-disordered group for both accent types. The present study examines this difficulty with perceptual retuning in persons with aphasia.

METHODS
Participants: We expect to run approximately 32 participants (16 persons with aphasia, 16 unimpaired controls). Participants will be monolingual English speakers between the ages of 18 and 75 with no known hearing impairments.

Screening:
1. Hearing screening: all participants (experimental and control) complete a standard hearing screening to ensure they can hear the stimuli in the experiment.
2. Western Aphasia Battery: only the participants with aphasia complete this test, which assesses language function and determines the type and severity of aphasia; this information is used when interpreting their results.

Tasks (a sketch of the exposure design and identification continuum appears after this section):
1. Lexical decision task (exposure): Participants hear 50 words (e.g., bullying, document, parakeet) and 50 non-words (e.g., klogodar, ryligal, wonimtic) in random order and decide whether each item is a word or a non-word. Twenty of the word tokens are "critical words" containing a word-medial ambiguous sound (50% /s/, 50% /sh/). During this exposure phase we expect listeners to retune to their talker and adjust previously held phoneme boundaries to accommodate the speaker.
2. Phoneme identification task (test): Participants hear a continuum of tokens in the same voice used during exposure, ranging from 30% /s/ (70% /sh/) to 80% /s/ (20% /sh/). Each token is inserted into the frame "a?i" and played 10 times in random order. Participants decide whether each token sounds more like "asi" or "ashi." We expect participants to classify the ambiguous tokens in line with the type of exposure they received (Myers & Mesite, 2012).

Example critical word tokens ("?" marks the ambiguous sound):
  /s/ group:  era?er, coli?eum, epi?ode
  /sh/ group: ambi?ion, flouri?ing, publi?er

Counterbalancing of exposure (e.g., "rehear?al" is ambiguous between "rehearsal" and "rehearshal"):
            Talker 1 (female)   Talker 2 (male)
  Group 1   /s/-biased items    /sh/-biased items
  Group 2   /sh/-biased items   /s/-biased items
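To make the exposure design and the identification continuum concrete, the sketch below lays them out as simple data structures and assembles one randomized identification block. It is a minimal illustration written for this summary: the group labels, trial dictionaries, and the build_identification_block helper are hypothetical conveniences, not the stimulus-presentation code used in the study.

```python
# Minimal sketch (not the authors' code): the counterbalanced exposure design and
# the /s/-/sh/ identification continuum described in METHODS. Names, labels, and
# data structures here are illustrative assumptions.

import random

# Counterbalancing: which talker carries the ambiguous "?"-sound in /s/-biased
# versus /sh/-biased lexical contexts for each exposure group.
EXPOSURE_DESIGN = {
    "group_1": {"female_talker": "/s/-biased", "male_talker": "/sh/-biased"},
    "group_2": {"female_talker": "/sh/-biased", "male_talker": "/s/-biased"},
}

# Example critical exposure words; "?" stands for the 50%/50% ambiguous token.
CRITICAL_WORDS = {
    "/s/-biased": ["era?er", "coli?eum", "epi?ode"],
    "/sh/-biased": ["ambi?ion", "flouri?ing", "publi?er"],
}

def build_identification_block(repetitions: int = 10) -> list[dict]:
    """Build the phoneme identification block: each 'a?i' continuum step
    (30%-80% /s/) repeated `repetitions` times, in random order."""
    steps = [30, 40, 50, 60, 70, 80]  # percent /s/ in the ambiguous token
    trials = [
        {"frame": "a?i", "percent_s": step, "response_options": ("asi", "ashi")}
        for step in steps
        for _ in range(repetitions)
    ]
    random.shuffle(trials)
    return trials

if __name__ == "__main__":
    block = build_identification_block()
    print(f"{len(block)} identification trials")  # 6 steps x 10 repetitions = 60
    print(block[0])
```

Running the script prints a 60-trial identification block (6 continuum steps x 10 repetitions), matching the test phase described above.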
EXPECTED RESULTS
Figures 1 and 2 (hypothesized data) plot the proportion of "asi" versus "ashi" responses at each continuum step (30% /s/ to 80% /s/) for the /s/-trained and /sh/-trained groups; Figure 1 shows the predicted pattern for unimpaired listeners and Figure 2 the predicted pattern for listeners with aphasia. (A toy simulation of these hypothesized patterns appears after the references.)

DISCUSSION
Differences between populations: We expect the unimpaired group to adjust their speech sound categories to accommodate their speaker. Figure 1 shows the pattern we typically expect in an unimpaired population: participants are more likely to categorize ambiguous sounds in line with the type of training they received. Figure 2 shows hypothesized data for the impaired population: participants in both training groups are no more likely to label an ambiguous sound as "asi" or "ashi" on the basis of training alone.

Differences within the impaired population: We also expect differences within our population of language-impaired individuals. Persons with aphasia have deficits in both language production and comprehension; however, they may show more difficulty in one area than the other.
- Fluent aphasia: This group typically has more difficulty with comprehension than with production. We expect difficulty adapting to a variable speech signal and anticipate results similar to Figure 2. This would predict that people with fluent aphasia have particular difficulty adapting to non-standard speech or accents.
- Non-fluent aphasia: This group has relatively intact comprehension. If we see results similar to Figure 2 (impaired), this may suggest that brain damage can have lasting effects on speech and language processing. However, given this group's good comprehension, we may instead see results similar to Figure 1 (unimpaired), suggesting plasticity in the brain and its capacity to reorganize in order to preserve speech and language processing.

REFERENCES
Dunton, J., Bruce, C., & Newton, C. (2011). Investigating the impact of unfamiliar speaker accent on auditory comprehension in adults with aphasia. International Journal of Language & Communication Disorders, 46(1).
Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 1-8.
Kraljic, T., & Samuel, A. G. (2005). Perceptual learning for speech: Is there a return to normal? Cognitive Psychology, 51.
Kraljic, T., & Samuel, A. G. (2007). Perceptual adjustments to multiple speakers. Journal of Memory and Language, 56.
Myers, E., & Mesite, L. (2012). Neural systems underlying lexically-biased perceptual learning in speech. Paper presented at the Neurobiology of Language Meeting, San Sebastian, Spain.
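To spell out the predicted patterns in Figures 1 and 2, the toy simulation below models each identification function as a logistic curve. The boundary and slope values are arbitrary assumptions invented for illustration, not real or expected data: for unimpaired listeners the category boundary is assumed to shift with exposure group, while for listeners with aphasia both groups keep the same boundary, so their functions overlap.

```python
# Toy simulation (illustrative only, no real data): the hypothesized patterns in
# Figures 1 and 2. For unimpaired listeners the category boundary shifts with
# exposure group; for listeners with aphasia it does not. Parameter values are
# arbitrary assumptions chosen only to show the predicted shape of the curves.

import math

STEPS = [30, 40, 50, 60, 70, 80]  # percent /s/ in the "a?i" token

def p_asi(percent_s: float, boundary: float, slope: float = 0.15) -> float:
    """Logistic probability of an 'asi' (i.e., /s/) response."""
    return 1.0 / (1.0 + math.exp(-slope * (percent_s - boundary)))

def simulate(population: str) -> dict:
    """Return predicted proportion of 'asi' responses per training group."""
    if population == "unimpaired":
        # Retuning predicted: /s/-exposed listeners accept less /s/ evidence,
        # /sh/-exposed listeners require more, so the boundaries pull apart.
        boundaries = {"/s/-trained": 48.0, "/sh/-trained": 62.0}
    else:
        # No retuning predicted: both training groups keep the same boundary.
        boundaries = {"/s/-trained": 55.0, "/sh/-trained": 55.0}
    return {group: [round(p_asi(s, b), 2) for s in STEPS]
            for group, b in boundaries.items()}

if __name__ == "__main__":
    for population in ("unimpaired", "aphasia"):
        print(population, simulate(population))
```

Printing the two dictionaries shows the /s/-trained curve sitting above the /sh/-trained curve only in the unimpaired case, which is the qualitative difference Figures 1 and 2 are meant to capture.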