Holes in the Brain Help Us Sort Out Sounds
I. The Brain’s Ability to Sort Out Sounds
1. Speech sounds are categorized.
2. Misinterpretations are possible when certain vowel sounds are acoustically similar to each other.
A. This article uses the vowels /ee/ vs. /I/ as its example.
B. If we incorrectly hear the vowel /ee/ (as in beat) instead of /I/ (as in bit) when the person is actually saying bit, we will interpret the word as beat and misinterpret the speaker’s message.
C. Basically, speech sounds occupy neighboring regions of acoustic space, and when someone utters a non-ideal version of a speech sound, it falls near the boundary between categories of that acoustic space.
D. Categorical perception: we are perceptually more sensitive to between-category differences (when a sound falls near the boundary between the /I/ and /ee/ categories) than to within-category differences (when a sound falls near the center of the /ee/ category, i.e., a good /ee/ speech sound).
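The idea of categorical perception in point D can be sketched as a toy model (a hypothetical illustration, not the article's actual method): sounds sit on a one-dimensional acoustic axis from 0.0 (a clear /I/) to 1.0 (a clear /ee/), with an assumed category boundary at 0.5, and two sounds are easy to tell apart only when they fall on opposite sides of that boundary.

```python
# Toy sketch of categorical perception on a 1-D acoustic axis.
# The 0.5 boundary and the 0-to-1 scale are illustrative assumptions.

def category(sound: float) -> str:
    """Label a sound by which side of the category boundary it falls on."""
    return "ee" if sound >= 0.5 else "I"

def discriminable(a: float, b: float) -> bool:
    """In this toy model, two sounds are perceptually distinct only if
    they are heard as different categories."""
    return category(a) != category(b)

# A pair straddling the boundary is discriminated...
print(discriminable(0.45, 0.55))  # True
# ...but an equally spaced pair inside the /ee/ category is not.
print(discriminable(0.80, 0.90))  # False
```

The equal acoustic spacing of the two pairs is the point: only the pair that crosses the category boundary is heard as different.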
II. First Experiment
1. Use of functional magnetic resonance imaging (fMRI).
A. fMRI is used to measure neural activity in the auditory cortical areas of the brain.
B. Human subjects listened to either a good /ee/ speech sound or a borderline /ee/ speech sound.
2. There was less activation in the auditory cortical areas when the subjects heard the good /ee/ speech sound.
3. There was more activity in the primary auditory cortex of the right hemisphere when the subjects heard the borderline /ee/.
4. The results of this experiment show that our brains dedicate more neural resources to processing uncertain sounds.
III. Second Experiment
1. Brain activity was measured for non-speech sounds, before and after training.
A. Subjects underwent one week of training with the sounds so they would learn a new category of sounds.
B. After learning the new sound category, there was a decrease in brain activation for sounds from the center of the category.
C. The subjects also became worse at discriminating sounds from the center of the newly learned category.
IV. Computer Simulation of Speech Sounds
1. The author, Frank H. Guenther, created a neural network model of the results.
A. It revealed that categorization training leads to a decrease in the number of auditory cortical cells that become active when a central example of a behaviorally relevant sound category is heard.
B. Infants exposed to the sounds of their native language develop more cells in their auditory cortex dedicated to sounds that fall between phoneme categories than to sounds that fall near the center of a phoneme category.
C. Once the model was exposed to English speech sounds, holes with relatively few cells appeared in the auditory cortical representation for the parts of acoustic space corresponding to the category centers, with peaks between the sound categories. The computer simulation used the English speech sounds /r/ and /l/.
D. The computer model was then trained with Japanese speech sounds.
1. Only one valley developed, because Japanese has only one such phoneme, the /r/, which falls in the same acoustic space as the two English phonemes /r/ and /l/.
2. The model indicates why a native Japanese speaker learning English may have difficulty distinguishing between the English phonemes /r/ and /l/.
E. For a native Japanese subject, both English sounds fall into the same hole in the subject’s auditory cortical map.
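The "holes and peaks" idea above can be sketched with a minimal population-coding toy (assumed cell positions and tuning widths, not Guenther's actual model): after categorization training, cells cluster near the category boundary and thin out at the category centers, so a central sound evokes much less total activity than a boundary sound.

```python
# Toy population-coding sketch of post-training "holes" at category centers.
# Cell positions, tuning width, and the 0-to-1 acoustic axis are assumptions.
import math

def tuning_response(cell_center: float, sound: float, width: float = 0.1) -> float:
    """Gaussian tuning curve of one auditory cortical cell."""
    return math.exp(-((sound - cell_center) ** 2) / (2 * width ** 2))

# Hypothetical cell placement after training on two categories whose
# centers sit at 0.2 and 0.8: cells cluster near the boundary (0.5).
trained_cells = [0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65]

def population_activity(sound: float, cells: list) -> float:
    """Total activity the sound evokes across the cell population."""
    return sum(tuning_response(c, sound) for c in cells)

boundary_sound = 0.5  # ambiguous sound near the category boundary
central_sound = 0.2   # a "good" central example of one category

# The boundary sound recruits many well-tuned cells; the central sound
# falls into a "hole" with few nearby cells and evokes far less activity.
print(population_activity(boundary_sound, trained_cells))
print(population_activity(central_sound, trained_cells))
```

With fewer cells responding at the category center, small acoustic changes there barely alter the population response, which is the model's account of why within-category discrimination gets worse after training.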
Speak to My Right Ear; Sing to My Left Ear
1. New research reveals that the left and right ears process sound differently.
A. Knowledge that the left and right halves of the brain process sound differently has been around for a while.
B. Until recently, the differences were believed to stem from cellular properties in each brain hemisphere.
2. Dr. Yvonne Sininger studied hearing in more than 3,000 newborns.
A. The researchers focused on tiny amplifiers located in the outer hair cells of the inner ear.
B. These cells contract and expand to amplify sound vibrations, convert the vibrations into neural signals, and send them to the brain.
C. Researchers used tiny probes to send out two different types of sounds.
D. Results: speech-like clicks triggered greater amplification in the right ear, while music-like sustained tones were amplified more by the left ear.
E. “Our findings demonstrate that auditory processing starts in the ear before it is ever seen in the brain,” stated Associate Professor Barbara Cone-Wesson.
F. Finally, it is now known that at birth the ear is structured to differentiate between different types of sounds and send each to the correct place in the brain.
Works Cited: Studies Related to Processing Sound and the Brain
Speak to My Right Ear, Sing to My Left. 2004. http://abc.net.au/science/news/stories/s1197972.htm
Guenther, Frank H. Holes in the Brain Help Us Sort Out Sounds. 2002. http://www.acoustics.org/press/143rd/Guenther.html
Zevin, Jason D., and Bruce D. McCandliss. Dishabituation of the BOLD Response to Speech Sounds. 2005. http://www.behavioralandbrainfunctions.com
Using Spatial Illusion To Learn How The Brain Processes Sound. 1999. http://www.sciencedaily.com