The Neuroscience of Language

What is language? What is it for?
– Rapid, efficient communication
– (As such, other kinds of communication might be called "language" for our purposes and might share underlying neural mechanisms)
Two broad but interacting domains:
– Comprehension
– Production

Speech comprehension
– Is an auditory task (but stay tuned for the McGurk effect!)
– Is also a selective attention task: auditory scene analysis
– Is a temporal task: we need a way to represent both frequency (pitch) and time when talking about language -> the speech spectrogram

Speech comprehension is also a selective attention task – auditory scene analysis:
– Which streams of sound constitute speech?
– Which one stream constitutes the to-be-comprehended speech?
– Not a trivial problem, because sound waves combine prior to reaching the ear
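The point that sound waves combine before reaching the ear can be illustrated numerically: the eardrum receives a single summed pressure waveform, and the individual streams must be recovered by analysis. A minimal sketch (the frequencies and amplitudes are arbitrary illustrative choices):

```python
import numpy as np

fs = 16000                          # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)       # half a second of samples

# Two independent "streams": a 200 Hz voice-like tone and a 1 kHz distractor
voice = 0.6 * np.sin(2 * np.pi * 200 * t)
distractor = 0.3 * np.sin(2 * np.pi * 1000 * t)

# The ear receives only their sum -- the separate streams are physically gone
mixture = voice + distractor

# Recovering them requires analysis (here, by frequency), which is the kind
# of problem auditory scene analysis must solve
spectrum = np.abs(np.fft.rfft(mixture))
freqs = np.fft.rfftfreq(len(mixture), 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)                        # the two component frequencies survive in the spectrum
```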

Speech comprehension is a temporal task:
– Speech is a time-varying signal
– It is meaningless to freeze a word in time (as you can with an image)
– We need a way to consider both frequency (pitch) and time when talking about language -> the speech spectrogram
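A spectrogram is just a short-time Fourier transform: slice the signal into overlapping windows and take the spectrum of each, giving frequency on one axis and time on the other. A minimal sketch (window and hop sizes are arbitrary illustrative choices):

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Short-time Fourier magnitude: rows = frequency bins, cols = time frames."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frames.append(np.abs(np.fft.rfft(signal[start:start + win_len] * window)))
    return np.array(frames).T, np.fft.rfftfreq(win_len, 1 / fs)

# A tone that rises in pitch over time -- frozen at any instant it is
# just one frequency; only the time axis reveals the sweep
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
chirp = np.sin(2 * np.pi * (300 + 200 * t) * t)

S, freqs = spectrogram(chirp, fs)
first_peak = freqs[S[:, 0].argmax()]     # dominant frequency, first frame
last_peak = freqs[S[:, -1].argmax()]     # dominant frequency, last frame
print(first_peak, last_peak)             # the upward sweep is visible over time
```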

What forms the basis of spoken language? Phonemes Phonemes strung together over time with prosody

What forms the basis of spoken language? Phonemes = smallest perceptual unit of sound Phonemes strung together over time with prosody

What forms the basis of spoken language? Phonemes = smallest perceptual unit of sound Phonemes strung together over time with prosody = the variation of pitch and loudness over the time scale of a whole sentence

What forms the basis of spoken language?
– Phonemes = smallest perceptual unit of sound
– Phonemes strung together over time with prosody = the variation of pitch and loudness over the time scale of a whole sentence
– To visualize these we need slick acoustic analysis software…which I've got
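Prosody, as defined above, is just two contours extracted over the course of an utterance: loudness (frame-wise RMS energy) and pitch (here estimated from the autocorrelation peak). This is a toy sketch of what acoustic analysis software computes, run on a synthetic "declarative sentence" with falling pitch and fading loudness; all the signal parameters are illustrative assumptions:

```python
import numpy as np

def prosody_contours(signal, fs, frame_len=800, hop=400):
    """Per-frame loudness (RMS) and pitch (autocorrelation-based) estimates."""
    loudness, pitch = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        loudness.append(np.sqrt(np.mean(frame ** 2)))
        # The first strong autocorrelation peak after lag 0 gives the period
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag_min = fs // 500                 # ignore pitches above 500 Hz
        lag = lag_min + ac[lag_min:].argmax()
        pitch.append(fs / lag)
    return np.array(loudness), np.array(pitch)

# Toy utterance: pitch falls from 220 Hz to 110 Hz while amplitude fades,
# roughly the contour of a declarative statement
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f0 = 220 - 110 * t
phase = 2 * np.pi * np.cumsum(f0) / fs
utterance = (1.0 - 0.7 * t) * np.sin(phase)

loud, f0_est = prosody_contours(utterance, fs)
print(f0_est[0], f0_est[-1])    # falling pitch contour
print(loud[0], loud[-1])        # fading loudness contour
```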

What forms the basis of spoken language? The auditory system is inherently tonotopic
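Tonotopy means frequency is laid out as a place code along the basilar membrane and preserved up the pathway. The place-to-frequency map for the human cochlea is often approximated by the Greenwood function; the parameter values below are the standard human fit, but treat the sketch as an approximation rather than anatomy:

```python
def greenwood(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x
    (0 = apex, 1 = base), using the Greenwood map for humans."""
    return A * (10 ** (a * x) - k)

# Frequency as a place code: low frequencies at the apex, high at the base
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f}: {greenwood(x):8.0f} Hz")
```

Note the logarithmic layout: equal steps along the membrane correspond to roughly equal ratios of frequency, which is also why spectrograms of speech are often plotted on a log frequency axis.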

Is speech comprehension therefore an image matching problem? If your brain could just match the picture on the basilar membrane with a lexical object in memory, speech would be comprehended
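If comprehension really were image matching, it could be implemented as correlating the incoming spectrogram against stored templates. The sketch below shows that naive scheme working on a toy, hypothetical "lexicon" of random templates; the following slides explain why it fails for real speech:

```python
import numpy as np

# Hypothetical lexicon: each word stored as one fixed spectrogram template
rng = np.random.default_rng(0)
lexicon = {word: rng.random((64, 32)) for word in ("bat", "pat", "cat")}

def match_word(spec, lexicon):
    """Naive template matching: return the stored word whose template
    correlates best with the incoming spectrogram."""
    def corr(a, b):
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(lexicon, key=lambda w: corr(spec, lexicon[w]))

# A mildly noisy rendition of "pat" still matches its own template
noisy = lexicon["pat"] + 0.1 * rng.standard_normal((64, 32))
print(match_word(noisy, lexicon))   # pat
```

This only works because each word here has exactly one canonical pattern and arrives pre-segmented, which is precisely what real speech does not provide.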

Problems facing the brain
– Acoustic–phonetic invariance would require each phoneme to match one and only one pattern in the spectrogram
– This is not the case! For example, /d/ followed by different vowels:

Problems facing the brain
– The segmentation problem: the stream of acoustic input is not physically segmented into discrete phonemes, words, phrases, etc.
– Silent gaps don't always indicate (aren't perceived as) interruptions in speech

Problems facing the brain
– The segmentation problem: the stream of acoustic input is not physically segmented into discrete phonemes, words, phrases, etc.
– Conversely, a continuous speech stream is sometimes perceived as having gaps
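The segmentation problem can be made concrete: the obvious strategy of splitting speech at silences fails in both directions. In the toy sketch below (all signals synthetic and illustrative), two "words" run together produce no acoustic gap at all, while a stop-consonant closure inside a single word produces a silent gap that is not a word boundary:

```python
import numpy as np

def silence_mask(signal, fs, frame=0.02, thresh=0.01):
    """Mark each frame as sound (True) or silence (False) by RMS energy."""
    n = int(frame * fs)
    rms = np.array([np.sqrt(np.mean(signal[i:i + n] ** 2))
                    for i in range(0, len(signal) - n + 1, n)])
    return rms > thresh

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
tone = np.sin(2 * np.pi * 220 * t)          # stand-in for a spoken "word"

# Two words run together with no pause: silence detection finds no boundary
run_together = np.concatenate([tone, tone])
# One word with a 50 ms stop-closure inside it: a spurious "boundary" appears
with_closure = np.concatenate([tone, np.zeros(int(0.05 * fs)), tone])

print(silence_mask(run_together, fs).all())      # True: no gap between words
print(silence_mask(with_closure, fs).all())      # False: gap inside one word
```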

How (where) does the brain solve these problems?
– Note that the brain can't know that incoming sound is speech until it has already analyzed it to some degree
– The signal chain therefore goes from non-specific -> specific
– Neuroimaging has to take the same approach to track down speech-specific regions

Functional Anatomy of Speech Comprehension
– The low-level auditory pathway is not specialized for speech sounds
– Both speech and non-speech sounds activate primary auditory cortex (bilateral Heschl's gyrus) on the top of the superior temporal gyrus

Functional Anatomy of Speech Comprehension
Which parts of the auditory pathway are specialized for speech? Binder et al. (2000) – fMRI – presented several kinds of stimuli:
– white noise, pure tones: non-word-like acoustical properties
– non-words, reversed words: word-like acoustical properties but no lexical associations
– real words: word-like acoustical properties and lexical associations

Functional Anatomy of Speech Comprehension Relative to “baseline” scanner noise – Widespread auditory cortex activation (bilaterally) for all stimuli – Why isn’t this surprising?

Functional Anatomy of Speech Comprehension Statistical contrasts reveal specialization for speech-like sounds – superior temporal gyrus – Somewhat more prominent on left side
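A "statistical contrast" here means comparing voxelwise responses between two conditions and keeping the voxels where the difference is reliable. A toy sketch with synthetic data (the voxel counts, effect size, and threshold are all illustrative assumptions, not values from Binder et al.):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 1000

# Synthetic per-trial responses to two stimulus classes at each voxel
tones = rng.standard_normal((n_trials, n_voxels))
words = rng.standard_normal((n_trials, n_voxels))
words[:, :50] += 1.5        # 50 voxels respond more strongly to words

# Voxelwise two-sample t statistic: mean difference / pooled standard error
diff = words.mean(0) - tones.mean(0)
se = np.sqrt(words.var(0, ddof=1) / n_trials + tones.var(0, ddof=1) / n_trials)
t = diff / se

# Threshold the map: surviving voxels form the "words > tones" contrast
significant = np.flatnonzero(t > 3.5)
print(len(significant))     # roughly the 50 speech-selective voxels
```

Real analyses add spatial smoothing, a hemodynamic response model, and corrections for testing many thousands of voxels, but the logic is this subtraction.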

Functional Anatomy of Speech Comprehension
Further, highly sensitive contrasts to identify specialization for words relative to other speech-like sounds revealed only a few small clusters of voxels, in Brodmann areas:
– area 39
– areas 20, 21 and 37
– areas 46 and 10

Next time we’ll discuss Speech production Aphasia Lateralization