Presentation transcript:

Hearing in Time
Slides used for talk to accompany "Roger go to yellow three …"
Sarah Hawkins* and Antje Heinrich**
*Centre for Music and Science, University of Cambridge
**Medical Research Council Institute of Hearing Research, Nottingham
The material in this document is copyright and is to be used only for educational purposes.

What have performers learned so far?
People often understand remarkably little of sung text, and many go to listen to the music without caring about the words. But there are times when you want to hear the words, and people whose hearing is impaired, or who are not native speakers of the language, may feel especially disadvantaged.

Auditory streaming
To understand a single voice, listeners must correctly group together the sounds that come from a single source (singer). To understand polyphonic texts, listeners must distinguish each of the individual streams that make up the set of 'competing' voices. Rhythm and relative pitch are important in this.

Auditory streaming: one sound source or two?
Adapted from Bob Carlyon's website. This is a demonstration of one factor influencing whether the brain processes different pitches as coming from one place/source or two. Click on the upper loudspeaker icon: most people hear the two pitches as coming from a single sound source. Click on the lower loudspeaker icon: by the time the clip finishes, most people hear the two pitches as coming from two different sound sources.
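
The original demo is interactive, but stimuli of this kind are easy to reconstruct. Below is a minimal Python sketch (assuming numpy is installed; the frequencies and durations are illustrative guesses, not the values used in Carlyon's demo): two pure tones alternate rapidly, once with a small pitch separation, which is usually heard as one stream, and once with a large separation, which tends to split into two streams as the sequence continues.

```python
import wave
import numpy as np

RATE = 44100  # samples per second

def tone(freq_hz, dur_s=0.1, amp=0.3):
    """Pure tone with 5 ms raised-cosine ramps to avoid onset/offset clicks."""
    t = np.arange(int(RATE * dur_s)) / RATE
    x = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * RATE)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def alternating_sequence(f_low, f_high, n_pairs=40):
    """Low and high tones alternating with no gap: L H L H L H ..."""
    pair = np.concatenate([tone(f_low), tone(f_high)])
    return np.tile(pair, n_pairs)

def save_wav(path, signal):
    """Write a mono 16-bit WAV file using only the standard library."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes((signal * 32767).astype(np.int16).tobytes())

# ~1 semitone apart: most listeners hear a single warbling stream.
save_wav("one_stream.wav", alternating_sequence(400, 424))
# ~9 semitones apart: most listeners eventually hear two separate streams.
save_wav("two_streams.wav", alternating_sequence(400, 672))
```

Listening to the two files back to back gives a rough feel for the effect the slide describes; the longer sequence matters because the two-stream percept tends to build up over time.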

The power of unison (what have auditory scientists learned so far?)
[Figure: percent correct as a function of the number of distractor voices, for 1, 2 and 3 target voices]
Unison increases intelligibility even in the absence of distractor voices. When there are no distractors, intelligibility is good (90% or better). Increasing the number of distractor voices decreases intelligibility.

The power of unison (what have auditory scientists learned so far?)
[Figure: percent correct as a function of the number of distractor voices, for 1, 2 and 3 target voices]
Unison increases intelligibility even in the absence of distractor voices. To ensure or maintain intelligibility, have AT LEAST as many target voices singing in unison as there are distractors.

The power of unison (what have auditory scientists learned so far?)
[Figure: percent correct as a function of the number of distractor voices, for 1, 2 and 3 target voices; separate panels for native and non-native English speakers]
The same patterns are seen for native and non-native speakers, but non-native speakers tend to be more disadvantaged in the harder conditions.

Language proficiency (what have auditory scientists learned so far?)
[Figure: percent correct for native vs. non-native listeners in six conditions, where T = number of target voices and D = number of distractor voices: 1T 0D, 1T 1D, 1T 2D, 2T 2D, 2T 3D, 3T 3D]
Compared with native speakers, the biggest difference between language groups occurs with ONE target voice. Non-native speakers have disproportionate trouble when there are more distractor voices than target voices, and when there is only one target and one distractor.

Still to be explored
Is the extra intelligibility of unison simply the result of the particular skill of experienced singers? Is unison more intelligible because it is louder, or because the sound comes from more spatial locations? Also still to be investigated:
- intelligibility of combinations of male vs. female voices (vs. that of individuals)
- the singers' physical locations relative to one another in a room
- particular types of words and music (lab experiments)

How does the brain process auditory streams and speech in noisy environments?

How does the brain process auditory streams and speech in noisy environments?
[Image: brain schematic showing the 4 cortical lobes; source: scans/brunswick/brunswick04.php]
As there are no imaging studies on the intelligibility of sung speech, we will use results from studies of spoken speech.

How does the brain process auditory streams and speech in noisy environments?
[Image: flipped version of the same brain schematic, with the Superior, Middle and Inferior Temporal Gyri labelled; source: scans/brunswick/brunswick04.php]
The arrows point to the two lobes most intimately involved in sound processing: the temporal lobe and the frontal lobe. The area marked in red is the primary auditory cortex, the first (earliest) cortical area to process sound. It responds to ALL sounds and reacts to any acoustic differences between them.

How does the brain process auditory streams and speech in noisy environments?
[Figure from Peelle JE, Johnsrude IS & Davis MH (2010). Hierarchical processing for speech in human auditory cortex and beyond. Frontiers in Human Neuroscience, 4. Labels: word comprehension; sentence comprehension; semantic representations?; action/production?; reconstruct speech]
If the task is speech comprehension, then other areas in addition to primary auditory cortex are involved. These areas provide a consistent neural response to speech (words, phonemes) regardless of its acoustic variability (who says it and what the acoustic environment is). They include large portions of the superior temporal gyrus, both anterior and posterior. Both neuroimaging and lesion data suggest that single-word comprehension activates posterior areas in both hemispheres, whereas connected speech and sentence comprehension activates anterior areas, mainly in the left hemisphere. Left inferior frontal areas are densely connected to temporal auditory areas, and it has been suggested that they "recover" meaning when the speech is difficult to hear.

How does the brain process auditory streams?
Stream segregation is typically studied with tones: tones that are similar in frequency are heard as a single stream, whereas tones that differ in frequency are heard as two different streams. The bigger the frequency separation between the A and B tones, the more likely it is that two streams are heard, and the greater the activation in primary auditory cortex (the red blob). That activation for stream segregation is seen mostly in primary auditory cortex may have to do with the use of tones as stimuli.
Bee MA & Micheyl C (2008). The cocktail party problem: What is it? How can it be solved? And why should animal behaviourists study it? J Comparative Psych, 122.
Wilson EC, Melcher JR, Micheyl C, Gutschalk A & Oxenham AJ (2007). Cortical fMRI activation to sequences of tones alternating in frequency: Relationship to perceived rate and streaming. J Neurophysiol, 97.
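
To make the tone paradigm concrete, here is a minimal Python sketch (assuming numpy; the frequencies, durations and separations are illustrative choices, not the parameters of the studies cited above) of the widely used ABA_ "galloping" triplet sequence. With a small A-B separation the triplets are heard as one galloping stream; with a large separation they tend to split into two separate even rhythms.

```python
import wave
import numpy as np

RATE = 44100
DUR = 0.1  # tone duration in seconds

def tone(freq, amp=0.3):
    """Pure tone of duration DUR with short linear ramps to avoid clicks."""
    t = np.arange(int(RATE * DUR)) / RATE
    x = amp * np.sin(2 * np.pi * freq * t)
    ramp = int(0.005 * RATE)
    x[:ramp] *= np.linspace(0, 1, ramp)
    x[-ramp:] *= np.linspace(1, 0, ramp)
    return x

def aba_triplets(f_a, semitone_sep, n_triplets=25):
    """A B A _ repeated: B sits `semitone_sep` semitones above A."""
    f_b = f_a * 2 ** (semitone_sep / 12)
    gap = np.zeros(int(RATE * DUR))  # silent slot completing each triplet
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), gap])
    return np.tile(triplet, n_triplets)

# Compare a small and a large A-B separation (in semitones).
for sep in (2, 9):
    sig = aba_triplets(400, sep)
    with wave.open(f"aba_{sep}_semitones.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes((sig * 32767).astype(np.int16).tobytes())
```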

How does the brain process auditory streams and speech in noisy environments?
Deike S, Gaschler-Markefski B, Brechmann A & Scheich H (2004). Auditory stream segregation relying on timbre involves left auditory cortex. Neuroreport, 15(9), 1511.
Listeners heard either a stream of tones interleaved from TWO instruments (trumpet and organ) or a stream of tones from ONE instrument (trumpet or organ). There was more activation when two streams had to be grouped and segregated. To do the task (perceive small changes in one instrument's melody), listeners had to group the melodies of each instrument; this led to increased activity in lateral primary auditory cortex.
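
The stimulus idea, two melodies interleaved note-by-note and distinguished only by timbre, can be sketched as follows. This is a hypothetical Python reconstruction (assuming numpy): the additive-synthesis "organ" and "brass" recipes and the melodies are crude illustrative stand-ins, not the actual instrument tones used by Deike et al.

```python
import wave
import numpy as np

RATE = 44100

def note(freq, dur, harmonics, amp=0.2):
    """Additive synthesis: harmonics of `freq` weighted by `harmonics`."""
    t = np.arange(int(RATE * dur)) / RATE
    x = sum(w * np.sin(2 * np.pi * freq * (k + 1) * t)
            for k, w in enumerate(harmonics))
    x *= amp / np.max(np.abs(x))  # normalise, then scale to amp
    ramp = int(0.005 * RATE)
    x[:ramp] *= np.linspace(0, 1, ramp)
    x[-ramp:] *= np.linspace(1, 0, ramp)
    return x

# Two crude timbres in the same pitch range: "organ-ish" (mainly odd
# harmonics) vs "brass-ish" (energy spread across many harmonics).
ORGAN = [1.0, 0.0, 0.5, 0.0, 0.3]
BRASS = [1.0, 0.8, 0.7, 0.6, 0.5, 0.4]

melody_a = [262, 294, 330, 294]  # carried by the ORGAN timbre
melody_b = [262, 330, 262, 294]  # carried by the BRASS timbre

# Interleave the two melodies note-by-note, as in the two-instrument
# condition; rendering only melody_a (or only melody_b) would give the
# one-instrument control condition.
seq = []
for fa, fb in zip(melody_a, melody_b):
    seq.append(note(fa, 0.15, ORGAN))
    seq.append(note(fb, 0.15, BRASS))
signal = np.concatenate(seq * 4)

with wave.open("timbre_streams.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Because both timbres share the same pitch range, any stream segregation a listener achieves here must rely on timbre, which is the point of the design.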