Auditory Neuroscience - Lecture 6: Hearing Speech (auditoryneuroscience.com/lectures)

Presentation transcript:

Auditory Neuroscience - Lecture 6: Hearing Speech (auditoryneuroscience.com/lectures)

Vocalization in Humans and Other Animals

Articulation
Articulators (lips, tongue, jaw, soft palate) move to change the resonance properties of the vocal tract.

Can other animals speak?
Other mammals have similar vocal tracts and use them for communication. However, they make only very limited use of syntax (grammar) and have much smaller vocabularies than humans.

Acoustic features of vocalizations: modulations – harmonics & formants

Speech as a Modulated Signal
Elliott and Theunissen (2009) PLoS Comput Biol. [Spectrogram demo]

AN Figure 4.2: Modulation spectra of male and female English speech. From figure 2 of Elliott and Theunissen (2009) PLoS Comput Biol 5:e1000302.
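
The modulation spectrum is, in essence, the 2D amplitude spectrum of the log spectrogram: temporal modulations (Hz) along one axis, spectral modulations (cycles/Hz) along the other. A minimal Python sketch of the idea, assuming a mono waveform `y` at sample rate `fs`; the STFT window settings are illustrative choices, not those of Elliott and Theunissen:

```python
import numpy as np
from scipy.signal import stft

def modulation_spectrum(y, fs, nperseg=256, noverlap=192):
    """2D amplitude spectrum of the log spectrogram: one axis is temporal
    modulation (Hz), the other spectral modulation (cycles/Hz)."""
    f, t, Z = stft(y, fs=fs, nperseg=nperseg, noverlap=noverlap)
    logS = np.log(np.abs(Z) + 1e-10)   # log-amplitude spectrogram
    logS -= logS.mean()                # remove DC so it doesn't dominate the FFT
    M = np.abs(np.fft.fftshift(np.fft.fft2(logS)))
    wt = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=t[1] - t[0]))  # Hz
    wf = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=f[1] - f[0]))  # cycles/Hz
    return wf, wt, M
```

For speech, most of the energy sits at temporal modulations of a few Hz (roughly the syllable rate) and at low spectral modulations, which is the pattern AN Figure 4.2 illustrates.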

Pitch changes made with Hideki Kawahara’s “STRAIGHT” vocoder: the same utterance, “I come in peace!”, resynthesized with a rising (question-like) and a falling pitch contour.

“Spectral Modulation”: Harmonics & Formants

Formants determine vowel categories
AN Fig 4.3, adapted from figure 7 of Diehl (2008) Phil Trans Royal Soc B. (Demo: formant-artificial-vowels)
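
The source-filter account behind this can be made concrete: a glottal pulse train supplies the harmonics, and a few resonators impose the formant peaks that pick out the vowel. A hedged Python sketch; the formant frequencies and bandwidths are typical textbook values for an /a/-like vowel, not numbers taken from the figure:

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate (Hz)
f0 = 120.0                      # glottal pulse rate: sets the harmonic spacing
n = int(fs * 0.5)               # half a second of signal
source = np.zeros(n)
source[::int(fs / f0)] = 1.0    # impulse train: flat harmonic spectrum

def resonator(x, fc, bw, fs):
    """Second-order all-pole resonance at fc (Hz), bandwidth bw (Hz): one formant."""
    r = np.exp(-np.pi * bw / fs)
    a = [1.0, -2 * r * np.cos(2 * np.pi * fc / fs), r * r]
    return lfilter([1.0], a, x)

vowel = source
for fc, bw in [(730, 90), (1090, 110), (2440, 170)]:  # rough /a/ formants F1-F3
    vowel = resonator(vowel, fc, bw, fs)
vowel /= np.max(np.abs(vowel))  # normalize before playback
```

Swapping in different formant frequencies (e.g. F1 near 270 Hz and F2 near 2290 Hz for an /i/-like vowel) changes the vowel category while leaving the source, and hence the pitch, untouched.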

Formant Tracking & Synthesis
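
The slide’s demos aside, a standard way to track formants is linear predictive coding (LPC): fit an all-pole model to short, windowed frames and read candidate formants off the angles of the model’s poles. A rough numpy-only sketch of that generic technique (not necessarily the method behind the lecture’s examples):

```python
import numpy as np

def lpc_formants(frame, fs, order=10):
    """Estimate formant frequencies (Hz) of one speech frame via LPC."""
    frame = frame * np.hamming(len(frame))
    n = len(frame)
    # autocorrelation method: solve the normal equations for the predictor
    r = np.correlate(frame, frame, mode='full')[n - 1:n + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    coeffs = np.linalg.solve(R, r[1:order + 1])
    a = np.concatenate(([1.0], -coeffs))       # A(z) = 1 - sum(coeffs * z^-k)
    # complex poles in the upper half-plane correspond to resonances
    poles = [p for p in np.roots(a) if p.imag > 0]
    freqs = sorted(np.angle(p) * fs / (2 * np.pi) for p in poles)
    # (a real tracker would also threshold pole bandwidths and smooth across frames)
    return [f for f in freqs if f > 90]        # drop near-DC poles
```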

Visual Influences

Visual / Auditory Interactions: The McGurk Effect

Neural Representation of Vocalizations in the Ascending Pathway

Frequency modulations are poorly resolved on the tonotopic axis
AN Fig 4.4, based on data by Young & Sachs (1979).

Speech and Cochlear Implants
Since tracking a small number of formants is all that is required to extract most of the semantic information in speech, cochlear implants can deliver intelligible speech even though they provide only a few effective frequency channels.
https://mustelid.physiol.ox.ac.uk/drupal/?q=prosthetics/noise_vocoded_speech
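
Noise vocoding, as in the demo linked above, is how such implant listening is usually simulated: split the signal into a few bands, extract each band’s slow amplitude envelope, and re-impose the envelopes on band-limited noise carriers. A minimal sketch; the band edges and filter orders are illustrative assumptions, and `fs` must exceed twice the top band edge:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(y, fs, edges=(100, 400, 1000, 2500, 6000)):
    """Re-synthesize y from a few band envelopes imposed on noise carriers."""
    out = np.zeros(len(y))
    noise = np.random.randn(len(y))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, y)              # analysis band of the speech
        env = np.abs(hilbert(band))             # its amplitude envelope
        carrier = sosfiltfilt(sos, noise)       # noise limited to the same band
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```

With only four or five such channels the formant movements survive in the envelopes and the result remains largely intelligible, mirroring the cochlear-implant situation described above.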

“Modulation tuning” in Thalamus and Cortex
AN Figs 4.5 & 4.6. From Miller LM, Escabi MA, Read HL, Schreiner CE (2002) Spectrotemporal receptive fields in the lemniscal auditory thalamus and cortex. J Neurophysiol 87:516-527.
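
A spectrotemporal receptive field (STRF) of the kind shown in these figures can be estimated, in its simplest form, by spike-triggered averaging of the stimulus spectrogram (Miller et al. used more sophisticated reverse-correlation methods, so treat this as an illustrative sketch only). It assumes a spectrogram `S` with shape (frequency bins × time bins) and a spike-count vector `spikes` aligned to the same time bins:

```python
import numpy as np

def strf_sta(S, spikes, n_history=30):
    """Spike-triggered average: mean spectrogram patch preceding each spike.
    S: (n_freq, n_time) stimulus spectrogram; spikes: (n_time,) spike counts."""
    n_freq, n_time = S.shape
    sta = np.zeros((n_freq, n_history))
    total = 0.0
    for t in range(n_history, n_time):
        if spikes[t] > 0:
            sta += spikes[t] * S[:, t - n_history:t]   # stimulus history before bin t
            total += spikes[t]
    sta /= max(total, 1.0)
    sta -= S.mean(axis=1, keepdims=True)   # subtract the mean stimulus per band
    return sta
```

The Fourier transform of an STRF along its time axis gives the neuron’s temporal modulation transfer function, which is how tunings like those in AN Figs 4.5 & 4.6 are summarized.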

Which Temporal Modulations are the Most Important?
Elliott and Theunissen (2009) PLoS Comput Biol. (Demo: modulated-signal)

Cat cortical modulation transfer functions do not seem particularly well matched to the modulation frequencies that matter most for speech. Species differences? Different (“higher-order”) cortical areas?

Cortical Specialization for Speech?

Putative Cortical “What” and “Where” Streams
AN Fig 4.11, adapted from Romanski et al. (1999) Nat Neurosci 2:1131-1136.

Human Cortex

Hemispheric “Dominance” for Speech and the Wada Test
Broca first proposed that the left hemisphere is “dominant” for speech, based on examinations of post-mortem brains. Nowadays “dominance” is usually assessed with the “Wada test” (intracarotid sodium amobarbital procedure): either the left or the right hemisphere is anesthetised by injecting amobarbital into the carotid artery through a catheter, and the patient’s ability to understand and produce speech is scored.

Left Hemisphere Dominance
Wada test results suggest that ca. 90% of right-handed patients and ca. 75% of left-handed patients display “left hemisphere dominance” for speech. The remaining patients are either “mixed dominant” (i.e. they need both hemispheres to process speech) or have “bilateral speech representation” (i.e. either hemisphere can support speech without necessarily requiring the other). Right hemisphere dominance is comparatively rare, seen in no more than 1-2% of the population.

Hierarchical levels of speech perception
Acoustic/phonetic representation: Can the patient tell whether two speech sounds or syllables presented in succession are the same or different?
Phonological analysis: Can the patient tell whether two words rhyme, or what the first phoneme (“letter”) in a given word is?
Semantic processing: Can the patient understand “meaning”, e.g. follow spoken instructions?

Human Cortex - Microstimulation
AN Fig 4.8: Sites where acoustic (A), phonological (B), or lexical-semantic (C) deficits can be induced by disruptive electrical stimulation. From Boatman (2004) Cognition 92:47-65.

Where in the Brain does the Transition from Sound to Meaning Happen?
We don’t really know. The “ventral vs. dorsal stream” hypothesis of auditory cortex connectivity suggests that anterior temporal and frontal structures should be involved. This fits with neuroimaging studies (e.g. Scott et al. (2000) Brain 123 Pt 12:2400-2406; see auditoryneuroscience.com/?q=node/46). But other electrophysiological and lesion data do not really fit this picture.

Marmoset Twitter Calls as Models for Speech Tokens

“Twitter Selectivity” in Marmoset, Cat, and Ferret A1
From Wang & Kadia (2001) J Neurophysiol, and Schnupp et al. (2006) J Neurosci.

Cortical representations of vocalizations
AN Fig 4.9

Mapping cortical sensitivity to sound features (e.g. timbre)
Nelken et al. (2004) J Neurophysiol; Bizley, Walker, Silverman, King & Schnupp (2009) J Neurosci.

Summary
Human speech signals carry information mostly in their time-varying formant structure.
Formants are initially encoded as time-varying activity patterns across the tonotopic array.
It is rather difficult to pin down which parts of the brain might translate sound acoustics into “meaning”: there is a clear left hemisphere bias, but evidence for cortical areas with very clear specialization for speech or vocalization processing remains elusive.

Further Reading
Auditory Neuroscience, Chapter 6