Correlating Consonant Confusability and Neural Responses: An MEG Study
Valentine Hacquard 1, Mary Ann Walter 1,2
1 Department of Linguistics and Philosophy, KIT-MIT MEG Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; 2 National Science Foundation Fellowship

Nasal Confusability
Numerous behavioral studies have investigated error rates in the identification of phonemes masked with noise. These show that nasal consonants are more confusable with each other than oral consonants are; e.g., m/n vs. b/d (Miller & Nicely 1955, Wang & Bilger 1973).
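To illustrate what such confusability measures look like, here is a minimal sketch that reads a symmetric pairwise confusion rate off an identification confusion matrix. The counts are invented placeholders, not Miller & Nicely's data, and the rate definition is one simple choice among several.

```python
# Sketch: pairwise confusability from an identification confusion matrix.
# Counts are illustrative placeholders, NOT Miller & Nicely's (1955) data.
phones = ["b", "d", "m", "n"]
# confusions[i][j] = how often stimulus phones[i] was identified as phones[j]
confusions = [
    [80, 12,  5,  3],   # stimulus /b/
    [10, 82,  3,  5],   # stimulus /d/
    [ 4,  2, 70, 24],   # stimulus /m/
    [ 3,  4, 26, 67],   # stimulus /n/
]

def confusability(i: int, j: int) -> float:
    """Misidentifications between phones i and j as a fraction of all
    responses to those two stimuli (a symmetric confusion rate)."""
    cross = confusions[i][j] + confusions[j][i]
    total = sum(confusions[i]) + sum(confusions[j])
    return cross / total

b, d, m, n = range(4)
print(f"b/d: {confusability(b, d):.3f}")   # oral pair: 0.110 here
print(f"m/n: {confusability(m, n):.3f}")   # nasal pair: 0.250, more confusable
```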

Consequences
Such perceptibility asymmetries motivate much current work in phonology and, it is claimed, phonological processes themselves (Hume 1998, Steriade 1999). Consistent with this claim, Mohanan (1993) finds that nasals are particularly susceptible to place assimilation.

An Acoustic Model
These confusability and assimilation facts, as well as studies of offline similarity judgments (Hura et al. 1992), support a model of similarity based on acoustic, perceptual factors: context-dependent, but inventory-independent. Although nasality itself is highly salient, perseverative nasality masks the F2 transition into the following vowel, which is the primary cue to place of articulation.

Another Proposal
Frisch et al. (1997) and Frisch (1996) propose a metric in which similarity is computed over natural classes:

Similarity = shared natural classes / (shared + unshared natural classes)

This metric is context-independent, but inventory-dependent. For English it gives b/d = .29 > m/n = .28 (where 1 is identity), so nasals come out equally or less similar to each other than orals.
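As a concrete, purely illustrative rendering of the metric, the sketch below computes natural classes over a hypothetical four-consonant inventory with made-up feature specifications. The English values above come from the full inventory and feature system, which this toy does not reproduce; in fact, in this symmetric toy inventory b/d and m/n come out identical, which itself illustrates the metric's inventory dependence.

```python
# Sketch of Frisch-style natural-class similarity over a toy inventory.
# Feature specifications are illustrative; real values require a
# language's full segment inventory and feature system.
from itertools import combinations

inventory = {
    "b": {"voice": "+", "nasal": "-", "place": "lab"},
    "d": {"voice": "+", "nasal": "-", "place": "cor"},
    "m": {"voice": "+", "nasal": "+", "place": "lab"},
    "n": {"voice": "+", "nasal": "+", "place": "cor"},
}
features = [("voice", "+"), ("nasal", "+"), ("nasal", "-"),
            ("place", "lab"), ("place", "cor")]

def natural_classes():
    """Every nonempty set of segments picked out by some conjunction
    of feature values (contradictory conjunctions come out empty)."""
    classes = set()
    for r in range(1, len(features) + 1):
        for combo in combinations(features, r):
            members = frozenset(s for s, f in inventory.items()
                                if all(f[k] == v for k, v in combo))
            if members:
                classes.add(members)
    return classes

def similarity(a: str, b: str) -> float:
    """shared classes / (shared + unshared classes); 1 = identity."""
    classes = natural_classes()
    in_a = {c for c in classes if a in c}
    in_b = {c for c in classes if b in c}
    shared = in_a & in_b
    unshared = (in_a | in_b) - shared
    return len(shared) / (len(shared) + len(unshared))

print("b/d:", similarity("b", "d"))   # 1/3 in this toy inventory
print("m/n:", similarity("m", "n"))   # also 1/3: toy is fully symmetric
```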

The Question
Does a difference in similarity correlate with a difference in an auditory brain response? If so, brain data can be used to substantiate proposals for similarity metrics, as well as their internal organization.

MAGNETOENCEPHALOGRAPHY (MEG)
MEG measures the magnetic fields (B) generated by electrical activity in the brain: specifically, by potentials in the apical dendrites of pyramidal cells in cortex.
[Figure: SQUID pickup coil positioned above the scalp, recording fields generated in cortex.]

[Figure: log scale of magnetic signal intensities (T), from the earth's field and urban noise down through heart (QRS), muscle, and fetal-heart contamination, spontaneous alpha-wave and evoked brain signals, the signal from the retina, and the intrinsic noise of the SQUID.]
The magnetic field of the brain, recorded with MEG, is 100 billion times weaker than the earth's magnetic field!

MMF (mismatch field): automatic auditory brain response evoked by a deviant stimulus following a sequence of standards, peaking roughly 150-250 ms post-stimulus onset.
M100: automatic auditory evoked response that peaks ~100 ms post-stimulus onset.
[Figure: auditory response waveform with the M100 and MMF peaks labeled.]

MMF LOCALIZATION
The signal originates in auditory cortex. Note the left-hemisphere concentration of the mismatch field with linguistic stimuli.

Properties of the MMF
Sharma & Dorman (1999) and Phillips et al. (2000) show that a VOT span of a given size evokes a far greater MMF when it crosses a phonemic category boundary than when it does not. Näätänen et al. (1997) show that a small acoustic difference that crosses a phonemic category boundary evokes a far greater MMF than a large one that does not. Phonological difference outweighs acoustic difference. Do similarity distinctions matter when category is kept constant?
[Figure: MMF amplitude for a 10 ms VOT span within vs. across category, and for a 50 Hz F2 span within category vs. a 10 Hz F2 span across category.]

Procedure
Oddball paradigm. Conditions (8 × 30 = 240 trials), each token 400 ms:
1) ba ba ba ba da (deviant)
2) da da da da da (standard)
3) da da da da ba (deviant)
4) ba ba ba ba ba (standard)
5) ma ma ma ma na (deviant)
6) na na na na na (standard)
7) na na na na ma (deviant)
8) ma ma ma ma ma (standard)
Subjects (n=16) made same-different button-press judgments. Synthesized stimuli (4 tokens) were presented in six blocks of 40 trials, randomly ordered, with self-regulated breaks in between.
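A minimal sketch of how such a trial list might be generated; the five-token trial structure follows the conditions above, but the randomization details here are simplifying assumptions rather than the actual blocked design.

```python
# Sketch of the oddball trial structure above: four 400 ms standards
# followed by a fifth token that either repeats the standard ("standard")
# or switches within the pair ("deviant"). 8 conditions x 30 = 240 trials.
# The flat shuffle below simplifies the actual six-block design.
import random

PAIRS = [("ba", "da"), ("ma", "na")]

def build_trials(trials_per_condition: int = 30):
    trials = []
    # Both orderings of each pair serve as standard: ba/da, da/ba, ma/na, na/ma.
    for std, dev in PAIRS + [(d, s) for s, d in PAIRS]:
        for kind, final in (("deviant", dev), ("standard", std)):
            trials += [{"sequence": [std] * 4 + [final], "type": kind}
                       for _ in range(trials_per_condition)]
    random.shuffle(trials)
    return trials

trials = build_trials()
print(len(trials))   # 240
print(trials[0])     # e.g. {'sequence': ['ma', 'ma', 'ma', 'ma', 'na'], 'type': 'deviant'}
```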

Predictions
According to an acoustic-based similarity framework, the MMF/baseline gap should be larger for oral consonant pairs than for nasals. According to a natural-class-based one such as Frisch's, it should be the opposite, or equivalent. If abstract phonological features are the only relevant factor in perceptibility at this stage, the gaps should be equivalent.

Behavioral Results – Error Rate
Deviants overall received significantly more errors than standards (p = .0009). No effect of manner was observed (p = .3538); cf. the lack of masking noise or filtering, and the comparatively small number of trials.

Behavioral Results – Reaction Time
RT to deviants overall was significantly faster than to standards (p < .0001). (Some subjects reported a waiting strategy on standard trials.) RT to nasals was significantly faster than to orals (p = .0197).

MMF Results
[Figure: single-subject waveforms, oral standard vs. oral deviant.]

MMF Results (RMS waves over selected sensors)
Deviants have significantly greater amplitude in the MMF window than standards (p < .0001). Orals have significantly greater amplitude in the MMF window than nasals (p < .0001). But nasals have significantly greater M100 amplitude than orals (p < .0001).
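For readers unfamiliar with sensor-level MEG measures, here is a minimal sketch of the kind of quantity being compared: an RMS wave over selected sensors, averaged within an MMF time window. The sampling rate, window bounds, and sensor selection are assumptions, not the parameters reported in this study.

```python
# Sketch: RMS over selected sensors, then mean amplitude in an assumed
# MMF window. SFREQ and WINDOW are illustrative assumptions, not the
# parameters actually used in this study.
import numpy as np

SFREQ = 1000             # Hz, assumed sampling rate
WINDOW = (0.150, 0.250)  # s post-onset, assumed MMF window

def mmf_amplitude(evoked: np.ndarray) -> float:
    """evoked: (n_selected_sensors, n_times) averaged response,
    with time zero at stimulus onset."""
    rms = np.sqrt((evoked ** 2).mean(axis=0))   # RMS wave across sensors
    lo, hi = (int(t * SFREQ) for t in WINDOW)
    return float(rms[lo:hi].mean())             # mean amplitude in window

# Shape illustration only, with random data standing in for real evokeds:
rng = np.random.default_rng(0)
deviant = mmf_amplitude(rng.normal(size=(20, 600)))
standard = mmf_amplitude(rng.normal(size=(20, 600)))
print(deviant, standard)   # compare deviant vs. standard amplitudes
```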

MMF Comparison: Nasal/Oral
[Figure: single-subject waveforms, nasal deviant vs. oral deviant.]

MMF Comparison
The MMF/baseline gap is significantly greater for oral consonant pairs than for nasals (p = .0399).
[Figure: single-subject comparison.]

Conclusions
Oral mismatches elicit a stronger MMF than nasal mismatches. So oral consonants are perceived as more different from each other than nasal ones at this stage. Phonological categories are not the only relevant factor in perception at this stage: acoustic similarity also plays a role. Finally, it is an acoustic-based similarity metric that appears to be operating in this time window, rather than a natural-class-based one or simple feature counting.

For Future Research
Our results suggest that the MMF can be used to test proposals about degree of perceptual distance in acoustic-based similarity frameworks. Our next study will isolate Frisch-style similarity as a variable, testing the same phonological contrast with speakers of languages that have the contrast, but whose inventories differ in other ways.

THANK YOU!
Alec Marantz, Diana Sonnenreich, Donca Steriade, Karen Froud, Linnaea Stockall, Pranav Anand, Ben Bruening, Elissa Flagg, Vivian Lin, and you, the audience.