Speech Perception in Infant and Adult Brains


Speech Perception in Infant and Adult Brains Colin Phillips Cognitive Neuroscience of Language Laboratory Department of Linguistics University of Maryland

Overview of Talks
1. The Unification Problem
2. Building Syntactic Relations
3. Abstraction: Sounds to Symbols
4. Linguistics and Learning

with help from ...
University of Maryland: Shani Abada, Sachiko Aoshima, Daniel Garcia-Pedrosa, Ana Gouvea, Nina Kazanina, Moti Lieberman, Leticia Pablos, David Poeppel, Beth Rabbin, Silke Urban, Carol Whitney
University of Delaware: Evniki Edgar, Bowen Hui, Baris Kabak, Tom Pellathy, Dave Schneider, Kaia Wong
MIT: Alec Marantz, Elron Yellin
Funding: National Science Foundation, James S. McDonnell Foundation, Human Frontiers Science Program, Japan Science & Technology Program, Kanazawa Institute of Technology

Sensory Maps Internal representations of the outside world. Cellular neuroscience has discovered a great deal in this area.

Encoding of Symbols: Abstraction But most areas of linguistics (phonology, morphology, syntax, semantics) are concerned with symbolic, abstract representations, ...which do not involve internal representations of dimensions of the outside world. …hence, the notion of sensory maps does not get us very far into language

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

A Category

Another Category 3 III

Categories for Computation Membership in a category like “bird” is a graded property Membership in a category like “three” is an all-or-nothing property “Three” can be part of a symbolic computation “Bird” cannot

Abstraction
Benefits of abstraction: representational economy; representational freedom; allows combinatorial operations
Costs of abstraction: distance from experience impedes learning; distance from experience impedes recognition

Phonetic vs. Phonological Categories Phonetic category membership is graded Phonological category membership is an all-or-nothing property: all members are equal Phonological categories are the basis of storage of lexical forms Phonological categories participate in a wide variety of combinatorial computations

Timing - Voicing

Voice Onset Time (VOT) 60 msec

Perceiving VOT ‘Categorical Perception’
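Categorical perception of VOT can be sketched as a steep identification function over the continuum. This is a minimal illustrative sketch; the 30 ms boundary and the slope value are assumptions, not parameters from the talk.

```python
import math

BOUNDARY_MS = 30.0   # assumed English /d/-/t/ category boundary
SLOPE = 0.5          # assumed steepness of the identification curve

def prob_t(vot_ms):
    """Probability of labeling a stimulus /t/ as a function of VOT (logistic)."""
    return 1.0 / (1.0 + math.exp(-SLOPE * (vot_ms - BOUNDARY_MS)))

def identify(vot_ms):
    """Discrete category label: the all-or-nothing percept."""
    return "t" if prob_t(vot_ms) >= 0.5 else "d"

# Identification is near-categorical: stimuli well inside a category
# are labeled almost deterministically.
for vot in (0, 20, 40, 60):
    print(vot, identify(vot), round(prob_t(vot), 3))
```

With these assumed parameters, identification flips sharply between 20 ms and 40 ms, matching the categorical pattern on the slide.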

Discrimination: A More Systematic Test
Same/Different judgments for pairs of stimuli along the VOT continuum:
0ms vs. 60ms: easy (D vs. T)
0ms vs. 10ms: difficult. Why? (i) Acoustically similar? (ii) Same category?
0ms vs. 20ms: D vs. D
20ms vs. 40ms: D vs. T
40ms vs. 60ms: T vs. T
40ms vs. 40ms: identical pair
Within-Category Discrimination is Hard
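If listeners compare category labels rather than raw acoustics, discrimination should succeed only when a pair straddles the category boundary. A minimal sketch of that prediction (the 30 ms boundary is an illustrative assumption):

```python
BOUNDARY_MS = 30.0  # assumed English /d/-/t/ boundary

def label(vot_ms):
    """All-or-nothing category label for a VOT value."""
    return "t" if vot_ms >= BOUNDARY_MS else "d"

def predicted_judgment(vot_a, vot_b):
    """Predicted same/different response based only on category labels."""
    return "different" if label(vot_a) != label(vot_b) else "same"

# Pairs from the slide: equal 20 ms acoustic steps, very different outcomes.
for a, b in [(0, 20), (20, 40), (40, 60)]:
    print(f"{a}ms vs {b}ms -> {predicted_judgment(a, b)}")
```

Only the 20 ms vs. 40 ms pair crosses the boundary, so only that pair is predicted to be "different": within-category pairs are predicted to sound the same.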

Cross-Language Differences English vs. Japanese R-L

Cross-Language Differences English vs. Hindi alveolar [d] retroflex [D] ?

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

Development of Speech Perception Unusually well described in past 30 years Learning theories exist, and can be tested… Jakobson’s suggestion: children add feature contrasts to their phonological inventory during development Roman Jakobson, 1896-1982 Kindersprache, Aphasie und allgemeine Lautgesetze, 1941

Developmental Differentiation (timeline, 0 to 18 months): Universal Phonetics → Native Lg. Phonetics → Native Lg. Phonology

#1 - Infant Categorical Perception Eimas, Siqueland, Jusczyk & Vigorito, 1971

Discrimination: A More Systematic Test (recap): within-category discrimination is hard (0ms vs. 20ms: D-D; 20ms vs. 40ms: D-T; 40ms vs. 60ms: T-T)

English VOT Perception: testing 2-month-olds is not so easy! Method: High-Amplitude Sucking (Eimas et al. 1971)

General Infant Abilities
Infants show categorical perception of speech sounds at 2 months and earlier
They discriminate a wide range of speech contrasts (voicing, place, manner, etc.)
They discriminate non-native speech contrasts: e.g., Japanese babies discriminate r-l; Canadian babies discriminate [d]-[D]

Universal Listeners Infants may be able to discriminate all speech contrasts from the languages of the world!

How can they do this? Innate speech-processing capacity? General properties of auditory system?

What About Non-Humans? Chinchillas show categorical perception of voicing contrasts!

#2 - Becoming a Native Listener Werker & Tees, 1984

When does Change Occur?
About 10 months: Hindi and Salish contrasts tested on English-learning infants
Janet Werker, U. of British Columbia
Conditioned Headturn Procedure

What has Werker found? Is this the beginning of efficient memory representations (phonological categories)? Are the infants learning words? Or something else?

6-12 Months: What Changes?

Structure Changing Patricia Kuhl U. of Washington

#3 - What, no minimal pairs? Stager & Werker, 1997

A Learning Theory… How do we find out the contrastive phonemes of a language? Minimal Pairs

Word Learning Stager & Werker 1997 ‘bih’ vs. ‘dih’ and ‘lif’ vs. ‘neem’

Word learning results Exp 2 vs 4

Why Yearlings Fail on Minimal Pairs
They fail specifically when the task requires word learning
They do know the sounds
But they fail to use the detail needed for minimal pairs to store words in memory

One-Year Olds Again One-year olds know the surface sound patterns of the language One-year olds do not yet know which sounds are used contrastively in the language… …and which sounds simply reflect allophonic variation One-year olds need to learn contrasts

Maybe not so bad after all... Children learn the feature contrasts of their language Children may learn gradually, adding features over the course of development Phonetic knowledge does not entail phonological knowledge Roman Jakobson, 1896-1982

Summary of Development (schematic): innate auditory, phonetic, and articulatory representations; surface memory representations by 10 months; constructed lexical representations by 18 months

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

Brain Magnetic Fields (MEG): KIT-Maryland MEG System (160 SQUID detectors)

Brain Magnetic Fields (MEG) SQUID detectors measure brain magnetic fields around 100 billion times weaker than earth’s steady magnetic field.

pickup coil & SQUID assembly 160 SQUID whole-head array

It’s safe…

Origin of the Signal
(Diagram: skull, CSF, tissue; MEG vs. EEG; orientation of magnetic field B; recording surface; scalp current flow)
MEG is a noninvasive, direct measurement.

How small is the signal?
(Chart of biomagnetic signal intensities, in tesla: from urban noise and the Earth's field, through cardiac, muscle, and retinal signals, down to evoked brain activity and the intrinsic noise of the SQUID.)

Electroencephalography (EEG/ERP)

Event-Related Potentials (ERPs) John is laughing. s1 s2 s3

Mismatch Response X X X X X Y X X X X Y X X X X X X Y X X X Y X X X... Latency: 150-250 msec. Localization: Supratemporal auditory cortex Many-to-one ratio between standards and deviants
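The many-to-one structure of such a sequence can be sketched in a few lines of Python. This is an illustrative sketch; the 1-in-8 deviant probability is an assumption in line with typical mismatch designs, not a value from the slide.

```python
import random

def oddball_sequence(standard, deviant, n_trials, p_deviant=1/8, seed=0):
    """Generate an oddball sequence: frequent standards, rare deviants."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

seq = oddball_sequence("X", "Y", 400)
print(seq[:20])
print("deviant proportion:", seq.count("Y") / len(seq))
```

The standards ("X") vastly outnumber the deviants ("Y"); it is this many-to-one ratio that elicits the mismatch response to the deviant.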

Localization of Mismatch Response (Phillips, Pellathy, Marantz et al., 2000)

Basic MMN elicitation © Risto Näätänen

Basic MMN elicitation MMN P300 Näätänen et al. 1978

MMN Amplitude Variation Sams et al. 1985

How do MMN latency and amplitude vary with the frequency difference? (1000Hz tone standard; Tiitinen et al. 1994)

Different Dimensions of Sounds Length Amplitude Pitch …you name it … Amplitude of mismatch response can be used as a measure of perceptual distance
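As a sketch of how this measure is quantified: the difference wave (deviant ERP minus standard ERP) is computed, and its peak in the MMN window (roughly 150-250 ms) is taken as the response amplitude. All waveform values below are invented for illustration.

```python
def difference_wave(standard_erp, deviant_erp):
    """Deviant minus standard, sample by sample."""
    return [d - s for s, d in zip(standard_erp, deviant_erp)]

def mmn_amplitude(diff, times_ms, window=(150, 250)):
    """Most negative value of the difference wave inside the MMN window."""
    in_window = [v for t, v in zip(times_ms, diff) if window[0] <= t <= window[1]]
    return min(in_window)

times = list(range(0, 400, 50))            # 0, 50, ..., 350 ms
standard = [0, 1, 2, 1, 0, 0, 0, 0]        # illustrative averages (microvolts)
deviant  = [0, 1, 2, -2, -3, 0, 0, 0]      # deviant dips in the MMN window

diff = difference_wave(standard, deviant)
print("MMN amplitude:", mmn_amplitude(diff, times))  # -> -3
```

A larger (more negative) amplitude would then be read as a greater perceptual distance between standard and deviant.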

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

‘Vowel Space’

Näätänen et al. (1997) e e/ö ö õ o

Place of Articulation [bæ] Formant Transition Cues [dæ]

Place of Articulation Sharma et al. 1993

Place of Articulation
Non-native continuum: b -- d -- D
3 contrasts: Native b -- d; Non-native d -- D; Non-phonetic b1 -- b5
Conflicting results! (Dehaene-Lambertz 1997; Rivera-Gaxiola et al. 2000; Tsui et al. 2000)

Interim Conclusion
MMN/MMF is a sensitive measure of discrimination
In some - but not all - cases, MMN amplitude tracks native-language discrimination patterns
When MMN fails to show native-language category effects, this could reflect that MMN accesses only low-level acoustic representations, or that MMN accesses multiple levels of representation but the response is dominated by the acoustic level
These studies all implicate phonetic categories

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

Objective Isolate phonological categories, not phonetic categories

A Category

Another Category 3 III

Auditory Cortex Accesses Phonological Categories: An MEG Mismatch Study Colin Phillips, Tom Pellathy, Alec Marantz, Elron Yellin, et al. Journal of Cognitive Neuroscience, 2000

Voice Onset Time (VOT) 60 msec

Categorical Perception

Design
Fixed Design - Discrimination: standard and deviant at single VOT values (20ms, 40ms, 60ms)
Grouped Design - Categorization: standards drawn from several exemplars within a category (0ms, 8ms, 16ms, 24ms)

Phonological Features in Auditory Cortex Colin Phillips Tom Pellathy Alec Marantz

Sound Groupings (Phillips, Pellathy & Marantz 2000)

Phonological Features
Phonological natural classes exist because phonemes are composed of features, the smallest building blocks of language; phonemes that share a feature form a natural class
Effects of feature-based organization are observed in: language development, language disorders, historical change, synchronic processes
Roman Jakobson, 1896-1982

Voicing in English

Japanese - Rendaku
The initial obstruent of the second member of a compound word becomes voiced: [-voice] → [+voice]
s → z, ts → z, t → d, k → g, f → b
take + sao → takezao
cito + tsuma → citozuma
hon + tana → hondana
yo: + karasi → yo:garasi
asa + furo → asaburo
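The rendaku alternation can be sketched as a simple rewrite function. A toy sketch only: the romanization follows the slide, and treating "ts" as a digraph before "t" is this sketch's own implementation detail.

```python
# Voicing map for rendaku; "ts" must be checked before "t".
RENDAKU = {"ts": "z", "s": "z", "t": "d", "k": "g", "f": "b"}

def rendaku(first, second):
    """Join a compound, voicing the initial consonant of the second member."""
    for voiceless, voiced in RENDAKU.items():
        if second.startswith(voiceless):
            return first + voiced + second[len(voiceless):]
    return first + second  # no voiceless target: no change

# Examples from the slide:
print(rendaku("take", "sao"))    # -> takezao
print(rendaku("hon", "tana"))    # -> hondana
print(rendaku("asa", "furo"))    # -> asaburo
print(rendaku("cito", "tsuma"))  # -> citozuma
```

The point for the talk is that the rule targets a natural class, [-voice] obstruents, rather than an arbitrary list of sounds.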

Sound Groupings in the Brain pæ, tæ, tæ, kæ, dæ, pæ, kæ, tæ, pæ, kæ, bæ, tæ... (Phillips, Pellathy & Marantz 2000)

Feature Mismatch: Stimuli (Phillips, Pellathy & Marantz 2000)

Feature Mismatch Design (Phillips, Pellathy & Marantz 2000)

Sound Groupings in the Brain
pæ tæ tæ kæ dæ pæ kæ tæ pæ kæ bæ tæ ...
Voiceless phonemes stand in a many-to-one ratio with [+voice] phonemes
There is no other many-to-one ratio in this sequence; the many-to-one structure exists only if we appeal to features, here [+voice]
So if there is a mismatch response between deviants and standards, it is due to the feature [+voice]: if your brain is surprised to hear b's, d's and g's, it is because these sounds were rare and because they all share the feature [+voice], which was absent from the frequent/standard sounds. (Phillips, Pellathy & Marantz 2000)
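The logic of this feature-based design can be sketched in code: the standards vary freely in place of articulation, so the only many-to-one regularity in the sequence is the feature [+voice]. The feature table below is standard phonology, not data from the study.

```python
VOICED = {"b", "d", "g"}
VOICELESS = {"p", "t", "k"}

def voice_feature(syllable):
    """Return the [voice] value of the syllable's initial stop."""
    onset = syllable[0]
    if onset in VOICED:
        return "+voice"
    if onset in VOICELESS:
        return "-voice"
    raise ValueError(f"unknown onset: {onset}")

sequence = ["pæ", "tæ", "tæ", "kæ", "dæ", "pæ",
            "kæ", "tæ", "pæ", "kæ", "bæ", "tæ"]
features = [voice_feature(s) for s in sequence]

# Acoustically the standards are heterogeneous (p, t, k all occur), but
# featurally they are uniform: [-voice] standards outnumber [+voice] deviants.
print(features.count("-voice"), "standards vs", features.count("+voice"), "deviants")
```

A mismatch response to the rare [+voice] syllables therefore cannot be driven by any single acoustic token; it requires the abstract feature grouping.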

Feature Mismatch (Phillips, Pellathy & Marantz 2000)

Feature Mismatch Left Hemisphere Right Hemisphere (Phillips, Pellathy & Marantz 2000)

Control Experiment - ‘Acoustic Condition’ Identical acoustical variability No phonological many-to-one ratio (Phillips, Pellathy & Marantz 2000)

Feature Mismatch (Phillips, Pellathy & Marantz 2000)

Hemispheric Contrast in MMF
Studies of acoustic and phonetic contrasts consistently report bilateral mismatch responses (Paavilainen, Alho, Reinikainen et al. 1991; Näätänen & Alho, 1995; Levänen, Ahonen, Hari et al. 1996; Alho, Winkler, Escera et al. 1998; Ackermann, Lutzenberger & Hertrich, 1999; Opitz, Mecklinger, von Cramon et al. 1999; among others)
In striking contrast, our phonological feature contrast elicited a left-hemisphere-only mismatch response
Our studies probe a more abstract level of phonological representation

EEG Measures of Discrimination and Categorization of Speech Sound Contrasts
Colin Phillips, Shani Abada, Daniel Garcia-Pedrosa, Nina Kazanina

Design
Fixed Design - Discrimination: standard and deviant at single VOT values (20ms, 40ms, 60ms)
Grouped Design - Categorization: standards drawn from several exemplars within a category (0ms, 8ms, 16ms, 24ms)

Voice Onset Time (VOT) Mismatch Negativity (MMN): An ERP Study
MMN: acoustic or perceptual phenomenon? Does an across-category distinction (20ms VOT /da/ vs. 40ms VOT /ta/) evoke a greater MMN than a within-category distinction (40ms VOT /ta/ vs. 60ms VOT /ta/)?
Sharma & Dorman (1999): MMN only across categories; the MMN reflects the perceptual, not the physical, difference between stimuli; double N100 for long VOT
'Oddball' paradigm: 7:1 ratio of standards to deviants
Fixed condition: standard VOT = 20ms, deviant VOT = 40ms (across); or standard VOT = 40ms, deviant VOT = 60ms (within)
Grouped condition: no specific standard VOT, but 7/8 of stimuli fall into either /da/ or /ta/
Conditions: 20st/40dv, 40st/60dv, Dst/Tdv, Tst/Ddv
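The fixed and grouped oddball conditions can be sketched as follows. The /da/ exemplar VOTs follow the grouped design values mentioned in this talk; the /ta/ exemplar values are assumptions added for illustration.

```python
import random

def fixed_condition(standard_vot, deviant_vot, n_trials, rng):
    """One standard token, one rare deviant token (~7:1 ratio)."""
    return [deviant_vot if rng.random() < 1/8 else standard_vot
            for _ in range(n_trials)]

def grouped_condition(standard_vots, deviant_vots, n_trials, rng):
    """No fixed standard: 7/8 of trials sample any exemplar of the
    standard category, 1/8 any exemplar of the deviant category."""
    return [rng.choice(deviant_vots) if rng.random() < 1/8
            else rng.choice(standard_vots)
            for _ in range(n_trials)]

rng = random.Random(1)
fixed = fixed_condition(20, 40, 160, rng)      # across-category: /da/ std, /ta/ dev
grouped = grouped_condition([0, 8, 16, 24],    # /da/ exemplars
                            [40, 48, 56, 64],  # /ta/ exemplars (assumed values)
                            160, rng)
print("fixed uses", len(set(fixed)), "distinct VOTs;",
      "grouped uses", len(set(grouped)))
```

In the grouped condition a deviant can only be "rare" with respect to its category, not with respect to any single repeated stimulus, which is what makes it a test of categorization rather than discrimination.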

Discrimination and Categorization of Vowels and Tones Daniel Garcia-Pedrosa Colin Phillips

Two Concerns
Are the category effects an artifact?
(i) It is very hard to discriminate different members of the same category on a voicing scale.
(ii) Subjects may be forming ad hoc groupings of sounds during the experiment, rather than using their phonological representations.

Discrimination: A More Systematic Test (recap): within-category discrimination is hard (0ms vs. 20ms: D-D; 20ms vs. 40ms: D-T; 40ms vs. 60ms: T-T)

Vowels Vowels show categorical perception effects in identification tasks …but vowels show much better discriminability of within-category pairs

Method: Materials Tones: 290Hz, 300Hz, 310 Hz…470Hz Vowels First formant (F1) varies along the same 290-470Hz continuum F0, F2, voicing onset, etc. all remain constant

Method: Procedure Subject’s category boundary determined by pretest Grouped mismatch paradigm Standard stimulus (7/8) = 4 exemplars from one category Deviant stimulus (1/8) = 4 exemplars from other category MMN response therefore = deviance from a category, not from a single stimulus Tones and vowels presented in separate blocks

Results: Vowels

Results: Tones

Preliminary Conclusions
MMN appears about 150ms post-stimulus in the vowel condition but not in the tone condition
Higher-amplitude N100 for deviants in both conditions. Is this evidence for categorization of tones, or just the result of habituation?
Acoustic differences may be responsible for the greater N100, while categorization elicits the MMN

Phonemic vs. Allophonic Contrasts Nina Kazanina Colin Phillips in progress

Cross-Language Differences Focus on meaning-relevant sound contrasts Russian d t Korean d t …ada ada ada ada ada ada ata ada ada ada ata…

Phonology - Syllables
Japanese versus French: pairs like "egma" and "eguma"
The difference is possible in French, but not in Japanese

Behavioral Results: Japanese listeners have difficulty hearing the difference (Dupoux et al. 1999)

ERP Results
Sequences: egma, egma, egma, egma, eguma
French listeners show 3 mismatch responses: early, middle, late
Japanese listeners show only the late response
Dehaene-Lambertz et al. 2000

ERP Results - 2 Early response Dehaene-Lambertz et al. 2000

ERP Results - 3 Middle response Dehaene-Lambertz et al. 2000

ERP Results - 4 Late response Dehaene-Lambertz et al. 2000

Cross-language Differences Thai speakers: Thai *words*: [da] [ta] DIFFERENT English *words*: [daz] [taz] SAME Imsri (2001)

Varying Pronunciations Voiceless stops /p, t, k/ Aspirated at start of syllable; unaspirated after [s] pit spit bit tack stack dack
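The allophonic rule stated above can be sketched as a toy rewrite function. The "h"-marking transcription for aspiration is this sketch's own convention, not the talk's notation.

```python
STOPS = {"p", "t", "k"}  # English voiceless stops

def surface_form(word):
    """Aspirate a syllable-initial voiceless stop; leave it plain after s-."""
    if word[0] in STOPS:
        return word[0] + "h" + word[1:]        # syllable-initial: aspirated
    if word.startswith("s") and len(word) > 1 and word[1] in STOPS:
        return word                            # after [s]: unaspirated
    return word

for w in ["pit", "spit", "tack", "stack"]:
    print(w, "->", surface_form(w))
```

The two pronunciations are predictable from context, so they are allophones of one phoneme: exactly the kind of variation a learner must distinguish from a genuine contrast.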

Outline Categories & Abstraction Speech Perception in Infancy Electrophysiology: Mismatch Paradigm Speech Discrimination in Adult Brains Speech Categorization in Adult Brains Conclusion

Conclusion
Sound representations involve (multiple degrees of) abstraction
Different levels of representation develop independently from 0-18 months of age
Although much is known about the course of development, many open questions remain about how change proceeds
Possibility of a connection between adult electrophysiology and infant developmental findings