A43. Learning Variation in a Second Language: A Cross-language Study of Rate-normalization
Kyoko Nagao†, Kenneth de Jong†, and Byung-jin Lim‡
†Indiana University  ‡University of Wisconsin

Results

1. Boundary shift (Japanese vs. English perceptions; Korean vs. English perceptions)
1. The category boundary was shifted toward the boundary of the listeners' native language for both Japanese and Korean listeners.
2. The shift is more apparent for the monolingual listener groups than for the advanced ESL learner groups.
3. Both monolingual groups showed less steep category boundaries.
4. Learners of English may have difficulty perceiving the voicing distinction, even though previous studies report few errors with clear, lab-style speech.

2. Rate effects (native vs. non-native perceptions)
5. Rate normalization appeared in all groups.

Methods

Listeners
Non-native listeners (L2 = English):
- Japanese listeners: advanced ESL learners (Adv J), 14 native speakers of Japanese living in the US; monolingual Japanese (Mono J), 20 native speakers of Japanese living in Japan.
- Korean listeners: advanced ESL learners (Adv K), 14 native speakers of Korean living in the US; monolingual Korean (Mono K), 20 native speakers of Korean living in Korea.
Native listeners (control): 18 native speakers of American English.

Stimuli
Speakers: 4 speakers of American English from the northern Midwest, all in their late 20s.
Speech corpus: Speakers repeated /bi/ and /pi/ at increasing rates, controlled by a metronome. 21 stimuli were spliced from each repetitive utterance; each stimulus consists of three repeated syllables.
Measurements: VOT and syllable duration were measured; for each stimulus, both were taken from the middle syllable.

Procedure
Task: four-alternative forced-choice identification ('bee', 'pea', 'eeb', and 'eep'; 'bi', 'pi', 'ib', and 'ip' for the Japanese listeners).
Analysis: VOT and syllable duration of the middle syllable were entered into a logistic regression analysis (a sketch of this analysis appears below, after the Introduction).

Results (Cont.)

3. Distributional analysis: distributions of deviations of the advanced ESL groups from native listeners
Japanese listeners
- Japanese listeners are more likely to mislabel /b/ as /p/ (red tokens flow into the /b/ productions at the bottom).
- Deviations occur across the entire distribution.
- There are some errors in the /p/ distribution as well.
Korean listeners
- Korean listeners are more likely to mislabel /p/ as /b/ (black tokens in the /p/ region at the top).
- Surprisingly, some /b/ tokens are identified more accurately by Korean listeners than by native listeners.
- Deviations occur across the whole distribution.
- There are some errors in the /b/ distribution as well.

Summary
Non-native and native listeners show the same rate normalization effects. The experiment provides evidence that rate normalization is not a general auditory mechanism; rather, it is based on the distribution of consonants that listeners have experienced. Even contrasts believed to be 'easy' can be misperceived when non-native listeners encounter rate-related variation. The distribution of deviations across the entire range of tokens suggests a generalized criterion function that is shifted for the second language. However, the large number of error increases in the wrong direction (/p/ -> /b/ for Japanese and /b/ -> /p/ for Korean) points to another aspect of second-language perception: generalized uncertainty.

Introduction

Specific issues
How do non-native perceivers cope with variability in second-language categories? Non-native listeners have been reported to show particularly large intelligibility deficits under non-optimal (non-laboratory) listening conditions.
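The identification analysis described under Procedure can be illustrated with a short sketch: fit a logistic regression predicting /p/ responses from VOT and syllable duration, then solve for the category boundary (the VOT at which the predicted probability of a /p/ response is 0.5) at a given syllable duration, in the spirit of FIG 3 and FIG 5. This is a minimal reconstruction on synthetic data, not the authors' analysis code; the column names, coefficient values, and the statsmodels-based fit are assumptions for illustration only.

```python
# Illustrative sketch: synthetic data stand in for the real identification responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
vot = rng.uniform(0, 100, n)         # VOT of the middle syllable (ms)
syll_dur = rng.uniform(80, 300, n)   # syllable duration of the middle syllable (ms)

# Hypothetical listener criterion: the /b/-/p/ boundary moves to longer VOTs as
# syllables get longer (slower speech), i.e., rate normalization.
true_boundary = 15 + 0.10 * syll_dur
resp_p = rng.binomial(1, 1 / (1 + np.exp(-0.25 * (vot - true_boundary))))  # 1 = /p/, 0 = /b/

df = pd.DataFrame({"resp_p": resp_p, "vot": vot, "syll_dur": syll_dur})

# Logistic regression of /p/ responses on VOT and syllable duration.
fit = smf.logit("resp_p ~ vot + syll_dur", data=df).fit(disp=False)
b0 = fit.params["Intercept"]
b_vot = fit.params["vot"]
b_dur = fit.params["syll_dur"]

def boundary_vot(duration_ms):
    """VOT (ms) at which the predicted probability of a /p/ response is 0.5."""
    return -(b0 + b_dur * duration_ms) / b_vot

# Boundary VOT as a function of syllable duration (cf. FIG 5).
for d in (125, 200, 275):
    print(f"syllable duration {d:3d} ms -> estimated boundary VOT {boundary_vot(d):5.1f} ms")

# Predicted percent /p/ along a VOT continuum at a 125 ms syllable duration (cf. FIG 3).
grid = pd.DataFrame({"vot": np.arange(0, 101, 10), "syll_dur": 125})
grid["pct_p"] = 100 * fit.predict(grid)
print(grid)
```

Because the boundary is simply the point where the fitted linear predictor crosses zero, it can be read directly off the coefficients; comparing that quantity across listener groups is what FIG 5 summarizes.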
What is the nature of phonological categories? How do they get modified in the face of experience with a new language?

Types of Variation in Speech Categories
Bio-physical variation: variation due to the nature of the articulatory system (e.g., acoustic details of formant transitions). All experienced listeners have access to this, regardless of native language.
Linguistic variation: variation due to the language (e.g., phonologically specified allophonic alternations). Listeners must acquire the ability to account for these when learning a second language. For example, the boundary VOT that distinguishes voicing categories differs across languages (e.g., English vs. Japanese vs. Korean).

Speech Rate/Tempo
The voicing distinction is rate-sensitive because VOT, the strongest voicing cue, is influenced by speaking rate. Listeners must account for these rate effects in order to extract the voicing categories.

Research Questions
How do second-language learners respond to highly rate-varied tokens when judging non-native voicing categories? How is this effect modulated by increased experience with the second language?

Hypothesis
If rate-induced variation is handled by a general auditory mechanism, we expect no deviations of non-native perception from native perception. If normalization is instead based on the distribution of consonants that listeners have experienced, we expect deviations from native perception, and these deviations should be smaller for listeners with experience of the consonant distributions of both the native and the non-native language. If the internal structure of phonological categories is based on prototype matching, deviations should be clearest at less extreme rates. If the internal structure of phonological categories is distributional, deviations of non-native responses should be observed across the entire distribution.

Presented at the 1st ASA Workshop on Second Language Speech Learning, Vancouver, Canada, May 13, 2005. Downloadable from

FIG 3. Percent /p/ perception by native listeners, predicted by a logistic regression model, as a function of VOT for continua with a syllable duration of 125 ms.

FIG 2. Distributions of VOT values at various syllable durations for English /b/ (blue) and /p/ (red). The straight line is the estimated perceptual boundary for English listeners. [Reprinted from Nagao & de Jong, 2003.]

References
Nagao, K., & de Jong, K. (2003). Perceptual rate normalization in naturally produced bilabial stops. Presented at the ASA meeting, Austin, TX.
Volaitis, L. E., & Miller, J. L. (1992). Phonetic prototypes: Influence of place of articulation and speaking rate on the internal structure of voicing categories. Journal of the Acoustical Society of America, 92(2).

FIG 6b. Distributions of VOT values at various syllable durations for /b/ (squares) and /p/ (diamonds) in /bi/ and /pi/ productions. Outlining of a token symbol indicates a 20% difference between English listeners and Korean listeners. [Panel annotations mark regions of more vs. fewer /p/ -> /b/ errors and /b/ -> /p/ errors.]

FIG 5. VOT values at the estimated category boundary for each listener group, as a function of syllable duration (ms).

Listener group    Age (years)       Time in the US (months)
Mono. Japanese    (M = 32.5)        almost none
Adv. Japanese     (M = 24.4)        3-84 (M = 40)
Mono. Korean      19-25 (M = 23)    almost none
Adv. Korean       24-31 (M = 28)    3-72 (M = 40)
English natives   18-23 (M = 20)    -
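FIG 5 above compares the estimated category boundary across the five listener groups. The sketch below shows one way such per-group boundary and steepness estimates could be obtained, by fitting a separate logistic regression per group and reading the boundary off the coefficients. The group labels follow the poster's abbreviations, but the data are simulated and the boundary, rate-slope, and steepness values are invented solely to mimic the qualitative pattern reported in the Results (monolingual boundaries shifted toward the native language, shallower functions for monolinguals); this is not the authors' analysis or their estimates.

```python
# Illustrative sketch: simulated responses for five listener groups; all parameter
# values below are assumptions, not estimates from the poster.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Assumed per-group parameters: boundary VOT (ms) at a 125 ms syllable duration,
# growth of the boundary per ms of syllable duration, and response steepness.
groups = {
    "Eng":    (25.0, 0.10, 0.30),
    "Adv J":  (22.0, 0.10, 0.25),
    "Mono J": (18.0, 0.10, 0.15),
    "Adv K":  (28.0, 0.10, 0.25),
    "Mono K": (32.0, 0.10, 0.15),
}

rows = []
for name, (b125, rate_slope, steep) in groups.items():
    vot = rng.uniform(0, 100, 1500)
    dur = rng.uniform(80, 300, 1500)
    boundary = b125 + rate_slope * (dur - 125)
    resp_p = rng.binomial(1, 1 / (1 + np.exp(-steep * (vot - boundary))))
    rows.append(pd.DataFrame({"group": name, "vot": vot, "syll_dur": dur, "resp_p": resp_p}))
data = pd.concat(rows, ignore_index=True)

# One logistic fit per group; report the boundary at a 125 ms syllable duration
# and the VOT coefficient (a proxy for how steep the identification function is).
for name, sub in data.groupby("group", sort=False):
    fit = smf.logit("resp_p ~ vot + syll_dur", data=sub).fit(disp=False)
    b0, b_vot, b_dur = fit.params["Intercept"], fit.params["vot"], fit.params["syll_dur"]
    boundary_125 = -(b0 + b_dur * 125) / b_vot
    print(f"{name:7s} boundary at 125 ms: {boundary_125:5.1f} ms VOT, VOT coefficient: {b_vot:.2f}")
```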
Acknowledgements
This work is supported by the NIDCD (grant #R03 DC A2) and by the NSF (grant #BCS ). We thank Professor Minoru Fukuda of Miyazaki Municipal University and Professor Jin-young Tak of Sejong University for their support in conducting the perception experiments in Japan and Korea.

FIG 6a. Distributions of VOT values at various syllable durations for /b/ (squares) and /p/ (diamonds) in /bi/ and /pi/ productions; axes are VOT (ms) and syllable duration (ms). Outlining of a token symbol indicates a 20% difference between English listeners and Japanese listeners. [Panel annotations mark regions of more vs. fewer /p/ -> /b/ errors and /b/ -> /p/ errors.]

FIG 1. Schematized figures showing differences in VOT values for Korean, Japanese, and English stops. [Category labels along the VOT continuum (ms): English [b], [ph]; Korean [p'], [p], [ph]; Japanese [b], [p].]

[Identification panels: predicted percent /p/ responses as a function of VOT (ms) for the Eng, Adv J, and Mono J groups and for the Eng, Adv K, and Mono K groups.]

[Schematic: L1 (Japanese) and L2 (English) criterion functions for /b/ vs. /p/ across rates; tokens affected by the criterion mismatch occur throughout the distribution.]
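The deviation analysis summarized in FIG 6a/6b (and in the criterion-mismatch schematic above) can be made concrete with a short sketch: for each token, compare a non-native group's /p/-identification rate with the native group's and flag tokens where the two differ by 20% or more. The data here are simulated response proportions, not the poster's responses; the boundary values and column names are assumptions chosen only to mirror the direction of shift described in the Results (a non-native boundary at shorter VOTs, yielding extra /b/ -> /p/ errors).

```python
# Illustrative sketch: simulated identification proportions stand in for the real
# per-token response data; boundary values and column names are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_tokens = 84  # e.g., 21 spliced stimuli x 4 speakers (illustrative count)

tokens = pd.DataFrame({
    "vot": rng.uniform(0, 100, n_tokens),         # VOT of the middle syllable (ms)
    "syll_dur": rng.uniform(80, 300, n_tokens),   # syllable duration (ms)
    "intended": rng.choice(["b", "p"], n_tokens), # intended category of the production
})

def prop_p(vot, syll_dur, boundary_at_125, rate_slope=0.10, steepness=0.2):
    """Simulated proportion of /p/ responses: a rate-dependent criterion
    (boundary VOT grows with syllable duration) plus a logistic choice rule."""
    boundary = boundary_at_125 + rate_slope * (syll_dur - 125)
    return 1 / (1 + np.exp(-steepness * (vot - boundary)))

# Hypothetical criteria: the non-native boundary sits at a shorter VOT than the
# native one, which produces extra /b/ -> /p/ mislabelings near the boundary.
tokens["p_native"] = prop_p(tokens["vot"], tokens["syll_dur"], boundary_at_125=25)
tokens["p_nonnative"] = prop_p(tokens["vot"], tokens["syll_dur"], boundary_at_125=15)

# Per-token deviation of the non-native group from the native group.
tokens["deviation"] = tokens["p_nonnative"] - tokens["p_native"]

# FIG 6-style outlining: flag tokens whose identification differs by 20% or more.
tokens["outlined"] = tokens["deviation"].abs() >= 0.20

# Direction of the extra errors relative to the intended category.
more_b_to_p = (tokens["intended"] == "b") & (tokens["deviation"] > 0)
more_p_to_b = (tokens["intended"] == "p") & (tokens["deviation"] < 0)

print(f"outlined tokens: {int(tokens['outlined'].sum())} of {n_tokens}")
print(f"tokens with extra /b/ -> /p/ errors: {int(more_b_to_p.sum())}")
print(f"tokens with extra /p/ -> /b/ errors: {int(more_p_to_b.sum())}")
```

Under the distributional account argued for in the Summary, such criterion mismatches affect tokens throughout the VOT-by-duration distribution rather than only at extreme rates, which is the pattern the outlining in FIG 6a/6b is meant to display.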