Phonetic Context Effects
Major Theories of Speech Perception
- Motor Theory: a specialized module (in the later version) represents speech sounds in terms of intended gestures, through a model of, or knowledge of, vocal tracts. Explanatory level = gesture.
- Direct Realism: the perceptual system recovers (phonetically relevant) gestures by picking up the specifying information in the speech signal. Explanatory level = gesture.
- General Approaches: speech is processed in the same way as other sounds; representation is a function of the auditory system and experience with language. Explanatory level = sound.
Fluent Speech Production
- The vocal tract is subject to physical constraints: mass, inertia.
- Coarticulation = assimilation: adjacent speech sounds become more similar.
- Coarticulation is also a result of the motor plan, not only physics.
- The result is radical context dependency.
An Example: Place of Articulation in Stops
Say /da/ (anterior) vs. say /ga/ (posterior).
An Example: Place of Articulation in Stops
Say /al/ (anterior) vs. say /ar/ (posterior).
An Example: Place of Articulation in Stops
Say /al da/, /ar da/, /al ga/, /ar ga/. The place of articulation of each stop changes with its context = coarticulation.
An Example: Place of Articulation in Stops
Say /ar da/ and /al ga/: coarticulation has acoustic consequences.
[Spectrograms (frequency vs. time) of /al da/, /ar da/, /al ga/, and /ar ga/.] How does the listener deal with this?
Speech Perception
[Figure: /al/ and /ar/ precursors paired with /ga/ and /da/ targets.]
Identifying in Context
[Plot: percent "g" responses across a [ga]-[da] series, for /al/ and /ar/ precursor contexts.]
Direction of Effect
Production: after /al/, the stop is produced more /da/-like; after /ar/, more /ga/-like.
Perception: after /al/, listeners report more /ga/; after /ar/, more /da/.
Perception shifts in the direction opposite to coarticulation.
Perceptual Compensation for Coarticulation
What happens when there is no coarticulation? AT&T Natural Voices Text-to-Speech Engine: "ALL DA" / "ARE GA".
Further Findings
The effect appears in 4½-month-old infants (Fowler et al., 1990) and in native Japanese listeners who do not discriminate /al/ from /ar/ (Mann, 1986).
Theoretical Interpretations: Motor Theory
"There may exist a universally shared level where representation of speech sounds more closely corresponds to articulatory gestures that give rise to the speech signal." (Mann, 1986)
"Presumably human listeners possess implicit knowledge of coarticulation." (Repp, 1982)
Major Theories of Speech Perception
- Motor Theory: "knowledge" of coarticulation allows the perceptual system to compensate for its predicted effects on the speech signal.
- Direct Realism: coarticulation is information for the gestures involved; the signal is parsed along gestural lines, and coarticulation is assigned to its gesture.
- General Approaches: those other guys are wrong.
Theoretical Interpretations
Common thread: a detailed correspondence between speech production and perception, implemented by a special module for speech perception.
Two predictions: compensation should be talker-specific and speech-specific.
Testing Hypothesis #1: Talker-Specific
The system should compensate only for speech coming from a single talker.
Testing Hypothesis #1: Talker-Specific
Design: male or female /al/ and /ar/ precursors, followed by a male /da/-/ga/ test series.
Testing Hypothesis #2: Speech-Specific
Compensation should occur only for speech sounds. Conditions: /al/ and /ar/ presented as speech vs. as nonspeech tones.
Testing Hypothesis #2
Does this rule out motor theory? It may be that the special speech module is broadly tuned: if a sound acts like speech, it went through the speech module; if not, not. [/al/ and /ar/ speech precursors.]
Training the Quail
Quail were trained on tokens from a CV series varying in F3 onset frequency, with /da/ at one end and /ga/ at the other. [Figure: endpoint steps used in training; the intermediate steps and the /al/ and /ar/ contexts were withheld from training.]
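To make the design concrete, here is a minimal Python sketch of the training/generalization split; the series length and the exact step assignments are assumptions for illustration, since the slide's figure did not survive.

```python
# Hypothetical sketch of the quail design described above; the step numbers
# and label assignments are assumptions, not taken from the source.

SERIES_STEPS = list(range(1, 8))                 # assumed 7-step /da/-/ga/ series (F3 onset varies)
TRAINED = {1: "da", 2: "da", 6: "ga", 7: "ga"}   # assumed endpoint training tokens
WITHHELD = [s for s in SERIES_STEPS if s not in TRAINED]  # middle steps: never trained
CONTEXTS = ["al", "ar"]                          # precursors, also withheld from training

def generalization_trials():
    """Test trials: each withheld series step preceded by each untrained context."""
    for context in CONTEXTS:
        for step in WITHHELD:
            yield (context, step)                # dependent measure: pecks to the test token

for trial in generalization_trials():
    print(trial)
```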
Context-Dependent Speech Perception by an Avian Species
[Plot: normalized response (pecks or "GA" responses) as a function of context.]
Conclusions
1. Links to speech production are not necessary: the effect is neither speech-specific nor species-specific.
2. Learning is not necessary: the quail had no experience with the covariation.
3. General auditory processes play a substantive role in perceptual compensation for coarticulation.
Major Theories of Speech Perception
- Motor Theory: "knowledge" of coarticulation allows the perceptual system to compensate for its predicted effects on the speech signal.
- Direct Realism: coarticulation is information for the gestures involved; the signal is parsed along gestural lines, and coarticulation is assigned to its gesture.
- General Approaches: general auditory processes (GAP).
Effects of Context: A Familiar Example
How well does this analogy hold up for context effects in speech?
[Spectrograms (frequency vs. time) of /al da/, /ar da/, /al ga/, and /ar ga/, revisited.]
Hypothesis: Spectral Contrast (the case of [ar])
Production: in /ar da/, the stop's F3 onset is assimilated toward the lower F3 frequency of /ar/.
Perception: following the low-frequency F3 of /ar/, the stop's F3 is perceived as a higher frequency.
[Plot: percent /ga/ responses by F3 step.]
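As a reading aid, a toy numerical sketch of this contrast account follows; the category boundary, the contrast gain, and all frequencies are invented for illustration and are not fitted to any data.

```python
# Toy illustration of the spectral contrast hypothesis. All values are
# hypothetical; the point is only the direction of the shift.

DA_GA_BOUNDARY_HZ = 2500.0   # assumed F3-onset category boundary
CONTRAST_GAIN = 0.15         # assumed strength of the contrastive shift

def perceived_f3(actual_f3_hz, context_f3_hz):
    """The effective F3 onset is pushed away from the context's F3 frequency."""
    return actual_f3_hz + CONTRAST_GAIN * (actual_f3_hz - context_f3_hz)

def categorize(actual_f3_hz, context_f3_hz):
    """High effective F3 onsets sound /da/-like; low ones sound /ga/-like."""
    return "da" if perceived_f3(actual_f3_hz, context_f3_hz) > DA_GA_BOUNDARY_HZ else "ga"

ambiguous_f3 = 2450.0  # a series step just below the boundary
print(categorize(ambiguous_f3, context_f3_hz=1800.0))  # after low-F3 /ar/: "da"
print(categorize(ambiguous_f3, context_f3_hz=3000.0))  # after high-F3 /al/: "ga"
```

The sketch reproduces the observed direction of the effect: more /da/ responses after /ar/ and more /ga/ responses after /al/, with no reference to gestures.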
Evidence for General Approach
The Empire Strikes Back
Fowler et al. (2000)
Audio: an ambiguous precursor followed by a /ga/-/da/ test series.
Video: a face articulating "AL" or "AR".
The precursor conditions differed only in visual information.
Results of Fowler et al. (2000)
More /ga/ responses when the video cued /al/.
Experiment 1: Results
No context effect on the test syllable, F(1,8) = 3.2, p = .111. [Plot: %/ga/ responses by condition for 9 participants.]
A closer look…
Two videos: /alda/ and /arda/. The video information during test-syllable presentation should be the same in both conditions.
Is the video during the test syllable more consistent with /ga/, or more consistent with /da/? (/alda/ video vs. /arda/ video)
Results
Comparisons
Conclusions
1. No evidence of a visually mediated phonetic context effect.
2. No evidence that gestural information is required.
3. Spectral contrast is the best current account.
But what about backwards effects?
The Stimulus Paradigm
Over time: a /da/-/ga/ speech target, a noise burst (/t/ or /k/), and a sine-wave tone context (high or low frequency). Depending on the burst and the tone frequency, the percept is "got", "dot", "gawk", or "dock". [Spectrogram schematic: frequency (Hz) vs. time (ms).]
Speaker Normalization: Ladefoged & Broadbent (1957)
Carrier sentence: "Please say what this word is…" (original, F1 raised, or F1 lowered) + target word ("bit", "bet", "bat", "but").
The target's acoustics were constant, yet target perception shifted with changes in the apparent "speaker": the spectral characteristics of the preceding sentence predicted perception.
This is 'talker/speaker normalization'; compare sensitivity to accent, etc.
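A toy sketch of one way to think about this result follows: the same target F1 is interpreted relative to the carrier's F1. The frequencies and the simple relative rule are illustrative assumptions, not Ladefoged & Broadbent's analysis.

```python
# Hypothetical illustration: the target vowel is heard relative to the
# carrier sentence's F1, so identical target acoustics yield different
# percepts. All frequencies are invented for illustration.

def categorize_target(target_f1_hz, carrier_mean_f1_hz):
    """Relatively high F1 sounds like 'bet'; relatively low F1 sounds like 'bit'."""
    return "bet" if target_f1_hz > carrier_mean_f1_hz else "bit"

TARGET_F1 = 500.0  # constant target acoustics (hypothetical Hz)
print(categorize_target(TARGET_F1, carrier_mean_f1_hz=450.0))  # F1-lowered carrier -> "bet"
print(categorize_target(TARGET_F1, carrier_mean_f1_hz=550.0))  # F1-raised carrier -> "bit"
```

Like the phonetic context effects above, the shift is contrastive: perception moves away from the spectral characteristics of the context.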
Experiment Model
Natural speech, with F2 and F3 onsets edited to create a 9-step series varying perceptually from /ga/ (step 1) to /da/ (step 9). Each speech token is 589 ms.
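A minimal sketch of laying out such a series follows; the endpoint onset frequencies are assumptions chosen to be phonetically plausible (the velar pinch brings F2 and F3 together for /ga/), not the study's actual values.

```python
import numpy as np

# Hypothetical onset frequencies for a 9-step /ga/-/da/ series; only the
# 9-step structure and the F2/F3 editing are from the slide.

N_STEPS = 9
f3_onsets = np.linspace(2000.0, 2900.0, N_STEPS)  # /ga/: low F3 onset -> /da/: high F3 onset
f2_onsets = np.linspace(1900.0, 1700.0, N_STEPS)  # /ga/: F2 near F3 -> /da/: F2 well below F3

for step, (f2, f3) in enumerate(zip(f2_onsets, f3_onsets), start=1):
    print(f"step {step}: F2 onset {f2:.0f} Hz, F3 onset {f3:.0f} Hz")
```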
No Effect of Adjacent Context with Intermediate Spectral Characteristics
Sequence: standard tone (2300 Hz, 70 ms), silent interval (50 ms), speech token (589 ms).
Pilot test: no context effect on speech perception, t(9) = 1.35, p = .21.
Acoustic Histories
Acoustic history: the critical context stimulus for these experiments is not a single sound but a distribution of sounds: 21 70-ms tones sampled from a distribution, with a 30-ms silent interval between tones.
Sequence: acoustic history (2100 ms), silent interval (50 ms), standard tone (70 ms), speech token (589 ms).
Acoustic History Distributions
Tone frequencies span roughly 1300-3300 Hz. High-mean condition: 2800 Hz. Low-mean condition: 1800 Hz. [Histograms: frequency of presentation vs. tone frequency (Hz).]
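A minimal synthesis sketch of one such history follows; the tone count, durations, gaps, and condition means are from the slides, while the sampling rate, the uniform distribution shape, and the ±500 Hz spread are assumptions.

```python
import numpy as np

# Sketch of one acoustic-history context: 21 tones of 70 ms with 30-ms
# silent gaps, frequencies sampled around the condition mean. The sampling
# rate, distribution shape, and +/-500 Hz spread are assumptions.

FS = 22050                                      # assumed sampling rate (Hz)
N_TONES, TONE_DUR, GAP_DUR = 21, 0.070, 0.030   # from the slides

def pure_tone(freq_hz, dur_s):
    t = np.arange(int(dur_s * FS)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def acoustic_history(mean_hz, spread_hz=500.0, rng=None):
    """Draw a fresh tone sample on each call, as each trial resamples the distribution."""
    rng = rng or np.random.default_rng()
    freqs = rng.uniform(mean_hz - spread_hz, mean_hz + spread_hz, N_TONES)
    gap = np.zeros(int(GAP_DUR * FS))
    pieces = []
    for f in freqs:
        pieces += [pure_tone(f, TONE_DUR), gap]
    return np.concatenate(pieces)  # 21 x (70 + 30) ms = 2100 ms, matching the slide

high_history = acoustic_history(2800.0)  # high-mean condition
low_history = acoustic_history(1800.0)   # low-mean condition
```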
Example Stimuli
[Spectrograms (frequency in Hz vs. time in ms): panel A, 2800-Hz-mean history; panel B, 1800-Hz-mean history.]
Characteristics of the Context
- The context is not local: the standard tone immediately precedes each stimulus, independent of condition, and on its own it has no effect on the /ga/-/da/ stimuli.
- The context is defined by distribution characteristics: the sampling of the distribution varies on each trial, so the precise acoustic characteristics vary from trial to trial.
- The context unfolds over a broad time course: acoustic history (2100 ms), silent interval (50 ms), standard tone (70 ms), speech token (589 ms).
Results
The effect of the acoustic history is contrastive, p < .0001.
Notched Noise Histories
Noise with a 100-Hz bandwidth notch for each sampled frequency. [Spectrograms (frequency in Hz vs. time in ms), panels A and B.]
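A sketch of generating one notched-noise burst follows: white noise with a 100-Hz-wide band-stop notch. The notch bandwidth is from the slide; the filter order, burst duration, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 22050  # assumed sampling rate (Hz)

def notched_noise(notch_center_hz, dur_s=0.070, notch_bw_hz=100.0):
    """White noise with a 100-Hz-wide spectral notch at the given center frequency."""
    noise = np.random.default_rng().standard_normal(int(dur_s * FS))
    low = notch_center_hz - notch_bw_hz / 2
    high = notch_center_hz + notch_bw_hz / 2
    b, a = butter(4, [low, high], btype="bandstop", fs=FS)  # 4th-order Butterworth band-stop
    return lfilter(b, a, noise)

burst = notched_noise(2800.0)  # notch at a frequency drawn from the condition's distribution
```

Here the distributional information is presumably carried by where energy is absent rather than present, which makes the notched-noise condition a useful complement to the tone histories.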
Results
Both context types produce significant effects: tones, p < .04; notched noise, p < .01 (N = 10).
Joint Effects?
Tone condition: acoustic history (2100 ms), silent interval (50 ms), standard tone (70 ms), speech token (589 ms).
Speech condition: acoustic history (2100 ms), silent interval (50 ms), /al/ or /ar/ (300 ms), speech token (589 ms).
Conflicting contexts: e.g., high mean + /ar/. Cooperating contexts: e.g., high mean + /al/.
Interaction of Speech and Nonspeech Contexts
Cooperating contexts: a significantly greater effect than speech alone (p = .007).
Conflicting contexts: an effect of the same magnitude as speech alone but in the opposite direction (p < .0001), following the nonspeech spectra (p = .009).