Continuous detail is used in language comprehension and language learning: Temporal Integration at Two Time Scales. Bob McMurray, University of Iowa, Dept. of Psychology.
Collaborators: Richard Aslin, Michael Tanenhaus, David Gow, Joe Toscano, Cheyenne Munson, Meghan Clayards, Dana Subik, Julie Markant, and the students of the MACLab.
Temporal integration is a critical problem for cognition: information never arrives synchronously. Vision: integration across head movements, saccades, and attention shifts. Music perception: long-term dependencies and short-term expectancies.
In language, information arrives sequentially.
Partial syntactic and semantic representations are formed as words arrive ("The Hawkeyes beat the Illini (once)"). Words are identified over sequential phonemes.
Spoken word recognition is an ideal arena in which to study these issues because: we have a clear understanding of the input (from phonetics); speech production gives us a lot of rich temporal information to use in this way; and the output is easy to measure online (the visual world paradigm).
Scales of temporal integration in word recognition
A word: an ordered series of articulations.
- Build abstract representations.
- Form expectations about future events.
- Fast (online) processing.
A phonology:
- Abstract across utterances.
- Expectations about possible future events.
- Slow (developmental) processing.
Mechanisms of Temporal Integration
Stimuli do not change arbitrarily; perceptual cues reveal something about the change itself. Active integration: anticipating future events, retaining partial present representations, and resolving prior ambiguity.
Overview: 1) Speech perception and spoken word recognition. 2) Lexical activation is sensitive to fine-grained detail in speech. 3) Fast temporal integration: taking advantage of regularity in the signal. 4) Slow temporal integration: developmental consequences.
Online Word Recognition: information arrives sequentially.
At early points in time, the signal is temporarily ambiguous: the onset "ba…" is consistent with bakery, basic, barrier, barricade, bait, baby. Later-arriving information disambiguates the word.
Current models of spoken word recognition
Immediacy: Hypotheses formed from the earliest moments of input. Activation Based: Lexical candidates (words) receive activation to the degree they match the input. Parallel Processing: Multiple items are active in parallel. Competition: Items compete with each other for recognition.
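These four principles can be illustrated with a toy sketch. This is invented for this summary (not TRACE or any published model); the lexicon, the prefix-matching score, and the normalization rule are all illustrative assumptions:

```python
# Toy sketch of immediacy, activation, parallelism, and competition.
# The scoring and normalization rules are illustrative, not a published model.

def activate(lexicon, heard):
    """Activation over the lexicon after hearing a phoneme prefix."""
    scores = {}
    for word in lexicon:
        # Immediacy + activation: score each word by how much of the
        # input heard so far it matches, starting from the first segment.
        match = 0
        for w_seg, h_seg in zip(word, heard):
            if w_seg != h_seg:
                break
            match += 1
        scores[word] = match
    # Parallelism: every word keeps a score.
    # Competition: normalize, so candidates compete for a fixed total.
    total = sum(scores.values()) or 1
    return {w: s / total for w, s in scores.items()}

lexicon = ["butter", "bump", "beach", "putter", "dog"]
acts = activate(lexicon, "bu")  # after hearing "bu..."
```

On this toy lexicon, "butter" and "bump" are tied after "bu…", "beach" is partially active, and mismatching words get nothing, mirroring the parallel-activation picture described above.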
[Figure: lexical activation over time as the input "b… u… tt… e… r" unfolds, for beach, butter, bump, putter, and dog.]
These processes have been well defined for a phonemic representation of the input (e.g., /k a g n I S n/). But there is considerably less ambiguity if we consider subphonemic information. Example: subphonemic effects of motor processes.
Coarticulation: any action reflects future actions as it unfolds. Articulation (lips, tongue…) reflects current, future, and past events, so subtle subphonemic variation in speech reflects temporal organization (e.g., the vowel in "net" vs. "neck" already differs with the upcoming consonant). Sensitivity to these perceptual details might yield earlier disambiguation.
These processes have largely been ignored because of a history of evidence that perceptual variability gets discarded. Example: Categorical Perception
Categorical Perception
Sharp identification of tokens on a continuum. [Figure: % /p/ identification as a function of VOT, rising steeply from B to P.] Discrimination is poor within a phonetic category. Subphonemic variation in VOT is discarded in favor of a discrete symbol (the phoneme).
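The categorical pattern can be sketched with a logistic identification function; the boundary and slope values here are made up for illustration, not fitted to any data:

```python
import math

def p_voiceless(vot_ms, boundary=17.5, slope=1.0):
    """P(/p/ response) for a given VOT in ms; illustrative parameters."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

# Identification flips sharply at the boundary: the same 10 ms step in
# VOT barely changes the predicted label inside a category, but changes
# it almost completely across the boundary ("poor within-category
# discrimination").
within = p_voiceless(10) - p_voiceless(0)        # 10 ms step inside /b/
across = p_voiceless(22.5) - p_voiceless(12.5)   # 10 ms step across boundary
```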
Evidence against the strong form of Categorical Perception from psychophysical-type tasks:
Discrimination tasks: Pisoni & Tash (1974); Pisoni & Lazarus (1974); Carney, Widin & Viemeister (1977). Training: Samuel (1977); Pisoni, Aslin, Perey & Hennessy (1982). Goodness ratings: Miller (1997); Massaro & Cohen (1983).
Experiment 1: Does within-category acoustic detail systematically affect higher-level language? Is there a gradient effect of subphonemic detail on lexical activation?
McMurray, Aslin & Tanenhaus (2002)
A gradient relationship would yield systematic effects of subphonemic information on lexical activation. If this gradiency is useful for temporal integration, it must be preserved over time. Need a design sensitive to both acoustic detail and detailed temporal dynamics of lexical activation.
Acoustic detail: use a speech continuum; more steps yield a better picture of the acoustic mapping. KlattWorks: generate synthetic continua from natural speech. 9-step VOT continua (0–40 ms); 6 pairs of words: beach/peach, bale/pale, bear/pear, bump/pump, bomb/palm, butter/putter. Fillers: lamp, leg, lock, ladder, lip, leaf; shark, shell, shoe, ship, sheep, shirt.
Temporal Dynamics: How do we tap on-line recognition?
With an on-line task: eye movements. Subjects hear spoken language and manipulate objects in a visual world. The visual world includes a set of objects with interesting linguistic properties: a beach, a peach, and some unrelated items. Eye movements to each object are monitored throughout the task (Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995).
Why use eye-movements and visual world paradigm?
Relatively natural task. Eye movements are generated very fast (within 200 ms of the first bit of information) and are time-locked to speech. Subjects aren't aware of their eye movements. Fixation probability maps onto lexical activation.
Task: a moment to view the items.
Task: "Bear." Repeat 1080 times.
Identification Results
[Figure: proportion of /p/ responses as a function of VOT (5–40 ms), rising from B to P.] High agreement across subjects and items for the category boundary. By subject: ±1.33 ms; by item: ±1.24 ms.
Task: Target = bear; Competitor = pear; Unrelated = lamp, ship. [Figure: schematic of % fixations over time, averaged across trials, with a ~200 ms lag between signal and eye movement.]
[Figure: fixation proportions over time (400–2000 ms) for VOT = 0 ms (response B) and VOT = 40 ms (response P).] More looks to the competitor than to unrelated items.
Task: Given that the subject heard "bear" and clicked on "bear"… how often was the subject looking at the "pear"? [Figure: predicted fixation proportions over time for target and competitor under categorical vs. gradient accounts.]
Results: [Figure: competitor fixations over time since word onset, for VOTs of 0–40 ms in 5 ms steps, split by response (B vs. P).] Long-lasting gradient effect, seen throughout the timecourse of processing.
[Figure: competitor fixations as a function of VOT (5–40 ms), split by response (looks to B vs. looks to P), with the category boundary marked.] Clear effects of VOT. Linear trend: B: p=.017, P: p<.001. Area under the curve: B: p=.023, P: p=.002.
Unambiguous stimuli only: [Figure: the same analysis restricted to unambiguous stimuli; competitor fixations as a function of VOT, split by response, with the category boundary marked.] Clear effects of VOT. Linear trend: B: p=.014, P: p=.001. Area under the curve: B: p=.009, P: p=.007.
Summary: Subphonemic acoustic differences in VOT have a gradient effect on lexical activation: a gradient effect of VOT on looks to the competitor, which holds even for unambiguous stimuli and seems to be long-lasting. Consistent with a growing body of work using priming (Andruski, Blumstein & Burton, 1994; Utman, Blumstein & Burton, 2000; Gow, 2001, 2002).
Sensitivity & Use: 1) Word recognition is systematically sensitive to subphonemic acoustic detail. 2) Acoustic detail is represented as gradations in activation across the lexicon. 3) This sensitivity enables the system to take advantage of subphonemic regularities for temporal integration. 4) This has fundamental consequences for development: learning phonological organization.
Lexical Sensitivity: Word recognition is systematically sensitive to subphonemic acoustic detail: voicing; laterality, manner, and place; natural speech; vowel quality.
Open questions: non-minimal pairs? Duration of the effect (experiment 1)?
2) Acoustic detail is represented as gradations in activation across the lexicon (a lexical basis vector for speech). [Figure: activation over time as the input "b… u… m… p…" unfolds, for bump, pump, dump, bun, bumper, and bomb.]
Temporal Integration: 3) This sensitivity enables the system to take advantage of subphonemic regularities for temporal integration. Regressive ambiguity resolution (exp 1): ambiguity is retained until more information arrives. Progressive expectation building (exp 2): phonetic distinctions are spread over time, allowing the system to anticipate upcoming material.
Development: 4) Consequences for development: learning phonological organization. Learning a language means integrating input across many utterances to build long-term representations. Sensitivity to subphonemic detail (exps 4 & 5) allows statistical learning of categories (model).
Experiment 2: How long are gradient effects of within-category detail maintained? Can subphonemic variation play a role in ambiguity resolution? How is information at multiple levels integrated?
Misperception: What if the initial portion of a stimulus was misperceived? If the competitor is still active, it is easy to activate it the rest of the way; if the competitor is completely inactive, the system will "garden-path". P(misperception) varies with distance from the category boundary. Gradient activation allows the system to hedge its bets.
barricade vs. parakeet: /beIrəkeId/ vs. /peIrəkit/. [Figure: activation of parakeet and barricade over time as the input "p/b eI r ə k i t…" unfolds, under a categorical lexicon vs. gradient sensitivity.]
Methods: 10 pairs of voiced/voiceless items, with phonological overlap shown where recoverable: Bumpercar/Pumpernickel (6), Barricade/Parakeet (5), Bassinet/Passenger, Blanket/Plankton, Beachball/Peachpit (4), Billboard/Pillbox, Drain Pipes/Train Tracks, Dreadlocks/Treadmill, Delaware/Telephone, Delicatessen/Television.
Eye Movement Results (Barricade -> Parricade): [Figure: fixations to target over time (300–900 ms), by VOT (5–35 ms).] Faster activation of the target as VOT nears the lexical endpoint, even within the non-word range.
[Figure: fixations to target over time for both directions (Barricade -> Parricade and Parakeet -> Barakeet), by VOT (5–35 ms).] The same pattern holds in both directions: faster activation of the target as VOT nears the lexical endpoint, even within the non-word range.
Experiment 2: Garden-path analyses (b/parricade). Is the latency to switch to the target related to VOT? An accelerated latency indicates more (residual) target activation. The data are sparse, but: B: p=.002; P: p=.007.
Experiment 2 Extensions: 1) Replication with longer continua (0–45 ms). 2) The effect holds with non-displayed competitors.
Experiment 2 Conclusions
Gradient effect of within-category variation without minimal pairs. The gradient effect is long-lasting (mean POD = 240 ms). Regressive ambiguity resolution: subphonemic gradations are maintained until more information arrives, and subphonemic gradation can improve (or hinder) recovery from a garden path.
Progressive Expectation Formation: Can within-category detail be used to predict future acoustic/phonetic events? Yes: phonological regularities create systematic within-category variation, and that variation predicts future events.
Experiment 3: Anticipation. Word-final coronal consonants (n, t, d) assimilate the place of the following segment: "maroong goose" vs. "maroon duck". Place assimilation creates ambiguous segments, but it also lets listeners anticipate upcoming material. [Figure: activation of maroon, goose, goat, and duck over time as the input "m…a…rr…oo…ng… g…oo…s…" unfolds.]
Subject hears "select the maroon duck", "select the maroon goose", "select the maroong goose", or "select the maroong duck" (anomalous). We should see faster eye movements to "goose" after assimilated consonants.
Results: [Figure: looks to "goose" as a function of time (0–600 ms), assimilated vs. non-assimilated, with the onset of "goose" plus oculomotor delay marked.] Anticipatory effect on looks to the non-coronal.
[Figure: looks to "duck" as a function of time, assimilated vs. non-assimilated, with the onset of "goose" plus oculomotor delay marked.] Inhibitory effect on looks to the coronal (duck, p=.024).
Experiment 3 Extensions: a possible lexical locus ("green/m boat", "eight/ape babies"). Assimilation creates competition.
Sensitivity to subphonemic detail increases the priors on likely upcoming events and decreases the priors on unlikely ones: an active temporal integration process. A possible lexical mechanism: occasionally assimilation creates ambiguity ("mudg drinker"), and within-category detail resolves prior ambiguity, similar to experiment 2.
Adult Summary: Lexical activation is exquisitely sensitive to within-category detail, and this sensitivity is used to integrate material over time: regressive ambiguity resolution and progressive facilitation, taking advantage of phonological and lexical regularities.
Development: Historically, work in speech perception has been linked to development, and sensitivity to subphonemic detail must revise our view of development. Infants face additional temporal integration problems: no lexicon is available to clean up noisy input, so they must rely on acoustic regularities, and they must extract a phonology from a series of utterances.
Sensitivity to subphonemic detail: For 30 years, virtually all attempts to address this question in infants have yielded categorical discrimination (e.g., Eimas, Siqueland, Jusczyk & Vigorito, 1971). The exception, Miller & Eimas (1996), found sensitivity only at extreme VOTs, and only when infants were habituated to a non-prototypical token.
Use? Nonetheless, infants possess abilities that would require within-category sensitivity. Infants can use allophonic differences at word boundaries for segmentation (Jusczyk, Hohne & Bauman, 1999; Hohne & Jusczyk, 1994), and they can learn phonetic categories from distributional statistics (Maye, Werker & Gerken, 2002; Maye & Weiss, 2004).
Statistical Category Learning: Speech production causes clustering along contrastive phonetic dimensions, e.g. voicing / voice onset time (VOT): B: VOT ≈ 0 ms; P: VOT ≈ 40 ms. Within a category, VOT forms a Gaussian distribution; the overall result is a bimodal distribution.
To statistically learn speech categories, infants must: 1) record the frequencies of tokens at each value along a stimulus dimension (e.g., VOT between 0 and 50 ms), and 2) extract categories (+voice, -voice) from the distribution. This requires the ability to track specific VOTs.
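Those two steps can be sketched with made-up VOT tokens; the token values, the 5 ms bins, and the gap-splitting rule are illustrative assumptions, not the experiment's:

```python
# Step 1: record token frequencies along the VOT dimension (5 ms bins).
tokens = [0, 2, 3, 5, 5, 7, 10,        # /b/-like cluster near 0-10 ms
          35, 38, 40, 40, 42, 43, 45]  # /p/-like cluster near 40 ms
bins = {v: 0 for v in range(0, 50, 5)}
for t in tokens:
    bins[(t // 5) * 5] += 1

# Step 2: extract categories by splitting at the unpopulated gap
# in the bimodal distribution.
empty = [v for v, n in bins.items() if n == 0]
boundary = sum(empty) / len(empty)  # midpoint of the empty middle bins
voiced = [t for t in tokens if t < boundary]
voiceless = [t for t in tokens if t >= boundary]
```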
Experiment 4: Why have there been no demonstrations of sensitivity? Habituation designs measure discrimination, not identification, and risk selective adaptation and attenuated sensitivity. Synthetic speech is not ideal for infants. A single exemplar per continuum does not necessarily tap a category representation. Experiment 4 reassesses the issue with improved methods.
HTPP: Head-Turn Preference Procedure (Jusczyk & Aslin, 1995). Infants are exposed to a chunk of language: words in running speech, a stream of continuous speech (as in statistical learning paradigms), or a word list. Memory for the exposed items (or abstractions over them) is then assessed by comparing listening times between consistent and inconsistent items.
Test trials start with all lights off. The center light blinks, bringing the infant's attention to center. Then one of the side lights blinks. When the infant looks at the side light, he hears a word ("Beach… Beach… Beach…") for as long as he keeps looking.
Methods: 7.5-month-old infants were exposed to either 4 b-words or 4 p-words (80 repetitions total), to form a category of the exposed class of words: peach/beach, pail/bail, pear/bear, palm/bomb. Listening time was then measured on the original words (e.g., bear), on VOTs closer to the boundary (bear*, pear*), and on competitors (pear).
Stimuli were constructed by cross-splicing naturally produced tokens of each endpoint. Mean VOTs: B = 3.6 ms; P = 40.7 ms; B* = 11.9 ms; P* = 30.2 ms. B* and P* were judged /b/ or /p/ at least 90% consistently by adult listeners (B*: 97%; P*: 96%).
Novelty or Familiarity? Novelty/familiarity preference varies across infants and experiments, and we are only interested in the middle stimuli (B*, P*). Infants were therefore classified as novelty- or familiarity-preferring by their performance on the endpoint stimuli (B: 36 novelty, 16 familiarity; P: 21 novelty, 12 familiarity). Within each group, will we see evidence for gradiency?
After being exposed to "bear… beach… bail… bomb…", infants who show a novelty effect will listen longer to pear than to bear. What about in between? [Figure: predicted listening times for bear, bear*, and pear under categorical vs. gradient accounts.]
Results, novelty infants (B: 36, P: 21): [Figure: listening times (ms) for Target, Target*, and Competitor, by exposure group (B vs. P).] Target vs. Target*: p<.001; Competitor vs. Target*: p=.017.
Familiarity infants (B: 16, P: 12): [Figure: listening times (ms) for Target, Target*, and Competitor, by exposure group.] Target vs. Target*: p=.003; Competitor vs. Target*: p=.012.
Infants exposed to /p/: [Figure: listening times (ms) for P, P*, and B, for the novelty (N=21) and familiarity (N=12) groups.] Pairwise differences were significant in both groups (novelty: p=.024, p=.009; familiarity: p=.018, p=.028).
Infants exposed to /b/: [Figure: listening times (ms) for B, B*, and P, for the novelty (N=36) and familiarity (N=16) groups.] The endpoint difference was reliable (p<.001), but comparisons involving B* were weak or null (p>.1, p>.2, p=.06, p=.15).
Experiment 4 Conclusions: Contrary to all previous work, 7.5-month-old infants show gradient sensitivity to subphonemic detail: a clear effect for /p/, an attenuated effect for /b/.
Reduced effect for /b/… but: [Figure: two possible readings of the /b/ pattern, the expected gradient result vs. a null effect for bear*.]
[Figure: actual listening times for bear, bear*, and pear.] The category boundary appears to lie between bear and bear*, i.e. between roughly 3 and 11 ms VOT. Within-category sensitivity in a different range?
Experiment 5: same design as experiment 4, with VOTs shifted away from the hypothesized boundary. Train: bomb, bear, beach, bale at -9.7 ms VOT. Test: bomb, bear, beach, bale at -9.7 ms; bomb*, bear*, beach*, bale* at 3.6 ms; palm, pear, peach, pail at 40.7 ms.
Familiarity infants (N=34): [Figure: listening times (ms) for B-, B, and P; adjacent comparisons significant at p=.05 and p=.01.]
Novelty infants (N=25): [Figure: listening times (ms) for B-, B, and P; adjacent comparisons significant at p=.02 and p=.002.]
Experiment 5 Conclusions: Within-category sensitivity in /b/ as well as /p/. But the shifted category boundary for /b/ is not consistent with the adult boundary (or with prior infant work). Why?
The /b/ results are consistent with (at least) two mappings. 1) A shifted boundary: [Figure: category mapping strength over VOT with the boundary shifted.] But this is inconsistent with the prior literature. Why would infants have this boundary?
2) Sparse categories: [Figure: /b/ and /p/ mapping strength over VOT with unmapped space between them; the adult boundary falls in the unmapped region.] HTPP is a one-alternative task: it asks "B or not-B?", not "B or P?". Hypothesis: sparse categories are a by-product of efficient learning.
Computational Model: a distributional learning model. 1) Model the distribution of tokens as a mixture of Gaussian distributions over a phonetic dimension (e.g., VOT). 2) After receiving an input, the Gaussian with the highest posterior probability is the "category". 3) Each Gaussian has three parameters: a weight (φ), a mean (μ), and a standard deviation (σ).
Statistical Category Learning: 1) Start with a set of randomly selected Gaussians. 2) After each input, adjust each parameter to find the best description of the input. Start with more Gaussians than necessary, since the model doesn't innately know how many categories there are; the weights (φ) of unneeded categories go to 0.
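A minimal online version of such a learner might look like the sketch below. This is an illustration of the general idea, not the model reported here: the learning rate, the fixed starting means (used instead of random ones so the example is reproducible), and the exact update rules are all invented for the example.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian density at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def learn(vots, mu0, sigma0=5.0, lr=0.05):
    """Online mixture-of-Gaussians sketch. Each category has three
    parameters: weight phi, mean mu, and s.d. sigma."""
    k = len(mu0)
    phi = [1.0 / k] * k          # start with more categories than needed
    mu = list(mu0)
    sigma = [sigma0] * k
    for x in vots:
        # Posterior responsibility of each Gaussian for this token.
        post = [phi[i] * gauss(x, mu[i], sigma[i]) for i in range(k)]
        total = sum(post) or 1e-300
        post = [p / total for p in post]
        # Nudge each parameter toward a better description of the token;
        # the weight of a category that rarely wins decays toward 0.
        for i in range(k):
            phi[i] += lr * (post[i] - phi[i])
            mu[i] += lr * post[i] * (x - mu[i])
            sigma[i] += lr * post[i] * (abs(x - mu[i]) - sigma[i])
    return phi, mu, sigma

# Bimodal "speech input": VOT clusters near ~2.5 ms (/b/) and ~39.5 ms (/p/).
vots = [0, 40, 3, 37, 5, 43, 2, 38] * 50
phi, mu, sigma = learn(vots, mu0=[5.0, 15.0, 30.0, 45.0])
```

After training on this input, at least one Gaussian sits near each VOT cluster while the weights still sum to 1.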
Overgeneralization (large σ) is costly: phonetic distinctions are lost.
Undergeneralization (small σ) is not as costly: distinctiveness is maintained.
To increase the likelihood of successful learning, err on the side of caution: start with small σ. [Figure: P(success) as a function of starting σ for 2- and 3-category models; 39,900 models run.]
Sparseness coefficient: the percentage of the space not strongly mapped to any category. [Figure: with a small starting σ (0.5–1), the average sparseness coefficient stays high over training epochs, leaving unmapped space along the VOT dimension.]
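One way to compute such a coefficient, assuming each category maps VOT values with a Gaussian strength profile, is to measure the fraction of a VOT grid where no category's strength exceeds a threshold. The grid, threshold, and parameter values below are illustrative assumptions, not the simulation's:

```python
import math

def sparseness(mus, sigmas, phis, lo=0.0, hi=50.0, n=101, thresh=0.1):
    """Fraction of the [lo, hi] VOT grid not strongly mapped to any category."""
    unmapped = 0
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        # Mapping strength at x: the strongest weighted Gaussian response.
        strength = max(
            phi * math.exp(-((x - mu) ** 2) / (2 * s ** 2))
            for mu, s, phi in zip(mus, sigmas, phis))
        if strength < thresh:
            unmapped += 1
    return unmapped / n

# Narrow categories at 0 and 40 ms leave much of the space unmapped;
# broad categories cover it.
s_narrow = sparseness([0, 40], [3, 3], [0.5, 0.5])
s_wide = sparseness([0, 40], [15, 15], [0.5, 0.5])
```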
[Figure: starting with a large σ (20–40), the average sparsity coefficient drops rapidly over training, compared with a small starting σ (0.5–1).]
[Figure: intermediate starting σ (3–11 and 12–17) yields intermediate sparsity over training, between the small (0.5–1) and large (20–40) starting values.]
Model Conclusions: To avoid overgeneralization, it is better to start with small estimates for σ. Small or even medium starting σ's lead to a sparse category structure during infancy; much of phonetic space is unmapped. Sparse categories, as in experiment 2, retain ambiguity (and partial representations) until more input is available. Fast (dynamical-systems) online processing will need to be integrated to account for the adult data.
AEM Paradigm: Examining the sparseness or completeness of categories needs a two-alternative task: Anticipatory Eye Movements (McMurray & Aslin, 2005). Infants are trained to make anticipatory eye movements in response to an auditory or visual stimulus (e.g., bear vs. pail); post-training, generalization can be assessed with respect to both targets. The paradigm is also useful with color, shape, spatial frequency, orientation, and faces.
Experiment 6: Anticipatory eye movements. Train: Bear (0 ms VOT) on the left; Pail (35 ms) on the right. Test: Bear0/Pear40, Bear5/Pear35, Bear10/Pear30, Bear15/Pear25, using the same naturally produced tokens as in Experiments 4 & 5.
Expected results: [Figure: predicted performance over VOT for bear vs. pail responses, under complete categories meeting at the adult boundary vs. sparse categories with unmapped space between them.]
Results: 67% correct on training tokens; 9/16 better than chance. [Figure: % correct as a function of VOT for beach and palm.]
Infant Summary: Infants show graded sensitivity to subphonemic detail. The /b/ results suggest regions of unmapped phonetic space, and the statistical approach supports sparseness: given current learning theories, sparseness results from optimal starting parameters. An empirical test requires a two-alternative task; AEM trains infants to make eye movements in response to stimulus identity.
Conclusions: Infants and adults are sensitive to subphonemic detail, and this sensitivity is important to adult and developing word recognition systems: 1) short-term cue integration; 2) long-term phonology learning. In both cases, partially ambiguous material is retained until more data arrive, and partially active representations anticipate the likelihood of future material.
Spoken language is defined by change. But the information to cope with change is in the signal, if we look online: within-category acoustic variation is signal, not noise.
Within-Category Variation is Used in Spoken Word Recognition: Temporal Integration at Two Time Scales. Bob McMurray, University of Iowa, Dept. of Psychology.
[Diagram: experimental setup. A head-tracker camera and monitor face the subject; IR head-tracker emitters and two eye cameras feed the eyetracker computer, and the computers are connected via Ethernet.]
Lexical Sensitivity: Word recognition is systematically sensitive to subphonemic acoustic detail: voicing; laterality, manner, and place; natural speech; vowel quality; metalinguistic tasks. [Figures: competitor fixations (looks to B) as a function of VOT, split by response (B vs. P), with the category boundary marked.]
Misperception: Additional Results. Methods: 10 pairs of b/p items; 0–35 ms VOT continua; 20 filler items (lemonade, restaurant, saxophone…); an option to click "X" (mispronounced); 26 subjects; 1240 trials over two days.
Identification Results: Significant target responses even at the extremes, and graded effects of VOT on correct response rate. [Figure: voiced, voiceless, and non-word response rates as a function of VOT (5–35 ms) for the barricade/parricade and barakeet/parakeet continua.]
Phonetic "Garden-Path": the "garden-path" effect is the difference between looks to each target (b vs. p) at the same VOT. [Figure: fixations to target over time for barricade and parakeet at VOT = 0 ms (/b/) and VOT = 35 ms (/p/).]
[Figure: garden-path effect (barricade minus parakeet) as a function of VOT, for target and competitor fixations.] Gradient effect of VOT: target p<.0001; competitor p<.0001.
Assimilation: Additional Results
"runm picks" vs. "runm takes": when /p/ is heard, the bilabial feature can be attributed to assimilation (not to an underlying /m/); when /t/ is heard, the bilabial feature likely comes from an underlying /m/.
Exps 3 & 4 Conclusions: Within-category detail is used in recovering from assimilation, a form of temporal integration: listeners anticipate upcoming material and bias activations based on context. As in Exp 2, within-category detail is retained to resolve ambiguity. Phonological variation is a source of information.
Subject hears "select the mud drinker", "select the mudg gear", or the critical pair, "select the mudg drinker".
[Figure: fixation proportions over time (0–2000 ms) for initial coronal ("mud gear") vs. initial non-coronal ("mug gear"), with the onset and average offset (402 ms) of "gear" marked.] "Mudg gear" is initially ambiguous, with a late bias toward "mud".
[Figure: fixation proportions over time for initial coronal ("mud drinker") vs. initial non-coronal ("mug drinker"), with the onset and average offset (408 ms) of "drinker" marked.] "Mudg drinker" is also ambiguous, with a late bias toward "mug" (the /g/ has to come from somewhere).
[Figure: looks to the non-coronal ("gear") over time following assimilated vs. non-assimilated consonants, with the onset of "gear" marked.] In the same stimuli and experiment there is also a progressive effect!
Non-parametric approach? Competitive Hebbian learning (Rumelhart & Zipser, 1986) over VOT categories: not constrained by a particular equation, so it can fill the space better; similar properties in terms of starting σ and sparseness.