Two /b/ or not “too bee”: Gradient sensitivity to subphonemic variation, categorical perception and the effect of task. Bob McMurray and Michael K. Tanenhaus.


1 Two /b/ or not “too bee”: Gradient sensitivity to subphonemic variation, categorical perception and the effect of task. Bob McMurray and Michael K. Tanenhaus. With thanks to Michael J. Spivey, Richard N. Aslin, and Dana Subik.

2 Outline
- Invariance, covariance, and gradient sensitivity in speech perception
- Categorical perception and other previous research
- Experiment 1: Gradient sensitivity in word recognition
- Experiments 2-5: The effect of experimental task
- Targets & competitors, gradient sensitivity, and temporal dynamics
- Conclusions

3 Problem of Invariance Phonetic features are correlated with many acoustic realizations. Acoustic realization of a phonetic feature depends on context. How do we extract invariant linguistic representations from a variable acoustic signal? What properties of the signal provide an invariant mapping to linguistic representations? How do we extract discrete units from a graded signal?

4 Problem of Invariance: Two Solutions. Motor Theory: acoustic invariance does not exist, but specialized mechanisms allow us to unpack speech into invariant motor representations (Liberman & Mattingly, 1985; Fowler, 1986). Acoustic Invariance: better computational methods and neurologically inspired models may find invariant acoustic properties of the signal (Blumstein, 1998; Sussman et al., 1998).

5 Rethinking Invariance. The fundamental approach: how do we pay attention to the right parts of the signal and ignore the variation? However, recent work suggests that this variation is actually highly informative covariation.

6 Rethinking Invariance. Measurements of productions show effects of:
- speaking rate on VOT (e.g. Kessinger & Blumstein)
- prosodic domain on VOT and articulatory strength (Fougeron & Keating)
- place of articulation on vowel quality 5 syllables away (Local)
- between-consonant coarticulation (Mann & Repp)
These suggest that a system sensitive to fine-grained detail could take advantage of all of this information.

7 Rethinking Invariance. Speech perception shows probabilistic effects of many information sources: lexical context, spectral vs. temporal cues, visual information, transition statistics, speech rate, stimulus naturalness, sentential context, compensatory coarticulation, embeddings, syllabic stress, lexical stress, phrasal stress. A system that was sensitive to fine-grained acoustic detail might be much more efficient than one that was not. Tracking covariance may help solve the problem of invariance.

8 What sort of sensitivity is needed? Gradient Sensitivity: As fundamentally graded acoustic information changes (even changes that still result in the same “category”), activation of the lexical or sublexical representation changes monotonically. Activation of linguistic units reflects the probability that that unit is instantiated by the acoustic signal.
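To make the claim concrete (a sketch, not the authors' model): treat activation as a logistic function of VOT, so every step along the continuum moves activation monotonically. The boundary and slope below are illustrative values only.

```python
import numpy as np

def p_voiceless(vot_ms, boundary=17.0, slope=0.4):
    """Graded 'activation': the probability that the signal instantiates /p/.
    boundary and slope are hypothetical, chosen for illustration only."""
    return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary)))

vots = np.arange(0, 45, 5)  # a 9-step, 0-40 ms VOT continuum, as used below
for v, p in zip(vots, p_voiceless(vots)):
    print(f"VOT {v:2d} ms -> activation of /p/ = {p:.3f}")
```

Under this view, even two stimuli that both get labeled /b/ (say, 0 ms and 10 ms) yield different activations.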

9 Categorical Perception. CP suggests listeners do not show gradient sensitivity to subphonemic information: sharp identification of speech sounds along a continuum, and poor discrimination within a phonetic category. [Figure: % /p/ identification and discrimination functions over VOT, with a steep B/P boundary.]

10 Evidence for Categorical Perception. Supported by: work on VOT and place of articulation; the ubiquity of steep identification functions; recent electrophysiological data (e.g. Phillips, Pellathy, Marantz, Yellin, Wexler, Poeppel, McGinnis & Roberts, 2000; Sharma & Dorman, 1999).

11 Revisiting Categorical Perception? Evidence against CP comes from:
- Discrimination tasks: Pisoni & Tash (1974); Pisoni & Lazarus (1974); Carney, Widin & Viemeister (1977)
- Training: Samuel (1977); Pisoni, Aslin, Perey & Hennessy (1982)
- Goodness ratings: Miller (1997); Massaro & Cohen (1983)
Only goodness ratings show any hint of gradiency; no gradient effects from identification tasks. But 2AFC metalinguistic tasks may underestimate sensitivity to subphonemic acoustic information.

12 Lexical Sensitivity. Andruski, Blumstein & Burton (1994) created stimuli that were either voiceless, 1/3, or 2/3 voiced. The 2/3-voiced stimuli primed semantic associates more weakly than fully voiceless or 1/3-voiced tokens: the first demonstration of lexical sensitivity to natural variation in consonants. However:
- 2/3-voiced stimuli were close to the category boundary.
- No evidence for gradiency (a difference between only 2 items).
- Temporal dynamics are hard to interpret in priming tasks.

13 Remaining Questions. Is sensitivity to subphonemic differences gradient? Is it symmetrical (i.e., gradiency on both sides of the category boundary)? Are differences preserved long enough to be usefully combined with subsequent input? Perhaps a more sensitive measure….

14 Eye-Tracking. 250 Hz real-time stream of eye positions, parsed into saccades, fixations, blinks, etc. Head-movement compensation; output in approximate screen coordinates. [Diagram: head-mounted tracker with 2 eye cameras and IR head-tracker emitters; monitor; eyetracker and subject computers connected via Ethernet.]
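The transcript does not show the parsing algorithm itself; a common approach is dispersion-based (I-DT style) fixation detection. A minimal sketch with hypothetical thresholds, not the lab's actual pipeline:

```python
import numpy as np

def detect_fixations(t_ms, x, y, max_dispersion=1.0, min_duration_ms=100):
    """Parse a 250 Hz gaze stream into fixations: grow a window while the
    gaze stays within a dispersion limit; keep it if it lasts long enough.
    Thresholds are illustrative; real parsers also handle blinks, etc."""
    fixations = []
    start, n = 0, len(t_ms)
    while start < n:
        end = start
        # grow the window while gaze stays within the dispersion limit
        while end + 1 < n and (np.ptp(x[start:end + 2]) +
                               np.ptp(y[start:end + 2])) <= max_dispersion:
            end += 1
        if t_ms[end] - t_ms[start] >= min_duration_ms:
            fixations.append((t_ms[start], t_ms[end],
                              x[start:end + 1].mean(), y[start:end + 1].mean()))
            start = end + 1
        else:
            start += 1
    return fixations
```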

15 Eye-Tracking. Fixations to objects in response to spoken instructions:
- are time-locked to incoming information (Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995)
- can be easily mapped onto lexical activation from models like TRACE (Allopenna, Magnuson & Tanenhaus, 1998)
- show effects of non-displayed competitors (Dahan, Magnuson, Tanenhaus & Hogan)
- provide a glimpse at how activation for competitors unfolds in parallel over time.

16 Experiment 1: Lexical Identification (“too bee”). Can we use eye-tracking methodologies to find evidence for graded perception of VOT?

17 Experiment 1: Lexical Identification
Six 9-step /b/-/p/ VOT continua (0-40 ms): bear/pear, beach/peach, butter/putter, bale/pale, bump/pump, bomb/palm.
12 L- and Sh- filler items: leaf, lamp, ladder, lock, lip, leg, shark, ship, shirt, shoe, shell, sheep.
Identification indicated by mouse click on a picture. Eye movements monitored at 250 Hz. 17 subjects.

18 Experiment 1: Lexical Identification. A moment to view the items.

19 Experiment 1: Lexical Identification. 500 ms later.

20 Bear

21 Experiment 1: Identification Results. High agreement across subjects and items for the category boundary. By subject: 17.25 +/- 1.33 ms; by item: 17.24 +/- 1.24 ms. [Figure: proportion /p/ responses as a function of VOT (ms): a sharp B-to-P identification function.]
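The boundary is presumably the 50% crossover of a fitted identification function. A hedged sketch of one way to estimate it (the response proportions here are made up; the real data are in the figure):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    # proportion of /p/ responses as a function of VOT
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vots = np.arange(0, 45, 5)  # 9 steps, 0-40 ms
p_resp = np.array([0.00, 0.01, 0.03, 0.15, 0.60, 0.92, 0.99, 1.00, 1.00])  # illustrative

(boundary, slope), _ = curve_fit(logistic, vots, p_resp, p0=[20.0, 0.5])
print(f"category boundary ~ {boundary:.2f} ms (50% crossover)")
```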

22 Analysis of Fixations. Trials with the low-frequency response were excluded; the ID function after filtering yields a “perfect” categorization function. [Figure: actual identification data vs. the ID function after filtering.]
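A minimal sketch of this exclusion step, assuming a hypothetical trial table with vot and response columns (not the authors' code):

```python
import pandas as pd

def drop_low_frequency_responses(trials: pd.DataFrame) -> pd.DataFrame:
    """At each VOT step, keep only trials with the modal response, so the
    effective categorization function becomes 'perfect'."""
    modal = trials.groupby("vot")["response"].agg(lambda r: r.mode().iloc[0])
    return trials[trials["response"] == trials["vot"].map(modal)]
```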

23 Analysis of Fixations. Example display: target = bug, competitor = bus, unrelated = cat, fish. [Schematic: fixations on five trials aligned to word onset (200 ms scale), pooled into fixation proportions over time.]
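The standard analysis turns those fixation records into a proportion-over-time curve: at each sample, the fraction of trials fixating each object type. A sketch under a hypothetical data layout:

```python
import numpy as np

def fixation_proportions(fix_codes):
    """fix_codes: (n_trials, n_samples) array giving the fixated object per
    4 ms sample (250 Hz): 0=target, 1=competitor, 2=unrelated, -1=elsewhere.
    Returns the proportion of trials fixating each object type over time."""
    labels = ["target", "competitor", "unrelated"]
    return {lab: (fix_codes == code).mean(axis=0)
            for code, lab in enumerate(labels)}
```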

24 Experiment 1: Eye Movement Results. More looks to the competitor than to unrelated items. [Figure: fixation proportion over time (0-2000 ms) for VOT=0 and VOT=40 trials, by response.]

25 Analysis of Fixations: Gradient “Competitor” Effects. E.g., given that the subject heard “bomb” and clicked on “bomb”, how often was the subject looking at the “palm”? [Schematic: target and competitor fixation proportions over time under categorical results vs. a gradient effect.]
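In code form, under the same hypothetical layout: restrict to trials with the modal response, then average competitor fixations within an analysis window, per VOT step. A gradient effect shows up as values rising monotonically toward the category boundary.

```python
def competitor_effect_by_vot(trials_by_vot, window=slice(75, 400)):
    """trials_by_vot: hypothetical dict mapping VOT (ms) to an
    (n_trials, n_samples) 0/1 array of competitor fixations.
    The window (samples at 250 Hz, roughly 300-1600 ms) is illustrative."""
    return {vot: float(arr[:, window].mean())
            for vot, arr in sorted(trials_by_vot.items())}
```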

26 Experiment 1: Eye Movement Results. Gradient competitor effects of VOT? A smaller effect on the amplitude of activation, more effect on the duration: competitors stay active longer as VOT approaches the category boundary. [Figure: competitor fixation proportion over time since word onset, one curve per VOT step (0-40 ms), split by response.]

27 Experiment 1: Gradiency? [Schematic: predicted competitor fixation proportion as a function of VOT under gradient sensitivity, “categorical” perception, and Andruski et al.]

28 Experiment 1: Eye Movement Results. Clear effects of VOT on competitor looks. Effect of VOT: B p=.017*, P p<.0001***. Linear trend: B p=.023*, P p=.002**. [Figure: competitor fixation proportion by VOT (ms), category boundary marked.]

29 Experiment 1: Eye Movement Results (unambiguous stimuli only). Clear effects of VOT. Effect of VOT: B p=.017*, P p<.0001***. Linear trend: B p=.023*, P p=.002**. [Figure: as above, restricted to unambiguous stimuli.]

30 Experiment 1: Results and Conclusions.
- Subphonemic acoustic differences in VOT affect lexical activation.
- Gradient effect of VOT on looks to the competitor.
- Effect seems to be long-lasting (we’ll get back to that).
- Effect holds even for unambiguous stimuli.
Conservative test: filter out “incorrect” responses; use unambiguous stimuli only.

31 However… the steep identification function is consistently replicated. Why was it so hard to find evidence for gradiency in CP tasks? What aspects of the task affect our ability to see gradient sensitivity?
- Phoneme ID vs. lexical ID?
- Number of alternatives?
- Type of stimuli?
- Sensitivity of the response measure?

32 Experiment 2: Categorical Perception (“2 /b/, not ‘too bee’”). What can the eye-tracking paradigm reveal about ordinary phoneme identification experiments?

33 Experiment 2: Categorical Perception. Replicates the “classic” task: 9-step /ba/-/pa/ VOT continuum (0-40 ms). 2AFC identification indicated by mouse click. Eye movements monitored at 250 Hz. 17 subjects.

34 Experiment 2: Categorical Perception. [Trial schematic (panels 1-3): “B” and “P” response boxes; the syllable “ba” is played; the subject clicks a box.]

35 Experiment 2: Identification Results. The phoneme ID function is steeper, but the category boundaries are the same. Boundaries: BP 17.5 +/- .83 ms; words by subject 17.25 +/- 1.33 ms; words by item 17.24 +/- 1.24 ms. [Figure: proportion /p/ by VOT for Exp 2 (BaPa) vs. Exp 1 (words).]

36 Experiment 2: Data Analysis. Trials with the low-frequency response were excluded, effectively yielding a “perfect” categorization function. [Figure: actual vs. effective ID function.]

37 Experiment 2: Eye Movement Data. Some hints of gradiency for /p/; even less for /b/. A difference between stimuli near the boundary and the endpoints, perhaps more for /p/. [Figure: fixation proportion over time by VOT step; B-response trials (looks to P) and P-response trials (looks to B).]

38 Experiment 2: Eye Movement Data. /b/: p=.044*, p_trend=.055; /p/: p<.001***, p_trend=.005***. Could be driven by differences near the category boundary. [Figure: competitor fixation proportion by VOT, category boundary marked.]

39 Experiment 2: Eye Movement Data (unambiguous stimuli only). /b/: p=.884, p_trend=.678; /p/: p=.013*, p_trend=.003***. [Figure: as above, restricted to unambiguous stimuli.]

40 Experiment 2: Results and Conclusions.
- Very steep slope for mouse-response curves, consistent with traditional results.
- Identical category boundary to Experiment 1 validates the stimuli.
- Small difference between stimuli near the category boundary and others, similar to Pisoni & Tash and Andruski et al.
- Gradient effect weak for /ba/, moderate for /pa/.

41 Experiment 3: Number of Response Alternatives (“Not 2 but /b/?”). Compare to Experiment 2 (BaPa).

42 Experiment 3: BaPaLaSha. Given the strong evidence for gradiency in Experiment 1 and the weaker evidence in Experiment 2, what is the effect of the number of response alternatives? Same 9-step /ba/-/pa/ VOT continuum (0-40 ms) as Experiment 2; La and Sha filler items added. 4AFC identification indicated by mouse click; button locations randomized between subjects. Eye movements monitored at 250 Hz. 17 subjects.

43 Experiment 3: BaPaLaSha. [Display: “B”, “P”, “L”, and “Sh” response boxes; the syllable “la” is played.]

44 Experiment 3: Identification Results. The number of response alternatives accounts for some of the difference in slope. [Figure: proportion /p/ by VOT for Exp 2 (BaPa), Exp 1 (words), and Exp 3 (BaPaLaSha).]

45 Experiment 3: Data Analysis. Trials with the low-frequency response were excluded, effectively yielding a “perfect” categorization function. [Figure: actual vs. effective ID function.]

46 Experiment 3: Eye Movement Data. More looks to the competitor than to unrelated stimuli (p<.001): eye movements in “phoneme ID” tasks are sensitive to acoustic similarity. [Figure: fixation proportion over time for B, P, and unrelated (UR) items; VOT=0/response=B and VOT=40/response=P panels.]

47 Experiment 3: Eye Movement Data. A difference between stimuli near the boundary and the endpoints. [Figure: fixation proportion over time by VOT step; B-response trials (looks to P) and P-response trials (looks to B).]

48 Experiment 3: Eye Movement Data. Close but no star: nothing reaches significance. /b/: p=.055, p_trend=.068; /p/: p=.510, p_trend=.199. [Figure: competitor fixation proportion by VOT, category boundary marked.]

49 Experiment 3: Eye Movement Data (unambiguous stimuli only): even worse. /b/: p=.374, p_trend=.419; /p/: p=.356, p_trend=.151. [Figure: as above, restricted to unambiguous stimuli.]

50 Experiment 3: Results.
- Eye movements in phoneme ID tasks are sensitive to acoustic similarity between target and competitor.
- The number of alternatives explains some of the differences in the ID function.
- VERY weak subphonemic effects on lexical activation.

51 Experiment 4: Response Type (“‘too’ /b/”). Is there a difference between phoneme and lexical identification tasks? Compare to Experiment 1 (words).

52 Experiment 4: Response Type. Same 6 VOT continua (0-40 ms) as Experiment 1: beach/peach, bear/pear, bomb/palm, bale/pail, bump/pump, butter/putter. Same 12 L- and Sh- filler items. 4AFC phoneme identification indicated by mouse click; button locations randomized between subjects. Eye movements monitored at 250 Hz. 17 subjects.

53 Experiment 4: Response Type. [Display: “B”, “P”, “L”, and “Sh” response boxes; the word “ship” is played.]

54 Experiment 4: Identification Results. Similar category boundary and slope to Exp 1. Exp 1: 17.25 +/- 1.33 ms; Exp 4: 16.34 +/- 1.52 ms. [Figure: proportion /p/ by VOT for Exp 2 (BaPa), Exp 1 (words), and Exp 4 (response type).]

55 Experiment 4: Eye Movement Data. Small differences in the right direction. [Figure: fixation proportion over time by VOT step; B-response trials (looks to P) and P-response trials (looks to B).]

56 Experiment 4: Eye Movement Data. Gradient effects using the whole range of stimuli. /b/: p<.001, p_trend=.002; /p/: p=.001, p_trend=.031. [Figure: competitor fixation proportion by VOT, category boundary marked.]

57 Experiment 4: Eye Movement Data. Marginal effects using “unambiguous” stimuli only. /b/: p=.074, p_trend=.074; /p/: p=.137, p_trend=.108. [Figure: as above, restricted to unambiguous stimuli.]

58 Experiment 4: Results. The weaker subphonemic effect suggests that offline “metalinguistic” tasks are less sensitive to fine-grained phonetic detail than online tasks. Some detail is preserved in these tasks (at least with word stimuli)…

59 Experiment 5: 2AFC Words (“2 ‘bee’”). Bringing it all together.

60 Experiment 5: 2-Words. Is the difference in ID curve slopes purely the result of the number of response alternatives, or does the task play a role? Same 6 VOT continua (0-40 ms) as Experiment 1: beach/peach, bear/pear, bomb/palm, bale/pail, bump/pump, butter/putter. No filler items. 2AFC identification indicated by mouse click. Eye movements monitored at 250 Hz.

61 Experiment 5: Task. [Display; the spoken target is “pear”.]

62 Experiment 5: Identification Results. Similar category boundary and slope to Exp 1. Exp 1: 17.25 +/- 1.33 ms; Exp 5: 16.18 +/- 1.74 ms. [Figure: proportion /p/ by VOT for Exp 2 (BaPa), Exp 1 (words), and Exp 5 (2-words).]

63 Experiment 5: Eye Movement Data. Clean, but small, gradient effects for /p/; effects for /b/ near the category boundary. [Figure: fixation proportion over time by VOT step, split by response.]

64 Experiment 5: Eye Movement Data. Gradient effects using the whole range of stimuli. /b/: p<.001, p_trend=.005; /p/: p=.017, p_trend=.026. [Figure: competitor fixation proportion by VOT, category boundary marked.]

65 Experiment 5: Eye Movement Data. Weaker effects using the “prototypical” range. /b/: p=.443, p_trend=.802; /p/: p=.044*, p_trend=.052. [Figure: as above, restricted to the prototypical range.]

66 Experiment 5: Results. The shallow ID curve slope suggests 2AFC alone is not enough to create a steep slope: 2AFC phoneme ID is needed. Weaker gradient effects: do fixed response locations and no filler items make this task more “explicit”?

67 Trying to make sense out of it all… Being and Nothingness?

68 Slope of ID Function. BP > BaPaLaSha > all others (p<.05); Words ~= Exp 4 ~= 2 Words (p>.1). 2AFC results in less sensitivity (in the ID function) than 4AFC for non-word stimuli. [Bar chart: ID-function slope for Exp 1 (words), Exp 2 (BaPa), Exp 3 (BaPaLaSha), Exp 4, and Exp 5 (2 words).]

69 Gradient Effect Across Experiments. [Table: presence of a gradient effect for /b/ and /p/ in each experiment (1 Words, 2 BP, 3 BPLS, 4 Phoneme ID, 5 2-words), computed over all stimuli and without stimuli near the category boundary; some cells show no effect (X) or a marginal one (?).]

70 Pooled Eye Movement Data (overall). B: p_vot .15; P: p_vot .2. [Figure: pooled competitor fixation proportion by VOT (response=B, looks to P; response=P, looks to B), category boundary marked.]

71 Pooled Eye Movement Data (without stimuli near the category boundary). B: p_vot=.005, p_trend .1; P: p_vot .2. [Figure: as above, excluding stimuli near the category boundary.]

72 Conclusions on Task Manipulations: Identification Functions. Phoneme ID tasks with non-words yield the sharpest categorization functions, which may mask subphonemic sensitivity. Even within these tasks, the number of response alternatives makes a big difference.

73 Conclusions on Task Manipulations: Competitor Effects (eye movements). “Natural” 4AFC lexical identification provides the cleanest evidence for gradiency (measured by fixations to the competitor) for both the /p/ and /b/ halves of the continuum. All experiments offer evidence of subphonemic sensitivity when we include stimuli near the category boundary. Eye movements provide a much more sensitive measure for assessing the role of fine-grained phonetic detail.

74 Conclusions on Task Manipulations: Competitor Effects (eye movements). Most experiments showed weak evidence for a gradient effect, but larger effects for /p/ than /b/. Possible explanations:
- Differences in the variance of the distributions of /b/ and /p/ in the learning environment? (Lisker & Abramson; Gerken & Maye)
- An auditory locus? Double-peaked firing in auditory cortex shows more VOT sensitivity for voiceless than voiced stops. (Steinschneider et al.; Sharma et al.)
No one factor seems to account for the presence or absence of the gradient effect.

75 Targets and Competitors, Gradient Effects and Temporal Dynamics (and a return to Experiment 1).

76 Targets and Competitors. Why look exclusively at the competitor? Do subphonemic differences affect activation of the target? Andruski et al. suggest they do.

77 Experiment 1: Target Activation. Target effects are much weaker, even in Experiment 1, and may be limited to the range near the category boundary. [Figure: target fixation proportion over time by VOT step, split by response.]

78 Experiment 1: Target Activation (overall). B: p_vot=.035, p_trend=.103; P: p_vot<.001, p_trend<.010. [Figure: target fixation proportion by VOT, category boundary marked.]

79 Experiment 1: Target Activation (unambiguous stimuli only). B: p_vot=.44, p_trend=.98; P: p_vot=.33, p_trend=.22. [Figure: as above, restricted to unambiguous stimuli.]

80 Target Activation: Conclusions. Target sensitivity to subphonemic differences is carried by differences between ambiguous and prototypical stimuli. Consistent with previous research: Andruski et al.'s 2/3 voicing is close to the ambiguous region (~27 ms); Pisoni & Tash found increased RT near the boundary.

81 Target Activation: Conclusions. Gradient sensitivity to subphonemic differences is stronger in competitor activation than target activation, consistent with Misiurski, Blumstein, Rissman and Berman (in press). This makes sense: degrading target activation isn't likely to be helpful in word recognition, while augmenting competitor activation could be very helpful.

82 Gradiency and Time. Phonetic context in speech perception isn't simultaneous: rate information (vowel length) arrives after the consonant; coarticulation occurs across multiple segments; lexical information has a larger scope than phonetic information. Simply tracking graded acoustic features is not enough: graded activation of lexical or sublexical units must persist over time to be integrated.

83 Temporal Ambiguity Resolution. The lexical/phonetic identity of a segment can be determined by acoustic features that arrive after the segment in question: the ambiguous first consonant of [b/p]rown is clearly a /b/ after hearing “rown”. Thus, as in higher-level language comprehension, temporal ambiguity resolution is an important issue.

84 Temporal Ambiguity Resolution. Lexical/phonetic temporal ambiguity can be caused by:
- vowel length (a cue to speaking rate and stress)
- lexical/statistical effects
- embedded words
Subphonemic sensitivity can minimize or eliminate the effects of temporary phonetic ambiguity by storing how ambiguous a segment is and keeping competitors active until resolution occurs.

85 Experiment 1: Effect of Time? How long does the gradient sensitivity to VOT remain? Need to examine the effect of time on competitor fixations and its interaction with VOT.

86 Experiment 1: Effect of Time? The time-course data suggest that gradiency sticks around at least 1600 ms after syllable onset. [Figure: competitor fixation proportion over time since word onset, one curve per VOT step (0-40 ms), split by response.]

87 Experiment 1: Effect of Time? Analysis: trials were randomly sorted into two groups (early and late). For each group, fixations from only one time bin were used: early = 300-1100 ms, late = 1100-1900 ms. This ensures independence of the data in each time bin (since each trial contributes to only one). [Schematic: eight trials split between the early and late bins.]
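A sketch of that trial split (a hypothetical helper; the slide specifies only a random half/half assignment and the two windows):

```python
import numpy as np

def split_early_late(n_trials, seed=0):
    """Randomly assign each trial to the early (300-1100 ms) or the late
    (1100-1900 ms) analysis bin, so every trial contributes to exactly one."""
    order = np.random.default_rng(seed).permutation(n_trials)
    return {"early": np.sort(order[: n_trials // 2]),
            "late": np.sort(order[n_trials // 2:])}
```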

88 Experiment 1: VOT x Time. Main effect of time: /b/ p=.001***, /p/ p=.0001****. Main effect of VOT: /b/ p=.015*, /p/ p=.001***. Linear trend for VOT: /b/ p=.022*, /p/ p=.009**. No interaction: p>.1. [Figure: competitor fixation proportion by VOT for the early (300-1100 ms) and late (1100-1900 ms) bins, category boundary marked.]

89 Experiment 1: VOT x Time. Main effect of time: /b/ p=.001***, /p/ p=.0001****. Main effect of VOT: /b/ p=.006**, /p/ p=.013*. Linear trend for VOT: /b/ p=.0012**, /p/ p=.02**. No interaction: p>.1. [Figure: competitor fixation proportion by VOT for the early and late bins.]

90 Finally some conclusions Lexical activation exhibits gradient effects of subphonemic (VOT) variation. Effect is robust and long-lasting—could potentially be very helpful for resolving temporal ambiguity and integrating information over time. Effect of subphonemic variation is stronger for competitors than targets.

91 Finally some conclusions Experimental task is crucial to see sensitivity: more responses + less metalinguistic = more gradiency. ID Functions influenced by type of stimuli (e.g. words/nonwords) as well as number of response alternatives. Realistic tasks = more gradient ID functions.

92 Finally some conclusions. Subphonemic variation in VOT is not discarded: it is not noise, but signal.

