The temporal resolution of binding brightness and loudness in dynamic random sequences
Daniel Mann, Charles Chubb
Department of Cognitive Sciences, University of California, Irvine
Vision Sciences Society Annual Meeting 2012
Introduction
In what ways can observers combine dynamic visual and auditory information (brightness and loudness)? We use a selective attention task to extract achievable attention filters for various brightness/loudness judgments and to test the temporal limit of multimodal binding derived by Fujisaki & Nishida (2010).
Goal 1: Test multiple attentional filters on brightness/loudness combinations and extract rules for the integration of these cues.
Goal 2: Test the temporal limit of binding using dynamic random stimuli instead of the phase-judgment stimuli that Fujisaki & Nishida (2010) used to define the limit (2-3 Hz).
Background: Fujisaki & Nishida (2010)
Phase-judgment stimuli were used to test the temporal limit of cross-modal binding. Result: a universal binding limit of 2-3 Hz at 75% performance for all cross-modal stimuli tested.
Methods: Stimuli
Each stimulus was a quick, randomly ordered stream of 18 gray disks (83 ms per disk), each accompanied by a simultaneous burst of auditory white noise. Three levels of disk brightness and three levels of noise loudness were crossed to produce 9 types of audiovisual pairings (tokens). Amplitude levels were labeled 0 to 2 because the lowest level was blank/silent. Sounds were presented over headphones. Sound level measurements were made with a Bruel & Kjaer 2260 Investigator (A-weighted broadband detector, Fast time weighting) at 5 bars of Mac volume: amplitude 0 ≈ 33 dB SPL, medium (.25) = 69 dB SPL, loud (.999) = 82 dB SPL.
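A minimal sketch of how such a token stream could be generated (illustrative only, not the authors' code; the independent, uniform sampling of levels is an assumption, since the actual streams had to be biased toward the task-relevant statistic on each trial):

```python
# Illustrative sketch of stimulus-stream generation.
# Assumes each token's auditory and visual amplitude levels are drawn
# independently and uniformly from the three levels (an assumption).
import numpy as np

N_TOKENS = 18        # tokens per stream
TOKEN_MS = 83        # duration of each disk / noise burst, in milliseconds
LEVELS = (0, 1, 2)   # amplitude levels; level 0 was blank/silent

def make_stream(rng=None):
    """Return an (N_TOKENS, 2) array of (auditory, visual) amplitude levels."""
    rng = np.random.default_rng() if rng is None else rng
    auditory = rng.choice(LEVELS, size=N_TOKENS)
    visual = rng.choice(LEVELS, size=N_TOKENS)
    return np.column_stack([auditory, visual])
```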
Conditions (Target Attention Filters)
Conditions 1 and 2 were brightness-only and loudness-only attention conditions: observers judged, with feedback, whether the mean brightness (or loudness) of the disks in the stimulus stream was higher or lower than usual. Condition 3: judge whether the stimulus stream contained a greater number of correlated (brightest/loudest plus dimmest/quietest) or anticorrelated (brightest/quietest plus dimmest/loudest) pairings. The 9 token types combine the three amplitude levels (0, 1, 2) of the auditory and visual components of each token. The five target filter matrices show the positive and negative weighting of each token type in each experimental condition; illustrative sign patterns for three of these filters are sketched below.
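The sketch below writes out plausible 3 × 3 sign patterns for three of the target filters (rows index auditory level, columns index visual level). The specific weights are illustrative assumptions chosen only to convey the pattern described above, not the actual filter matrices used in the experiment.

```python
# Hypothetical target filter matrices; rows index auditory level (0-2),
# columns index visual level (0-2). Weights are illustrative assumptions.
import numpy as np

# Brightness-only: weight depends only on the visual (column) level.
brightness_filter = np.array([[-1, 0, 1],
                              [-1, 0, 1],
                              [-1, 0, 1]])

# Loudness-only: weight depends only on the auditory (row) level.
loudness_filter = brightness_filter.T

# Correlatedness: correlated extremes (loud+bright, quiet+dim) weighted
# positively, anticorrelated extremes (loud+dim, quiet+bright) negatively.
correlatedness_filter = np.array([[ 1, 0, -1],
                                  [ 0, 0,  0],
                                  [-1, 0,  1]])
```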
Sample Trials
(Video demonstration of sample trials.) Feedback was given after each trial.
Basic Model
In any given trial of a particular condition, we assume that the subject responds "1" (vs. "-1" otherwise) if
$$\sum_{k} f\bigl(A(k), V(k)\bigr) + \mathit{Noise} > \mathit{Crit},$$
where k is the position of each token in the stimulus, A(k) and V(k) are its auditory and visual amplitude levels, and f gives the weight of each of the 9 token types. This simple model was used for the first two conditions; the model with binding parameters (later slide) was used for the last three. In each condition, a probit model was used to measure the impact exerted on the observer's judgments by each of the 9 types of pairings. The criterion $\mathit{Crit}$ is selected by the participant to optimize performance, and we assume that a standard normal random variable $\mathit{Noise}$ is added to the decision statistic.
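As a concrete sketch of this decision rule (a simplified illustration, not the authors' analysis code), adding standard normal noise to the summed token weights and comparing against the criterion yields a probit response probability:

```python
# Sketch of the basic decision model: sum the per-token weights, add
# standard normal noise, and compare to the observer's criterion.
import numpy as np
from scipy.stats import norm

def p_respond_1(stream, f, crit):
    """stream: (18, 2) array of (auditory, visual) levels in {0, 1, 2};
    f: 3x3 array of token-type weights; crit: the observer's criterion."""
    a, v = stream[:, 0], stream[:, 1]
    decision_stat = f[a, v].sum()       # summed impact of the 18 tokens
    # With standard normal noise added to the statistic, the probability of
    # responding "1" is a probit (normal CDF) function of stat - crit.
    return norm.cdf(decision_stat - crit)
```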
Results: Attend to One Modality
[Figure: results for the attend-to-one-modality conditions; values shown: .55, .89, .52, .77, 1.30, .65]
Results: Attend to Both Modalities
[Figure: results for the attend-to-both-modalities conditions; values shown: .39, .46, .38, .39, .65, .46, .44, .43]
Principal Components Analysis
Token weights were scaled by sensitivity for each condition. Given the conditions we tested, there were three dimensions of sensitivity: loudness (blue), brightness (green), and correlatedness (red). The group PCA was very similar to the individual PCAs. Cyan shows the fourth PCA dimension, included to illustrate that the remaining dimensions hovered around zero and contributed nothing.
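A minimal sketch of the PCA step, assuming the fitted, sensitivity-scaled token weights are stacked into a (conditions × 9 token types) matrix; that data layout is an assumption made for illustration, not a description of the authors' pipeline:

```python
# Illustrative PCA on a matrix of sensitivity-scaled token weights.
import numpy as np

def pca_token_weights(W):
    """W: (n_conditions, 9) matrix of sensitivity-scaled token weights."""
    Wc = W - W.mean(axis=0)                  # center each token-type column
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)      # proportion of variance per PC
    return Vt, var_explained                 # rows of Vt are the components
```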
Elaborating the model: Binding Simultaneity Parameters
The binding simultaneity parameters w(-2), w(-1), w(0), w(1), w(2) weight the pairing of each token's auditory component with the visual component of the token offset by -2 to +2 positions in the stimulus stream; the argument of w indexes that temporal offset.
Model with Binding Simultaneity Parameters
In any given trial of a particular condition, we assume that the subject responds "1" (vs. "-1" otherwise) if
$$\sum_{k} \sum_{\delta=-2}^{2} w(\delta)\, f\bigl(A(k), V(k+\delta)\bigr) + \mathit{Noise} > \mathit{Crit},$$
where A(k) is the auditory component and V(k) is the visual component of token k, and f(A(k), V(k+δ)) is the token-type sensitivity for that pairing in that condition. The binding simultaneity parameters w(δ) sum to 1. A Markov chain Monte Carlo simulation was used to estimate the joint posterior density of the vector comprising the parameters used to fit the data. A sketch of this decision statistic appears below.
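The sketch below spells out that decision statistic (illustrative only, not the authors' MCMC fitting code); offsets that fall outside the stream are simply skipped, which is an assumed edge-handling rule.

```python
# Sketch of the elaborated decision statistic with binding simultaneity
# parameters w(delta), delta in -2..2, which sum to 1.
import numpy as np
from scipy.stats import norm

def p_respond_1_binding(stream, f, w, crit):
    """stream: (18, 2) array of (auditory, visual) levels; f: 3x3 weights;
    w: dict mapping offsets -2..2 to binding weights (summing to 1)."""
    n = len(stream)
    stat = 0.0
    for k in range(n):
        for delta, w_delta in w.items():
            j = k + delta
            if 0 <= j < n:                   # skip offsets outside the stream
                stat += w_delta * f[stream[k, 0], stream[j, 1]]
    return norm.cdf(stat - crit)
```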
Results: Temporal Resolution
Temporal Resolution of Model Simulation
Panels: Participant 1 and Participant 2. An analogous Fujisaki-style phase-judgment experiment run on our display reproduced their 2-3 Hz limit. The model shown here was fit to data from stimuli whose lowest amplitude level was a low value rather than blank/silent. Our model simulation on phase-judgment stimuli reveals higher binding temporal resolution than the Fujisaki & Nishida (2010) result (orange box).
Conclusions
Observers can achieve a range of different audiovisual attention filters. There were three dimensions of attention sensitivity given our tasks; brightness and loudness were the strongest principal components. Dynamic random stimuli allowed multimodal binding at higher temporal acuity ( Hz) than phase-judgment stimuli (2-3 Hz), suggesting that auditory and visual information from randomly varying stimuli can be bound more precisely in time than information from strictly oscillating stimuli.
Acknowledgments
The Chubb-Wright Lab at UC Irvine. This work was funded by NSF Award BCS. Thanks for listening!
Supplemental slides