Vision Sciences Society Annual Meeting 2012 Daniel Mann, Charles Chubb


The temporal resolution of binding brightness and loudness in dynamic random sequences Vision Sciences Society Annual Meeting 2012 Daniel Mann, Charles Chubb Department of Cognitive Sciences University of California, Irvine

Introduction In what ways can observers combine dynamic visual and auditory information (brightness & loudness)? We use a selective attention task to extract the attention filters observers can achieve for various brightness/loudness judgments, and we test the temporal limit of multimodal binding derived by Fujisaki & Nishida (2010). Briefly explain the Nishida experiment. Goal 1: Test multiple attentional filters on brightness/loudness combinations and extract rules for the integration of these cues. Goal 2: Test the temporal limit of binding using dynamic random stimuli instead of the phase-judgment stimuli used to define the limit (2-3 Hz) by Fujisaki & Nishida (2010).

Background: Fujisaki & Nishida (2010) Phase-judgment stimuli were used to test the temporal limit of cross-modal binding. Result: a universal binding limit of 2-3 Hz at 75% performance for all cross-modal stimuli tested.

Methods: Stimuli Each stimulus was a quick randomly-ordered stream of 18 gray disks (83 ms per disk), each accompanied by a simultaneous burst of auditory white noise. Three levels of disk brightness and of noise loudness were used to produce 9 different types of audiovisual pairings (tokens). Sound was presented over headphones. Levels run from 0 to 2 because the lowest amplitude was actually blank/silent. Sound level measurements were made with a Bruel and Kjaer 2260 Investigator (A-weighted broadband detector, Fast time weighting). At 5 bars of Mac volume: amplitude 0 ≈ 33 dB SPL; medium sound (.25) = 69 dB SPL; loud sound (.999) = 82 dB SPL.
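The token-stream construction above can be sketched in a few lines. This is a minimal illustration, not the authors' stimulus code; the uniform independent sampling of levels is an assumption, and names like `make_stream` are hypothetical.

```python
import random

LEVELS = [0, 1, 2]   # 3 brightness levels x 3 loudness levels -> 9 token types
N_TOKENS = 18        # tokens per stimulus stream
TOKEN_MS = 83        # duration of each audiovisual token, in milliseconds

def make_stream(rng=random):
    """Return one trial's stream: a list of (brightness, loudness) tokens."""
    return [(rng.choice(LEVELS), rng.choice(LEVELS)) for _ in range(N_TOKENS)]

stream = make_stream()
```

Each `(brightness, loudness)` pair is one of the 9 token types; level 0 corresponds to the blank/silent extreme described above.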

Conditions (Target Attention Filters) Conditions 1 and 2 were brightness-only and loudness-only attention conditions: observers judged, with feedback, whether the mean brightness (loudness) of the disks in the stimulus stream was higher or lower than usual. Condition 3: judge whether the stimulus stream contained a greater number of correlated (brightest/loudest plus dimmest/quietest) or anticorrelated (brightest/quietest plus dimmest/loudest) pairings. The 9 token types use the three amplitude levels (0, 1, 2) for the auditory and visual components of each token. The five target filter matrices show the positive and negative weighting of each token depending on the experimental condition.
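The Condition-3 target filter described above can be written out explicitly. This is a hypothetical rendering for illustration (the ±1 weights and zero entries are inferred from the correlated/anticorrelated description, not copied from the actual filter matrices on the slide):

```python
# filt[brightness][loudness]; levels 0 (lowest/blank) .. 2 (highest).
# Correlated extremes (brightest/loudest, dimmest/quietest) -> +1,
# anticorrelated extremes (brightest/quietest, dimmest/loudest) -> -1.
correlatedness_filter = [
    [+1, 0, -1],   # dimmest disk: quietest -> +1, loudest -> -1
    [ 0, 0,  0],   # mid-brightness tokens carry no target weight
    [-1, 0, +1],   # brightest disk: quietest -> -1, loudest -> +1
]

def filter_response(stream, filt):
    """Sum of filter weights over the (brightness, loudness) tokens."""
    return sum(filt[b][l] for b, l in stream)
```

A stream whose summed weight is positive contains more correlated than anticorrelated pairings under this filter.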

Sample Trials Embed video here. Mention feedback after each trial.

Basic Model In any given trial of a particular condition, we assume that the subject responds "1" (vs. "-1" otherwise) if the decision statistic exceeds a criterion, where k is the position of each token in the stimulus. The simple model applies to the first 2 conditions; the binding-parameters model applies to the last 3 conditions. In each condition, a probit model was used to measure the impact exerted on the observer's judgments by each of the 9 types of pairings. The criterion Crit is selected by the participant to optimize performance. We assume that a standard normal random variable Noise is added to the decision statistic.
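The decision rule on this slide was shown as an image; a plausible reconstruction, consistent with the notation on the later binding-model slide (f is the token-type sensitivity, A(k) and V(k) the auditory and visual components of token k, Crit the criterion, Noise a standard normal variable), is:

```latex
\text{respond ``1'' if}\quad
\sum_{k=1}^{18} f\bigl(A(k),\,V(k)\bigr) + \mathit{Noise} \;>\; \mathit{Crit}
```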

Results: Attend to One Modality [Figure: estimated attention filters per participant; sensitivity values shown: .55, .89, .52, .77, 1.30, .65]

Results: Attend to Both Modalities [Figure: estimated attention filters per participant; sensitivity values shown: .39, .46, .38, .39, .65, .46, .44, .43]

Principal Components Analysis Token weights were scaled by sensitivity for each condition. There were 3 dimensions of sensitivity given the conditions we tested: loudness (blue), brightness (green), and correlatedness (red). The group PCA was very similar to the individual PCAs. Cyan is the fourth dimension of the PCA, included to show that the remaining dimensions all wobbled around zero, not doing anything.

Elaborating the Model: Binding Simultaneity Parameters Stimulus with binding simultaneity weights w(-2), w(-1), w(0), w(1), w(2). Explain subscripting.

Model with Binding Simultaneity Parameters In any given trial of a particular condition, we assume that the subject responds "1" (vs. "-1" otherwise) if the weighted decision statistic exceeds the criterion, where A(k) is the auditory component and V(k) is the visual component of token k, and f(A(k), V(k+δ)) is the token-type sensitivity for that token in that condition. The binding simultaneity parameters sum to 1. Markov Chain Monte Carlo simulation was used to estimate the joint posterior density of the vector comprising the parameters used to fit the data.
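The extended decision rule on this slide was also an image; a plausible reconstruction from the text, with the offset range δ ∈ {-2, …, 2} inferred from the w(-2)…w(2) labels on the previous slide, is:

```latex
\text{respond ``1'' if}\quad
\sum_{k}\,\sum_{\delta=-2}^{2} w(\delta)\,
f\bigl(A(k),\,V(k+\delta)\bigr) + \mathit{Noise} \;>\; \mathit{Crit},
\qquad \sum_{\delta=-2}^{2} w(\delta) = 1
```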

Results: Temporal Resolution

Temporal Resolution of Model Simulation Participant 1 Participant 2 An analogous Fujisaki experiment on our display reproduced their 2-3 Hz limit. This model was fit to data from stimuli that used low amplitudes instead of blank/silent ones. Our model simulation on phase-judgment stimuli reveals a higher binding temporal resolution than the Fujisaki & Nishida (2010) result (orange box).

Conclusions Observers can achieve a range of different audiovisual attention filters. There were 3 dimensions of attention sensitivity given our tasks; brightness and loudness were the strongest principal components. Dynamic random stimuli allowed multimodal binding at higher temporal acuity (3.8-5.25 Hz) than phase-judgment stimuli (2-3 Hz). This suggests that auditory and visual information from randomly varying stimuli can be bound more precisely in time than from strictly oscillating stimuli.

Acknowledgments The Chubb-Wright Lab at UC Irvine. This work was funded by NSF Award BCS-0843897. Group photo of labmates? Thanks for listening!

Supplemental slides