Bayesian integration of visual and auditory signals for spatial localization
Authors: Peter W. Battaglia, Robert A. Jacobs, and Richard N. Aslin
COGS 272, Spring 2010
Instructor: Prof. Angela Yu
Presenter: Vikram Gupta

Outline
Introduction
Background
Methods
Procedure
Results
Discussion

Introduction: Spatial Localization is Complex
Integration of multiple sensory and motor signals
Sensory: binaural time, phase, and intensity differences
Motor: orientation of the head

Introduction: Inconsistent Spatial Cues
Typically we receive consistent spatial cues. What if this is not true? Ex: movie theater, television.
Visual capture: vision dominates over a conflicting auditory cue. Ex: recalibration in the juvenile owl.
Is this optimal?

Background: Models for Inconsistent Cue Integration
Winner take all (e.g., visual capture): the dominant signal decides exclusively.
Blending: information from the sensory sources is combined. Is blending statistically optimal?
Example: maximum likelihood estimation (MLE), assuming independent, normally distributed sensory signals.

Background: MLE Example
Impact of cue reliability on the MLE estimate.
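Below is a minimal numeric sketch (Python) of this reliability-weighted MLE combination; the cue locations and noise levels are illustrative values, not the paper's.

def mle_combine(mu_v, var_v, mu_a, var_a):
    # Combine two independent normal cues by inverse-variance weighting.
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    mu_star = w_v * mu_v + w_a * mu_a                 # weighted-average mean
    var_star = (var_v * var_a) / (var_v + var_a)      # always <= min(var_v, var_a)
    return mu_star, var_star, w_v

# Visual cue at -1.5 deg, auditory cue at +1.5 deg (auditory sd fixed at 1 deg).
for sigma_v in (0.5, 3.0):                            # low vs. high visual noise
    mu_star, var_star, w_v = mle_combine(-1.5, sigma_v ** 2, 1.5, 1.0 ** 2)
    print(f"sigma_v={sigma_v}: w_v={w_v:.2f}, mean={mu_star:.2f}, var={var_star:.2f}")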

MLE Model: Open Questions
Is a normal distribution a good model of the neural coding of sensory input?
Does this integration always occur, or are there qualifying conditions?
Does it make sense to integrate if Lv* and La* are far apart, or if the visual and auditory signals are temporally separated?

Schematic of MLE Integration
Ernst (2006): MLE integration for haptic and visual input.

Experiment: Does visual capture or MLE match the empirical data?
Method summary:
An auditory noise burst is presented at 1 of 7 locations, 1.5° apart.
The visual stimulus is corrupted by noise at 5 levels: 10%, 23%, 36%, 49%, 62%.
Single-modality trials (audio alone / noisy visual alone) → MLE parameters → predicted performance for audio + noisy visual → comparison with the empirical data.

Experiment: Single-Modality and Bimodal Trials
Single-modality: a standard stimulus (S) is followed by a comparison (C); the subject judges whether C is left or right of S.
Bimodal: the standard's audio and visual components are displaced to opposite sides of center, while the comparison's audio and visual components are co-located.
Only 1 subject was aware of the spatial discrepancy in S.

Results (1 subject)
Cumulative normal distributions are fit to the data; their means and variances are used in the MLE model.
wv receives a high value when visual noise is low; wa receives a high value when visual noise is high.
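As a sketch of how such a fit could look (not the authors' code), the snippet below maximizes the likelihood of made-up left/right response counts under a cumulative-normal psychometric function to recover a mean and standard deviation for one cue.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical single-modality data: comparison locations (deg) and counts of
# "comparison is to the right of the standard" responses out of 20 trials each.
locations = np.array([-4.5, -3.0, -1.5, 0.0, 1.5, 3.0, 4.5])
n_right = np.array([1, 2, 5, 10, 16, 19, 20])
n_trials = np.full(locations.shape, 20)

def neg_log_lik(params):
    mu, sigma = params
    p = norm.cdf((locations - mu) / sigma)       # cumulative-normal psychometric function
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x                        # feed these into the MLE combination model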

Results (MLE estimate of sensory parameters)
rt = 1 if the comparison was judged to be to the right of the standard on trial t
pt = probability of rt given the mean and variance
R = the set of responses across the T independent trials
Assuming a normal distribution, the MLE estimates of the mean and variance parameters are
µ_ML = (1/T) Σt rt
σ²_ML = (1/T) Σt (rt − µ_ML)²

L* based on MLE estimates
The mean of the combined estimate is the reliability-weighted average described above.
Its variance is smaller than that of either P(L|v) or P(L|a).

L* based on MLE estimates
The empirical estimates of wv and wa are found by maximizing the right-hand side of Eq. (3) and using Eq. (6).
τ (tau) is a scale (slope) parameter.
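A sketch of this weight-fitting step, parallel to the single-modality fit above: the probability of a "rightward" response is assumed to be a cumulative normal of the comparison location minus the combined estimate of the bimodal standard, with slope τ. Data and variable names are illustrative, not from the paper.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

V_STD, A_STD = -1.5, 1.5                 # visual / auditory components of the bimodal standard (deg)

# Hypothetical bimodal trials: comparison locations and "rightward" responses.
comparison = np.array([-3.0, -1.5, -1.5, 0.0, 0.0, 1.5, 1.5, 3.0])
response = np.array([0, 0, 1, 0, 1, 1, 1, 1])

def neg_log_lik(params):
    w_v, tau = params
    standard_est = w_v * V_STD + (1.0 - w_v) * A_STD    # combined percept of the standard
    p = norm.cdf((comparison - standard_est) / tau)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.5, 1.0], method="Nelder-Mead")
w_v_hat, tau_hat = fit.x                 # empirical visual weight and slope, to compare with the MLE prediction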

Results (bimodal; same subject, then all subjects)
Standard stimulus: visual at −1.5°, audio at +1.5°.
Point of subjective equality: −1.1° for low visual noise, 0.1° for high visual noise.
Visual input dominates at low noise; the cues receive roughly equal weight at high noise.

Empirical vs. MLE
The MLE estimates of the visual weight are significantly lower than the empirical values.
A Bayesian model with a prior that reduces the variance estimated on visual-only trials provides a good regression fit to the data.

Bayesian (MAP) Cue Integration
For visual-only trials, instead of using the MLE of the mean and variance, the right-hand side above (the likelihood) is multiplied by the prior probability of the normal distribution's parameters:
the mean is assumed to have a uniform distribution;
the variance is assumed to have an inverse-gamma distribution with parameters biased toward small variances.
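A sketch of the MAP idea for the visual-only variance, assuming a uniform prior on the mean and an inverse-gamma(alpha, beta) prior on the variance; the prior parameters and data below are arbitrary placeholders, not the values used in the paper.

import numpy as np

def map_variance(x, alpha, beta):
    # MAP estimate of a normal variance under an inverse-gamma(alpha, beta) prior
    # (with a uniform prior on the mean, the MAP mean is just the sample mean).
    mu = np.mean(x)
    ss = np.sum((x - mu) ** 2)
    n = x.size
    # The posterior over the variance is inverse-gamma(alpha + n/2, beta + ss/2);
    # the MAP estimate is its mode, b / (a + 1).
    return (beta + 0.5 * ss) / (alpha + 0.5 * n + 1.0)

x = np.array([-1.2, 0.4, 1.5, -0.8, 0.1])            # hypothetical visual-only location estimates (deg)
print(np.var(x))                                     # ML variance (about 0.9)
print(map_variance(x, alpha=3.0, beta=0.5))          # MAP variance; with these data the prior shrinks it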

Discussion
The Bayesian approach is a hybrid of the MLE and visual-capture models.
Open questions: How are variances encoded? How are priors encoded? How does temporal separation between cues affect sensory integration? What is the biological basis for Bayesian cue integration?