Bayesian vision
Nisheeth, 14th February 2019

Problems for FIT (feature integration theory) accounts of vision: search is very fast in this situation only when the objects look 3D. Can the direction a whole object points be a "feature"?

Asymmetries in visual search: the presence of a "feature" is easier to find than its absence.

Kristjansson & Tse (2001) Faster detection of presence than absence - but what is the “feature”?

Familiarity and asymmetry: the search asymmetry appears for German readers but not for Cyrillic readers.

Gestalt effects

Prägnanz: perception is not just bottom-up integration of features. There is more to a whole image than the sum of its parts.

Not understood computationally: the Gestalt principles are conceptually clear, and DCNNs can learn them, but a computational translation of the principles is still missing. https://arxiv.org/pdf/1709.06126.pdf

Concept-selective neurons
https://www.newscientist.com/article/dn7567-why-your-brain-has-a-jennifer-aniston-cell/

Summary
Classic accounts of visual perception have focused on bottom-up integration of features
Consistent with data from visual search experiments
Inconsistent with phenomenological experience in naturalistic settings
Top-down influences affect visual perception. But how?
We will see one promising modeling strategy today

Which curves are raised?

Which curves are raised?

http://www.psych.nyu.edu/maloney/MamassianLandyMaloney.MITPress2003.pdf

Vision uses more information than what impacts the retina: what you see isn't exactly what you get.

Perception as inference: we want to know the world state θ, but we only get to see the image φ.

Bayesian visual perception
The observer constructs p(θ|φ) using:
- p(φ|θ), the physiologically determined likelihood
- p(θ), visual priors on percepts
The Bayesian revolution in perception: using Bayesian analysis to estimate visual priors from behavioral responses
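
In symbols, the observer's computation is just Bayes' rule, with φ the observed image and θ the inferred world state:

```latex
p(\theta \mid \phi) = \frac{p(\phi \mid \theta)\, p(\theta)}{p(\phi)},
\qquad p(\phi) = \int p(\phi \mid \theta)\, p(\theta)\, d\theta
```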

Bayesian prior estimation
Regular sequential Bayes:
- Update the prior by multiplying it with the likelihood
- Obtain the posterior
- Use the posterior as the prior for the next time step
Inverse procedure (sketched below):
- Know the posterior distribution given all the data
- Divide the posterior by the data likelihoods, sequentially
- Each obtained prior serves as the posterior for the earlier time step
- Finally, we are left with the original prior
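
A minimal sketch of both directions, assuming a discrete grid over a single Bernoulli parameter θ (the data and grid here are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical example: infer a coin's bias theta on a discrete grid.
theta = np.linspace(0.01, 0.99, 99)
prior = np.full_like(theta, 1.0 / theta.size)       # flat original prior
data = [1, 0, 1, 1]                                 # binary observations

# Regular sequential Bayes: multiply by the likelihood, normalize,
# and reuse each posterior as the prior for the next observation.
belief = prior.copy()
for x in data:
    likelihood = theta if x == 1 else 1.0 - theta   # Bernoulli likelihood
    belief = belief * likelihood
    belief /= belief.sum()

# Inverse procedure: divide the final posterior by each observation's
# likelihood in reverse; each result is the posterior one step earlier.
recovered = belief.copy()
for x in reversed(data):
    likelihood = theta if x == 1 else 1.0 - theta
    recovered = recovered / likelihood
    recovered /= recovered.sum()

print(np.allclose(recovered, prior))                # True: the original prior
```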

Bayesian parameter estimation
It is typically hard to obtain likelihoods for multiple sequential updates in closed form.
Alternative procedure (sketched below):
- Generate posterior distributions using different parameters determining the prior
- Find the parameters that yield posterior distributions that best fit the true distribution
- Frequently, some form of MCMC sampling has to be used
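
A minimal sketch of the alternative procedure, assuming a Beta(a, b) prior over a per-trial rate and binomially distributed counts; the true hyperparameters, and the coarse grid search standing in for MCMC, are both illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical data: each trial's latent rate theta comes from an unknown
# Beta(a, b) prior; we only observe success counts k out of n_obs.
rng = np.random.default_rng(0)
true_a, true_b = 4.0, 2.0
n_trials, n_obs = 500, 10
theta = rng.beta(true_a, true_b, size=n_trials)
k = rng.binomial(n_obs, theta)

# With a beta prior and a binomial likelihood, the marginal of k is
# beta-binomial, so scoring candidate (a, b) pairs needs no integration.
grid = np.linspace(0.5, 8.0, 16)

def score(a, b):
    return stats.betabinom.logpmf(k, n_obs, a, b).sum()

best = max((score(a, b), a, b) for a in grid for b in grid)
print(f"recovered prior: Beta({best[1]:.1f}, {best[2]:.1f})")
```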

Demo
A string of binary observations, modeled using a binomial likelihood and a beta prior:
- Posterior update
- Prior estimation (both sketched below)
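
A minimal version of the demo's forward pass (the observation string is illustrative). Because the beta prior is conjugate to the binomial likelihood, the posterior update is just count bookkeeping:

```python
# Beta(1, 1) is the uniform prior over the Bernoulli rate.
a, b = 1.0, 1.0
observations = [1, 1, 0, 1, 0, 1]      # illustrative binary observations

for x in observations:
    a, b = a + x, b + (1 - x)          # conjugate posterior update

print(f"posterior: Beta({a:.0f}, {b:.0f})")   # Beta(5, 3) for this string

# Prior estimation is the same bookkeeping run backwards: subtracting each
# observation's count from (a, b), step by step, recovers Beta(1, 1).
```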

Prior estimation

Applied to vision data
The "narrow score" is the proportion of times an image is described as having bulging narrow ridges.
http://www.psych.nyu.edu/maloney/MamassianLandyMaloney.MITPress2003.pdf

Simplifying assumptions
- Response proportions are assumed to reflect the posterior probability p(narrow|image)
- Prior biases are assumed to influence responses independently
- Illumination and viewpoint are modeled as drawn from normal distributions on the respective angles (a schematic sketch follows below)
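
A schematic of the third assumption; the means and standard deviations below are made up for illustration, not the values fitted in the chapter:

```python
from scipy import stats

# Independent normal priors on the two angles (in degrees from vertical).
illum_prior = stats.norm(loc=-25.0, scale=30.0)   # light from above-left
view_prior = stats.norm(loc=15.0, scale=20.0)     # viewed slightly from above

# Each interpretation of an ambiguous image implies an (illumination,
# viewpoint) pair; under independence its prior density is a product.
def interpretation_prior(illum_deg, view_deg):
    return illum_prior.pdf(illum_deg) * view_prior.pdf(view_deg)

# The reading whose implied angles are more typical is favored a priori:
print(interpretation_prior(-20.0, 10.0) > interpretation_prior(40.0, -30.0))
```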

Findings
- Illumination angles are a priori assumed to be above and to the left
- Increasing shading contrast reduces the variance of the illumination prior
- Increasing contour contrast reduces the variance of the viewpoint prior
Other experiments have shown that:
- People think they are looking at objects from above
- People think angles are a priori likely to be right angles
- And many more

What does it mean?
Bayesian models of visual perception give us a way of describing the priors that people use, but they don't explain how those priors come to be the way they are.
Positives: can describe both perceptual and cognitive priors
Negatives: hard to characterize the true provenance of empirically determined priors