Spontaneous activity in V1: a probabilistic framework

Presentation transcript:

Spontaneous activity in V1: a probabilistic framework
Gergő Orbán, Volen Center for Complex Systems, Brandeis University
Sloan-Swartz Centers Annual Meeting, 2007

Normative account for visual representations
Optimization criterion for the emergence of simple-cell receptive fields: independent 'filters' + sparseness (Bell & Sejnowski, 1995; Olshausen & Field, 1996).
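To make the criterion concrete, here is a minimal Python sketch of an Olshausen & Field (1996)-style objective: reconstruct the image from filter activities while penalizing non-sparse activities. It is an illustrative stand-in rather than the exact cost from the cited papers, and all names (A, y, lam) are hypothetical.

```python
import numpy as np

def sparse_coding_objective(x, A, y, lam=0.1):
    """Illustrative sparse-coding cost in the spirit of Olshausen & Field (1996).

    x: image patch (flattened, length D); A: basis of filters (D x K);
    y: filter activities (length K); lam: weight of the sparseness penalty.
    Minimizing this over y (coding) and A (learning) tends to produce
    localized, oriented, Gabor-like filters on natural images.
    """
    reconstruction_error = np.sum((x - A @ y) ** 2)
    sparseness_penalty = lam * np.sum(np.log1p(y ** 2))  # one common sparse cost
    return reconstruction_error + sparseness_penalty
```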

Activity in V1
The spectrum of V1 physiology is much richer:
- spontaneous activity,
- response variability,
- temporal dynamics.
Can we devise a framework that gives a functional description of visual processing, uses normative principles in probabilistic learning, and gives a more complete interpretation of V1 activity?

Computational paradigm
Density estimation:
- a statistically well-founded principle,
- allows the representation of uncertainty,
- efficient for making predictions.
Internal representation: useful and biologically plausible.
Notation: x is the retinal image / RGC output; y is the neural activity.
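As a toy illustration of this paradigm, the sketch below assumes a linear-Gaussian generative model (an assumption made only for illustration; the talk's actual model is the Fields-of-Experts model on the next slide). It shows how the internal representation can carry both a point estimate of y and its uncertainty.

```python
import numpy as np

# Toy linear-Gaussian generative model: x = A y + noise, with
# y the latent "neural activity" and x the retinal image / RGC output.
rng = np.random.default_rng(0)
D, K = 16, 4                       # image pixels, latent units
A = rng.normal(size=(D, K))        # stand-in projective fields
sigma2 = 0.1                       # pixel noise variance

def posterior(x, prior_var=1.0):
    """Mean and covariance of P(y | x) for the linear-Gaussian model."""
    precision = A.T @ A / sigma2 + np.eye(K) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ A.T @ x / sigma2
    return mean, cov

y_true = rng.normal(size=K)
x = A @ y_true + np.sqrt(sigma2) * rng.normal(size=D)
mean, cov = posterior(x)
print("posterior mean:", mean)                # point estimate
print("posterior variances:", np.diag(cov))   # represented uncertainty
```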

Spontaneous activity
In the awake brain there is patterned neural activity not directly related to the stimulus: evoked vs. spontaneous (Tsodyks et al., 1999).
Patterns of neural activity are similar in the stimulus-evoked and closed-eye conditions.
There are long-range correlations in neural activity (Fiser et al., 2004).

Probabilistic model: Fields of Experts
Filters are components in a Boltzmann energy function (Osindero, Welling & Hinton, 2006), with a sparse prior (Student-t distribution) and an image model assuming translational invariance (Roth & Black, 2005).
Learning: standard contrastive divergence and hybrid Monte Carlo (Hinton, 2002).
[Figure: learned receptive fields]
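A minimal sketch of the ingredients named on this slide, under stated assumptions: a Student-t expert energy, its gradient, and a crude contrastive-divergence update that substitutes a few unadjusted Langevin steps for the hybrid Monte Carlo used in the talk. All parameter values and helper names are illustrative.

```python
import numpy as np

def foe_energy(x, J, alpha):
    """Fields-of-Experts-style energy with Student-t experts.

    x: flattened image patch (length D); J: filters (K x D);
    alpha: per-expert shape parameters (length K).
    E(x) = sum_k alpha_k * log(1 + 0.5 * (J_k . x)^2)
    """
    resp = J @ x
    return np.sum(alpha * np.log1p(0.5 * resp ** 2))

def foe_grad_x(x, J, alpha):
    """dE/dx, the quantity a Langevin or hybrid MC sampler needs."""
    resp = J @ x
    return J.T @ (alpha * resp / (1.0 + 0.5 * resp ** 2))

def cd_filter_grad(x_data, J, alpha, eps=0.05, n_steps=10, rng=None):
    """Crude CD-style log-likelihood gradient for the filters J.

    Starts the chain at the data point, runs a few Langevin steps
    (a simplification of hybrid Monte Carlo), then contrasts data
    statistics against short-chain model statistics.
    """
    rng = rng or np.random.default_rng()
    x_model = x_data.astype(float).copy()
    for _ in range(n_steps):
        noise = rng.normal(size=x_model.shape)
        x_model += -0.5 * eps ** 2 * foe_grad_x(x_model, J, alpha) + eps * noise

    def dE_dJ(x):
        resp = J @ x
        w = alpha * resp / (1.0 + 0.5 * resp ** 2)
        return np.outer(w, x)  # dE/dJ[k, d] = w_k * x_d

    # d log p / dJ  ~  -(dE/dJ at the data)  +  (dE/dJ at the model sample)
    return -dE_dJ(x_data) + dE_dJ(x_model)
```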

Spontaneous activity as prior sampling
Evoked activity: sampling from the posterior, P(y|x).
ANSATZ: spontaneous activity is sampling from the prior, P(y).
Averaging evoked activity over natural image statistics recovers the prior: P(y) = ∫ P(y|x) P(x) dx.
This gives an intuitive link between evoked and spontaneous activities.
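The ansatz can be checked exactly in a discrete toy model: by Bayes' rule, averaging the posterior P(y|x) over the image marginal P(x) gives back the prior P(y). The sketch below is a self-contained numerical check, with all sizes and distributions chosen arbitrarily for illustration.

```python
import numpy as np

# Discrete check of the ansatz P(y) = sum_x P(y|x) P(x): averaging
# evoked (posterior) activity over the natural image distribution
# recovers the spontaneous (prior) distribution.
rng = np.random.default_rng(1)
n_y, n_x = 3, 5
prior_y = rng.dirichlet(np.ones(n_y))            # P(y): spontaneous
lik = rng.dirichlet(np.ones(n_x), size=n_y)      # P(x|y), one row per y

p_x = prior_y @ lik                              # P(x) = sum_y P(x|y) P(y)
post = (lik * prior_y[:, None]) / p_x[None, :]   # P(y|x) by Bayes' rule

avg_evoked = post @ p_x                          # sum_x P(y|x) P(x)
print(np.allclose(avg_evoked, prior_y))          # True
```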

Images generated by the model
[Figure: prior over activities → sampling → neural activities → filters → dreamed image]
Images generated from the prior have long-range structure.
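A hedged sketch of the "dreamed image" pipeline in the schematic, assuming the simplest reading: sample activities from a heavy-tailed prior and combine filters linearly. The actual model may generate images differently (e.g. by MCMC on the Boltzmann energy), and the filter contents here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
K, side = 64, 8
filters = rng.normal(size=(K, side * side))  # stand-in for learned filters

# Prior over activities -> sampling -> neural activities:
y = rng.standard_t(df=2.5, size=K)           # sparse, heavy-tailed sample

# Neural activities -> filters -> dreamed image:
dreamed = (y @ filters).reshape(side, side)
print(dreamed.shape)                         # (8, 8) dreamed image patch
```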

Evoked and spontaneous neural activity
[Figure: correlation between hidden units, model vs. experiment (Fiser et al., 2004)]
Evoked and spontaneous activities have similar correlational structure.
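One way to quantify "similar correlational structure" is to correlate the off-diagonal entries of the two unit-by-unit correlation matrices. The helper below is an assumed analysis sketch in that spirit, not necessarily the exact measure used in Fiser et al. (2004).

```python
import numpy as np

def corr_similarity(act_evoked, act_spont):
    """Compare correlational structure of evoked vs. spontaneous activity.

    act_*: arrays of shape (n_samples, n_units). Returns the Pearson
    correlation between the off-diagonal entries of the two
    unit-by-unit correlation matrices.
    """
    c_e = np.corrcoef(act_evoked, rowvar=False)
    c_s = np.corrcoef(act_spont, rowvar=False)
    mask = ~np.eye(c_e.shape[0], dtype=bool)
    return np.corrcoef(c_e[mask], c_s[mask])[0, 1]
```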

Spontaneous neural activity before learning
[Figure: experiment (Fiser et al., 2004)]
In the probabilistic model, the correlational patterns in neural activity are a result of learning.

Conclusions
The probabilistic framework provides a viable explanation for spontaneous activity in V1:
- spontaneous activity as sampling from the prior;
- long-range correlations are present in both evoked and spontaneous activities;
- the tendency of changes in spatial correlations with training matches experimental results;
- sleep-wake.

Bottom line
In the probabilistic framework:
- temporal dynamics ↔ top-down / lateral interactions;
- spontaneous activity ↔ prior sampling;
- response variability ↔ posterior variance.

Special thanks to Pietro Berkes (Gatsby).
Collaborators: Máté Lengyel (Gatsby), József Fiser (Brandeis).

High-level computational principles + physiology
Computational paradigm: normative probabilistic model.
Experimental paradigm: spontaneous activity in V1:
- prior sampling,
- posterior variance,
- top-down / lateral interactions.
Are there sensible interpretations that assign functional roles to spontaneous activity?