Neural basis of Perceptual Learning Vikranth B. Rao University of Rochester Rochester, NY.

Research Group
Alexandre Pouget, Jeff Beck, Wei-ji Ma

Perceptual Learning in Orientation Discrimination
► Orientation discrimination improves with practice: this experience-driven improvement is a form of Perceptual Learning (PL).
► Repeated exposure leads to a decrease in discrimination thresholds (Gilbert 1994).

Central Question
► Perceptual learning is a robust phenomenon across a wide variety of perceptual tasks.
► For orientation discrimination specifically: how does the learned improvement in behavioral performance relate to learning-induced changes in population activity at the network level?
► This is the question we aim to answer.

Approach
► We assume behavioral improvements are due to information increases in sensory representations (Paradiso 1988, Geisler 1989, Pouget and Thorpe 1991, Seung and Sompolinsky 1993, Lee et al. 1999, Schoups et al. 2001, Adini et al. 2002, Teich and Qian 2003).
► By information, we mean Fisher Information:
  – It relates directly to discrimination thresholds.
  – It can be computed from first- and second-order response statistics (mean and covariance).
  – It can be computed for a population of neurons.

Fisher Information
► By information, we mean the information about the stimulus feature (orientation θ) carried by a population of neurons.
► The response of one neuron in the population can be written as r_i = f_i(θ) + n_i, where f_i is the tuning curve and n_i is noise with variance σ_i²(θ).
► The Fisher Information for this neuron is I_i(θ) = f_i′(θ)² / σ_i²(θ).
► For a population of neurons with independent noise, the per-neuron terms sum: I(θ) = Σ_i f_i′(θ)² / σ_i²(θ).
(Figure: population tuning curves, Activity vs. Orientation (deg); Seung and Sompolinsky, 1993)
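The independent-noise formula can be checked numerically. Below is a minimal sketch (not the talk's code), assuming von Mises tuning curves over orientation and Poisson noise; with Poisson noise the variance equals the mean, so each neuron contributes I_i(θ) = f_i′(θ)²/f_i(θ).

```python
import numpy as np

# Hypothetical population: von Mises tuning over orientation (period pi),
# Poisson noise (variance = mean), so I_i = f_i'(theta)^2 / f_i(theta)
# and independent noise lets the per-neuron terms sum.

def tuning(theta, prefs, amp=20.0, kappa=2.0):
    """Mean rates f_i(theta) for preferred orientations `prefs` (radians)."""
    return amp * np.exp(kappa * (np.cos(2.0 * (theta - prefs)) - 1.0))

def fisher_info(theta, prefs, amp=20.0, kappa=2.0):
    f = tuning(theta, prefs, amp, kappa)
    fprime = -2.0 * kappa * np.sin(2.0 * (theta - prefs)) * f  # analytic derivative
    return np.sum(fprime ** 2 / f)

prefs = np.linspace(0.0, np.pi, 64, endpoint=False)  # 64 neurons tiling orientation
I = fisher_info(np.pi / 4.0, prefs)
print(f"population Fisher information: {I:.1f} rad^-2")
print(f"implied discrimination threshold ~ 1/sqrt(I): {np.degrees(I ** -0.5):.3f} deg")
```

Because the tuning curves tile orientation evenly, I is nearly independent of θ, which is what licenses reading 1/√I as a single discrimination threshold.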

Problems
► We know that cortical neurons are not independent.
► Mechanisms which...
  – change tuning curves may also change the correlation structure;
  – change the correlation structure may also change tuning curves;
  – change cross-correlations but not single-neuron statistics can increase information drastically (Seriès et al. 2004).
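For correlated Gaussian noise with covariance Σ, the population Fisher information generalizes to I = f′(θ)ᵀ Σ⁻¹ f′(θ). The sketch below, under assumed "limited-range" correlations (stronger between similarly tuned neurons; all parameters illustrative), shows that changing only the correlation structure, with every neuron's tuning curve and variance held fixed, changes the information:

```python
import numpy as np

# Assumed setup: von Mises tuning, Poisson-like variances (= mean rate) on the
# diagonal, and limited-range correlations c_ij = c_max * exp(-d_ij / L),
# where d_ij is circular distance between preferred orientations.
# With covariance Sigma, Fisher information is I = f'^T Sigma^{-1} f'.

n, amp, kappa, theta = 64, 20.0, 2.0, np.pi / 4.0
prefs = np.linspace(0.0, np.pi, n, endpoint=False)

f = amp * np.exp(kappa * (np.cos(2.0 * (theta - prefs)) - 1.0))
fprime = -2.0 * kappa * np.sin(2.0 * (theta - prefs)) * f

d = np.abs(prefs[:, None] - prefs[None, :])
d = np.minimum(d, np.pi - d)                  # circular distance between prefs
C = 0.3 * np.exp(-d / 0.5)                    # limited-range correlation matrix
np.fill_diagonal(C, 1.0)

Sigma_ind = np.diag(f)                        # independent noise, variance = mean
Sigma_corr = C * np.sqrt(np.outer(f, f))      # same variances, correlated noise

I_ind = fprime @ np.linalg.solve(Sigma_ind, fprime)
I_corr = fprime @ np.linalg.solve(Sigma_corr, fprime)
print(f"independent: I = {I_ind:.1f}, correlated: I = {I_corr:.1f}")
```

Single-neuron statistics (mean and variance) are identical in the two cases; only the off-diagonal covariance differs, yet I changes. This is exactly why single-cell measurements alone cannot settle whether learning added information at the population level.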

Investigative Approach
► We use networks of biologically plausible spiking neurons with realistic correlated noise to study the neural basis of PL.
► We therefore consider two spiking-network models:
  – Linear-Nonlinear-Poisson (LNP) neurons: analytically tractable but less biologically realistic.
  – Conductance-based integrate-and-fire (CBIF) neurons: biologically very realistic but analytically intractable.
► Both use biologically plausible connectivity and single-neuron statistics (near-unit Fano factor), with enough simulations to produce a reasonable lower bound on Fisher information.
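The LNP half of that comparison is simple enough to sketch. The assumed form below (cosine linear stage, exponential nonlinearity, Poisson spike generation; not the talk's actual network) also illustrates the near-unit Fano factor constraint, since Poisson counts have variance equal to their mean:

```python
import numpy as np

# Minimal LNP (Linear-Nonlinear-Poisson) sketch:
#   linear stage:     tuned drive cos(2(theta - pref))
#   nonlinearity:     exponential, giving a firing rate in spikes/s
#   spike generation: Poisson counts over a window of T seconds
# Poisson output implies a Fano factor (variance / mean) of 1 per neuron.

rng = np.random.default_rng(0)

def lnp_counts(theta, prefs, T=0.5, gain=40.0, kappa=2.0, trials=2000):
    drive = np.cos(2.0 * (theta - prefs))            # linear stage
    rate = gain * np.exp(kappa * (drive - 1.0))      # static nonlinearity (spikes/s)
    return rng.poisson(rate * T, size=(trials, prefs.size))

prefs = np.linspace(0.0, np.pi, 32, endpoint=False)
counts = lnp_counts(np.pi / 4.0, prefs)
fano = counts.var(axis=0) / counts.mean(axis=0)
print(f"mean Fano factor across the population: {fano.mean():.3f}")
```

Note that the Poisson draws here are independent across neurons; the networks studied in the talk additionally have correlated noise, which this single-stage sketch does not capture.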

Exploring candidate mechanism(s) for PL
► We investigate changes in Fisher Information under the following manipulations of network dynamics:
  – Sharpening, via feed-forward connectivity or via recurrent connectivity.
  – Amplification, via feed-forward connections or via recurrent connections.
  – Increasing the number of neurons.
► We use the analytically tractable LNP network to generate predictions and the CBIF network to confirm them.
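As a baseline for these manipulations, sharpening can be explored in the independent-Poisson limit by simply narrowing the tuning curves (larger κ) at fixed peak rate. This is a sketch of the logic only, with illustrative parameters, not the network simulation: with correlated noise the outcome depends on how the sharpening is implemented.

```python
import numpy as np

# Independent-Poisson baseline: compare population Fisher information for
# broad vs. sharpened tuning (larger kappa = narrower curves, same peak rate).
# Parameters are illustrative, not fit to the talk's simulations.

def fisher_poisson(theta, prefs, amp=20.0, kappa=2.0):
    f = amp * np.exp(kappa * (np.cos(2.0 * (theta - prefs)) - 1.0))
    fprime = -2.0 * kappa * np.sin(2.0 * (theta - prefs)) * f
    return np.sum(fprime ** 2 / f)   # Poisson: variance = mean

prefs = np.linspace(0.0, np.pi, 64, endpoint=False)
I_broad = fisher_poisson(np.pi / 4.0, prefs, kappa=1.0)
I_sharp = fisher_poisson(np.pi / 4.0, prefs, kappa=3.0)
print(f"broad (kappa=1): I = {I_broad:.1f}")
print(f"sharp (kappa=3): I = {I_sharp:.1f}")
```

Under these assumptions sharpening increases information; the point of the simulations that follow is that this conclusion can reverse once the sharpening mechanism (feed-forward vs. recurrent) also reshapes the noise correlations.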

Sharpening – LNP Simulations
(Figures: tuning curves before and after sharpening; axes: Orientation (deg) vs. Activity (spikes/s))

Results – Sharpening
► Sharpening by adjusting feed-forward thalamocortical connections.
(Figures: tuning curves, Activity (spikes/s) vs. Orientation (deg); log(variance) vs. log(mean))

Results – Sharpening
► Sharpening by adjusting recurrent lateral connections.
(Figures: tuning curves, Activity (spikes/s) vs. Orientation (deg); log(variance) vs. log(mean))

Comparing sharpening schemes

Future Work
► Exploring changes in Fisher information as a result of amplification and of increasing the number of neurons.
► Exploring other ways of increasing information.
► Exploring early versus late theories of visual learning.

Conclusion
► We investigate the population-level changes that subserve the behavioral improvement seen in PL.
► We follow the prevalent view that behavioral improvement is due to an information increase in the population code.
► Once the independence assumption is relaxed, changes at the single-cell level can no longer be simply related to changes in information throughput at the population level.
► Exploring sharpening at the population level, using networks of spiking neurons with realistic correlated noise, yields the following results:
  – Sharpening through an increase in feed-forward connections leads to an increase in information throughput.
  – Sharpening by changing the recurrent lateral connections leads to a decrease in information throughput.