Plasticity and learning. Dayan and Abbott, Chapter 8.


Introduction
Learning occurs through synaptic plasticity. Hebb (1949): if neuron A often contributes to the firing of neuron B, then the synapse from A to B should be strengthened.
– Stimulus-response learning (Pavlov)
– Converse: if neuron A does not contribute to the firing of B, the synapse is weakened
– Plasticity of this kind is observed in hippocampus, neocortex, and cerebellum

LTP and LTD at Schaffer collateral inputs to the CA1 region of a rat hippocampal slice. High-frequency stimulation yields LTP; low-frequency stimulation yields LTD. NB: no stimulation yields no LTD (plasticity requires presynaptic activity).

Functions of learning
– Unsupervised learning (ch 10): feature selection, receptive fields, density estimation
– Supervised learning (ch 7): input-output mapping, feedback as a teacher signal
– Reinforcement learning (ch 9): feedback in terms of reward, similar to control theory
– Hebbian learning (ch 8): biologically plausible, plus normalization; covariance rule for (un)supervised learning; ocular dominance, maps

Rate model with fast time scale. Neural activity is described by a continuous firing rate, not a spike train: v is the output rate, u the vector of input rates, and w the vector of synaptic weights. If tau_r is small with respect to the learning time scale, the output can be replaced by its steady-state value:
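The slide's equation did not survive the transcript; a reconstruction in the book's notation (rate dynamics and their steady state):

```latex
\tau_r \frac{dv}{dt} = -v + \mathbf{w}\cdot\mathbf{u}
\quad\xrightarrow{\ \tau_r \ \text{small}\ }\quad
v = \mathbf{w}\cdot\mathbf{u}
```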

Basic Hebb rule. v and u are functions of time, which makes the dynamics hard to solve. An alternative is to assume that u is drawn from a time-independent distribution and to average over it. Using v = w·u we get:
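The averaged form is missing from the transcript; reconstructed in the book's notation, with Q the input correlation matrix:

```latex
\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u}
\qquad\Longrightarrow\qquad
\tau_w \frac{d\mathbf{w}}{dt} = \langle v\,\mathbf{u}\rangle = Q\,\mathbf{w},
\qquad Q = \langle \mathbf{u}\,\mathbf{u}^{\mathsf T}\rangle .
```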

Basic Hebb rule. The Hebb rule is unstable because the weight norm always increases. The continuous differential equation can be simulated using an Euler scheme.
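A minimal sketch (not from the slides) of such an Euler simulation; the input distribution, time constants, and step size are illustrative assumptions. It shows the weight norm growing without bound:

```python
import numpy as np

# Euler integration of the basic Hebb rule tau_w dw/dt = v*u with v = w.u.
# Inputs are drawn from a fixed zero-mean distribution; rate non-negativity
# is ignored here since the point is only the runaway growth of |w|.
rng = np.random.default_rng(0)
tau_w, dt, n_steps = 100.0, 0.1, 5000
u_samples = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n_steps)

w = np.array([0.1, -0.05])
for u in u_samples:
    v = w @ u                     # steady-state output rate v = w.u
    w = w + (dt / tau_w) * v * u  # Euler step of the Hebb rule

print("final weight norm:", np.linalg.norm(w))  # keeps growing with more steps
```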

Covariance rule. The basic Hebb rule describes only LTP, since u and v are positive. LTD occurs when presynaptic activity co-occurs with low postsynaptic activity; this can be captured by subtracting a threshold from the postsynaptic rate or, alternatively, from the presynaptic rates.
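The two threshold variants referred to here, reconstructed (the slide's equations are missing):

```latex
\tau_w \frac{d\mathbf{w}}{dt} = (v - \theta_v)\,\mathbf{u}
\qquad\text{or}\qquad
\tau_w \frac{d\mathbf{w}}{dt} = v\,\big(\mathbf{u} - \boldsymbol{\theta}_u\big).
```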

Covariance rule. When the thresholds are set to the mean rates (theta_v = <v> or theta_u = <u>), either rule produces, on average, tau_w dw/dt = C·w, where C is the input covariance matrix. Like the basic Hebb rule, the covariance rule is unstable.

BCM rule. Bienenstock, Cooper, and Munro (1982): requires both pre- and postsynaptic activity for learning. For a fixed threshold, the BCM rule is also unstable.

BCM rule. The BCM rule can be made stable by letting the threshold slide, with tau_theta smaller than tau_w. The BCM rule implements competition between synapses: strengthening one synapse increases the threshold and makes strengthening of other synapses more difficult. Such competition can also be implemented by normalization.
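A hedged sketch of the BCM rule with its sliding threshold; the stimulus statistics, time constants, and the non-negativity clipping are illustrative assumptions, not the slides' parameters:

```python
import numpy as np

# BCM rule:          tau_w  dw/dt     = v * u * (v - theta)
# sliding threshold: tau_th dtheta/dt = v**2 - theta,  with tau_th << tau_w.
# A strong output raises theta, making further potentiation harder,
# which implements competition between synapses.
rng = np.random.default_rng(1)
tau_w, tau_th, dt = 500.0, 10.0, 0.1
w, theta = np.array([0.5, 0.5]), 1.0

for _ in range(20000):
    u = np.abs(rng.normal([1.0, 0.5], 0.2))  # non-negative input rates (assumed stimuli)
    v = max(w @ u, 0.0)                      # rectified output rate
    w += (dt / tau_w) * v * u * (v - theta)
    theta += (dt / tau_th) * (v ** 2 - theta)
    w = np.clip(w, 0.0, None)                # keep weights non-negative

print("weights:", w, "threshold:", theta)
```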

Synaptic normalization. Limit the sum of the weights or the sum of squared weights, imposing the constraint either rigidly or dynamically. Two examples:
– Rigid scheme for the sum-of-weights constraint (subtractive normalization)
– Dynamic scheme for the sum of squared weights (multiplicative normalization)

Subtractive normalization ensures that the sum of the weights does not change. It is not clear how to implement this rule biophysically (it is non-local), and we must add the constraint that weights remain non-negative.
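The rule itself, reconstructed in the book's notation (n is the vector of ones and N_u the number of inputs); the second equation expresses that the summed weight is conserved:

```latex
\tau_w \frac{d\mathbf{w}}{dt}
 = v\,\mathbf{u} - \frac{v\,(\mathbf{n}\cdot\mathbf{u})}{N_u}\,\mathbf{n}
\qquad\Longrightarrow\qquad
\tau_w \frac{d(\mathbf{n}\cdot\mathbf{w})}{dt} = 0 .
```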

Multiplicative normalization: the Oja rule (1982), tau_w dw/dt = v u − alpha v² w, implements the constraint dynamically; the squared weight norm relaxes to 1/alpha.
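A short sketch of the Oja rule extracting the principal eigenvector; the correlation matrix Q, learning rate, and sample count are assumptions chosen for illustration:

```python
import numpy as np

# Oja rule: tau_w dw/dt = v*u - alpha*v**2*w.  The weights converge to the
# principal eigenvector of the input correlation matrix, with |w|^2 -> 1/alpha.
rng = np.random.default_rng(2)
Q = np.array([[2.0, 1.2], [1.2, 1.0]])  # assumed input correlation matrix
u_samples = rng.multivariate_normal([0.0, 0.0], Q, size=20000)

alpha, tau_w, dt = 1.0, 200.0, 0.1
w = rng.normal(size=2) * 0.1
for u in u_samples:
    v = w @ u
    w += (dt / tau_w) * (v * u - alpha * v ** 2 * w)

_, eigvecs = np.linalg.eigh(Q)
print("learned w (normalized):", w / np.linalg.norm(w))
print("principal eigenvector: ", eigvecs[:, -1])  # same direction up to sign
```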

Unsupervised learning
Adapting the network to a set of tasks:
– Neural selectivity, receptive fields
– Cortical maps
The process depends partly on neural activity and partly on activity-independent mechanisms (axon growth).
Ocular dominance:
– Adult neurons favor one eye over the other (layer 4 input from the LGN)
– Neurons are clustered in bands or stripes

Single postsynaptic neuron. We analyze Eq. 8.5, the averaged Hebb rule tau_w dw/dt = Q·w.

Single postsynaptic neuron. Solution in terms of the eigenvectors of Q. The eigenvalues are positive, so the solution explodes. Asymptotically the weight vector aligns with e1, the principal eigendirection, and the neuron projects its input onto this direction:
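The missing expressions, reconstructed: the general solution, its asymptotic direction, and the resulting output,

```latex
\mathbf{w}(t) = \sum_b c_b\, e^{\lambda_b t/\tau_w}\, \mathbf{e}_b
\;\xrightarrow{\ t\to\infty\ }\;
\mathbf{w}(t) \propto \mathbf{e}_1 ,
\qquad
v = \mathbf{w}\cdot\mathbf{u} \propto \mathbf{e}_1\cdot\mathbf{u}.
```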

Single postsynaptic neuron. Example with two weights: the weights grow indefinitely, one positive and one negative, the choice depending on the initial conditions. Constraining the weights to [0, 1] yields different solutions depending on the initial values.

Single postsynaptic neuron. Subtractive normalization, averaged over inputs and analyzed in terms of eigenvectors:
– In the ocular dominance case e1 = n/sqrt(N_u); for w along e1 the right-hand side is zero, i.e. this component of w is unaltered
– In the other eigendirections the normalizing term vanishes
– w is therefore asymptotically dominated by the second eigenvector

Hebbian development of ocular dominance. Subtractive normalization may solve this: since e1 is proportional to n, the weight growth is dominated by e2 = (1, −1), which drives the two eyes apart.

Single postsynaptic neuron. Using the Oja rule: show that each eigenvector of Q is a fixed point; one can show that only the principal eigenvector is stable.

Single postsynaptic neuron.
A: behavior of unnormalized Hebbian learning and of multiplicative normalization (the Oja rule), which gives w proportional to e1; this is similar to PCA.
B: shifting the mean of u may yield a different solution.
C: covariance-based learning corrects for the mean. Saturation constraints may alter this conclusion.

Hebbian development of ocular dominance. Model a layer 4 cell with input from two LGN cells, each associated with a different eye.
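For this two-input model the correlation matrix and its eigenvectors take a simple form (a reconstruction; q_s denotes the same-eye and q_d the between-eye correlation):

```latex
Q = \begin{pmatrix} q_s & q_d \\ q_d & q_s \end{pmatrix},
\qquad
\mathbf{e}_1 = \tfrac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix},\ \lambda_1 = q_s + q_d,
\qquad
\mathbf{e}_2 = \tfrac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix},\ \lambda_2 = q_s - q_d .
```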

Hebbian development of orientation selectivity. Cortical receptive fields are built from LGN input: ON-center (white) and OFF-center (black) cells excite the cortical neuron. Spectral analysis is also applicable to nonlinear systems. The dominant eigenvector is uniform; non-uniform receptive fields result from sub-dominant eigenvectors.

Multiple postsynaptic neurons

Hebbian development of ocular dominance stripes.
A: model in which right- and left-eye inputs drive an array of cortical neurons.
B: ocular dominance maps. Top: light and dark areas in the upper and lower cortical layers show ocular dominance in cat primary visual cortex. Bottom: model of 512 neurons with Hebbian learning.

Hebbian development of ocular dominance stripes. Use Eq. 8.31 with W = (w+, w−), an N×2 matrix (see book). With subtractive normalization the summed weight does not change (dw+/dt = 0), so the dynamics are carried by the difference w−, and the ocular dominance pattern is given by the eigenvector of K with the largest eigenvalue.

Hebbian development of ocular dominance stripes. Suppose K is translation invariant:
– Periodic boundary conditions simulate a patch of cortex, ignoring boundary effects
– The eigenvectors are sine and cosine functions of cortical position
– The eigenvalues are the corresponding Fourier components of K
– The solution of the learning dynamics is therefore spatially periodic (cf. fig 8.7)

Competitive Hebb rule. A more abstract model that captures nonlinear effects:
– Long-range inhibition yields a soft-max over cortical outputs
– The soft-max activity is then locally averaged across cortex
It can also produce specialization without subtractive normalization.

Competitive Hebb rule. One-dimensional ocular dominance map for the entire visual field (rather than a single input as before):
– a labels cortical position, b labels retinal position; periodic boundaries (torus)
– Retinotopic receptive fields develop
– A: weights after learning (panels B and C show further results of the same simulation)

Feature-based models. Multi-dimensional input (retinal location, ocular dominance, orientation preference, ...). Replace input neurons by input features; W_ab is the selectivity of neuron a to feature b.
– Feature u1 is the location on the retina, in retinal coordinates
– Feature u2 is ocularity (how much the stimulus favors the left eye over the right), a single number
The couplings to neuron a describe its preferred stimulus. The activity of output a is determined by the match between the stimulus features and the preferred values W_a.

Feature-based models. The output is a soft-max, combined with lateral averaging:
– Self-organizing map (SOM)
– Elastic net
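A compact sketch of a self-organizing map in this spirit; the feature ranges, neighbourhood width, and learning rate are illustrative assumptions rather than the parameters behind the figures:

```python
import numpy as np

# 1-D cortical sheet of units, each with a preferred feature vector
# W[a] = (retinal position, ocularity).  The winner for a stimulus is the
# best-matching unit (hard max); learning is spread over a Gaussian
# cortical neighbourhood, which is what produces smooth feature maps.
rng = np.random.default_rng(3)
n_units, n_steps = 50, 10000
W = rng.uniform(-0.05, 0.05, size=(n_units, 2))
coords = np.arange(n_units)

eta, sigma = 0.1, 1.5  # learning rate, neighbourhood width (illustrative)
for _ in range(n_steps):
    u = np.array([rng.uniform(0.0, 1.0),     # retinal position
                  rng.choice([-0.3, 0.3])])  # ocularity (left vs right eye)
    winner = np.argmin(np.sum((W - u) ** 2, axis=1))
    h = np.exp(-(coords - winner) ** 2 / (2 * sigma ** 2))
    W += eta * h[:, None] * (u - W)

# After learning, W[:, 0] typically forms a roughly monotonic retinotopic
# gradient; depending on sigma relative to the ocularity spacing, W[:, 1]
# tends either to alternate in eye-dominated blocks (stripes) or to stay
# near zero (binocular).
print(np.round(W[:10], 2))
```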

Feature-based models. Optical imaging shows ocularity and orientation selectivity in macaque primary visual cortex. Dark lines are ocular dominance boundaries, light lines are iso-orientation contours. Note the pinwheel singularities and linear zones.

Feature-based models. Elastic net output; SOM and competitive Hebbian rules can produce similar maps.

Anti-Hebbian modification. Another way to make different outputs specialize is adaptive anti-Hebbian modification of the lateral connections. With the Oja rule alone, each output a would become identical. Anti-Hebbian modification has been observed at synapses from parallel fibers to Purkinje cells in the cerebellum. The combination of Hebbian feedforward and anti-Hebbian lateral learning yields different eigenvectors as outputs.

Timing-based rules. Left: in vitro cortical slice. Right: in vivo Xenopus tadpoles. LTP occurs when the presynaptic spike precedes the postsynaptic spike; LTD occurs when the presynaptic spike follows the postsynaptic spike.

Timing-based rules. Simulating spike-timing-dependent plasticity properly requires spiking neurons, but an approximate description uses firing rates and a temporal window H(t), with H(t) positive for positive t and negative for negative t:
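A reconstruction of the rate-based timing rule described here (the integral weighs past pre- and postsynaptic activity with the window H):

```latex
\tau_w \frac{d\mathbf{w}}{dt}
 = \int_0^{\infty} d\tau\,
   \Big( H(\tau)\, v(t)\,\mathbf{u}(t-\tau)
       + H(-\tau)\, v(t-\tau)\,\mathbf{u}(t) \Big).
```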

Timing-based plasticity and prediction. Consider an array of neurons labeled by a with receptive fields f_a(s) (dashed and solid curves) and a timing-based learning rule. The stimulus s moves from left to right.

Timing-based plasticity and prediction. If a lies to the left of b, then the connection from a to b is strengthened and the connection from b to a is weakened. The receptive field of neuron a becomes asymmetrically deformed (A, solid bold line). Prediction: on the next presentation of s(t), neuron a will be activated earlier, in agreement with the shift of the place-field mean observed when rats repeatedly run around a track (B).

Supervised Hebbian learning with weight decay. The asymptotic solution is proportional to the input-output cross-correlation:
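The missing equations, reconstructed in the book's notation (here v is the supervised target output and alpha the decay rate):

```latex
\tau_w \frac{d\mathbf{w}}{dt} = \langle v\,\mathbf{u}\rangle - \alpha\,\mathbf{w}
\qquad\Longrightarrow\qquad
\mathbf{w} \;\to\; \frac{\langle v\,\mathbf{u}\rangle}{\alpha}.
```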

Classification and the perceptron. If the output values are +/−1, the model implements a classifier, called the perceptron:
– v = +1 if w·u ≥ gamma and v = −1 otherwise, so the weight vector defines a separating hyperplane w·u = gamma
– The perceptron can solve problems that are 'linearly separable'
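A minimal perceptron sketch under these assumptions; the data set and learning-rate value are invented for illustration:

```python
import numpy as np

# Perceptron: v = +1 if w.u - gamma >= 0 else -1.  The classical learning
# rule updates (w, gamma) only on misclassified examples and converges in
# finite time when the data are linearly separable.
rng = np.random.default_rng(4)
n = 200
u = rng.normal(size=(n, 2))
labels = np.where(u @ np.array([1.0, -2.0]) + 0.5 > 0, 1.0, -1.0)  # separable by construction

w, gamma, eta = np.zeros(2), 0.0, 0.1
for _ in range(100):                            # epochs
    for x, t in zip(u, labels):
        v = 1.0 if w @ x - gamma >= 0 else -1.0
        if v != t:                              # update only on errors
            w += eta * t * x
            gamma -= eta * t

predictions = np.where(u @ w - gamma >= 0, 1.0, -1.0)
print("training accuracy:", np.mean(predictions == labels))
```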