Part 3: Hebbian Learning and the Development of Maps
Jochen Triesch, UC San Diego

Outline:
- kinds of plasticity
- Hebbian learning rules
- weight normalization
- intrinsic plasticity
- spike-timing dependent plasticity
- developmental models based on Hebbian learning
- self-organizing maps

Slide 2: A Taxonomy of Learning Settings
In order of increasing amount of "help" from the environment:
- Unsupervised
- Self-supervised
- Reinforcement
- Imitation
- Instruction
- Supervised

Slide 3: Unsupervised Hebbian Learning
Hebb's idea (1949): if the firing of neuron A contributes to the firing of neuron B, strengthen the connection from A to B.
Possible uses (after Hertz, Krogh, Palmer):
- Familiarity detection
- Principal Component Analysis
- Clustering
- Encoding
(Photo: Donald Hebb)

Slide 4: Network Self-organization
Three ingredients for self-organization:
- positive feedback loops (self-amplification)
- limited resources, leading to competition between elements
- cooperation between some elements possible
For Hebbian learning:
- correlated activity leads to weight growth (Hebb)
- weight growth leads to more correlated activity
- weight growth is limited due to competition
(Figure: feedback loop between connection weights and activity patterns)

Slide 5: Long-term potentiation (LTP) and long-term depression (LTD)
- observed in neocortex, cerebellum, hippocampus, …
- require paired pre- and postsynaptic activity

Slide 6: Local Learning
Biological plausibility of learning rules: we want a "local" learning rule. The weight change should be computed only from information that is locally available at the synapse:
- pre-synaptic activity
- post-synaptic activity
- strength of this synapse
- …
But not:
- any other pre-synaptic activity
- any other weight
- …
Also: the weight change should depend only on information available at that time; knowledge of the detailed history of pre- and post-synaptic activity is not plausible (locality in the time domain).
These requirements are problems for many learning rules derived from information-theoretic ideas.

Slide 7: Single linear unit
- output: v = w · u = Σ_i w_i u_i
- inputs u are drawn from some probability distribution
- simple Hebbian learning: Δw_i = η v u_i, where η is the learning rate
- the simple Hebb rule moves the weight vector in the direction of the current input
- frequent inputs have a bigger impact on the resulting weight vector: familiarity
- Problem: weights can grow without bounds, need competition …
This is called correlation-based learning, because the average weight change is proportional to the correlation between pre- and post-synaptic activity: ⟨Δw_i⟩ = η ⟨v u_i⟩ = η Σ_j ⟨u_i u_j⟩ w_j.
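
A minimal sketch of the simple Hebb rule above for a single linear unit (Python/NumPy; the two-dimensional Gaussian input ensemble and all parameter values are made-up assumptions). It illustrates both points: the weight vector aligns with the dominant input direction (familiarity), and its norm keeps growing without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                           # learning rate
w = rng.normal(size=2) * 0.1         # initial weights of the linear unit

# hypothetical zero-mean input ensemble, elongated along the (1, 1) direction
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])

for step in range(2000):
    u = rng.multivariate_normal([0.0, 0.0], cov)
    v = w @ u                        # linear unit: v = w . u
    w += eta * v * u                 # simple Hebb rule: delta w_i = eta * v * u_i

print("weight direction:", w / np.linalg.norm(w))   # close to (1, 1) / sqrt(2)
print("weight norm:", np.linalg.norm(w))            # keeps growing with more steps
```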

Slide 8: Reminder: Correlation and Covariance
With ⟨·⟩ denoting the average over the stimulus ensemble:
- correlation: Q_ij = ⟨u_i u_j⟩
- covariance: C_ij = ⟨(u_i − ⟨u_i⟩)(u_j − ⟨u_j⟩)⟩
- mean: ⟨u_i⟩
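
A quick illustration (Python/NumPy, made-up sample data) of estimating these ensemble averages from samples; Q is the input correlation matrix used in the following slides.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(loc=2.0, scale=1.0, size=(10000, 3))   # hypothetical inputs, one row per stimulus

u_mean = U.mean(axis=0)              # mean <u_i>
Q = (U.T @ U) / len(U)               # correlation matrix  Q_ij = <u_i u_j>
C = Q - np.outer(u_mean, u_mean)     # covariance matrix   C_ij = <(u_i - <u_i>)(u_j - <u_j>)>

print(u_mean)
print(Q)
print(C)
```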

Slide 9: Continuous time formulation
τ_w dw/dt = v u
Averaging across the stimulus ensemble (with v = w · u):
τ_w dw/dt = ⟨v u⟩ = Q w, where Q_ij = ⟨u_i u_j⟩ is the input correlation matrix.
Another good reason to call this a correlation-based rule!

Slide 10: Covariance rules
The simple Hebbian rule only allows for weight growth (pre- and post-synaptic firing rates are non-negative numbers): it gives no account of LTD. Covariance rules are one way of fixing this, e.g. by subtracting a threshold θ from the pre-synaptic activity:
τ_w dw/dt = v (u − θ)
Averaging the right-hand side over the stimulus ensemble and using the mean input as the threshold, θ = ⟨u⟩, gives:
τ_w dw/dt = C w, where C_ij = ⟨(u_i − ⟨u_i⟩)(u_j − ⟨u_j⟩)⟩,
justifying the name "covariance" rule!
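
A hedged sketch (Python/NumPy; the gamma-distributed "firing rate" inputs and all parameters are illustrative assumptions) showing that the covariance rule, unlike the simple Hebb rule, produces depressing (negative) weight updates whenever the pre-synaptic activity falls below its mean.

```python
import numpy as np

rng = np.random.default_rng(2)
eta = 0.001
w = np.array([0.1, 0.1])                              # positive initial weights

U = rng.gamma(shape=2.0, scale=1.0, size=(2000, 2))   # non-negative "firing rate" inputs
u_mean = U.mean(axis=0)

n_ltd = 0
for u in U:
    v = w @ u
    dw = eta * v * (u - u_mean)      # covariance rule (threshold on the pre-synaptic input)
    n_ltd += int(np.sum(dw < 0))     # the simple Hebb update eta * v * u is never negative here
    w += dw

print("final weights:", w)
print("number of depressing weight updates:", n_ltd)
```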

Slide 11: Illustration of correlation and covariance rules
Figure:
A. either rule on zero-mean data
B. correlation rule for non-zero-mean data
C. covariance rule for non-zero-mean data
Note: the weight vector will grow without bounds with both rules.

Slide 12: The need to limit weight growth
Simple Hebb and covariance rules are unstable: they lead to unbounded weight growth.
- Remedy 1: weight clipping: don't let a weight grow above/below certain limits w_max / w_min
- Remedy 2: some form of weight normalization
  - Multiplicative: subtract something from each weight w_i that is proportional to that weight w_i
  - Subtractive: subtract something from each weight w_i that is the same for all weights w_i. This leads to strong competition between weights, typically resulting in all weights going to w_max or w_min.

Slide 13: Weight normalization
Two most popular schemes:
- sum of weights equals one (all weights assumed positive): Σ_i w_i = 1
- sum of squared weights equals one: Σ_i w_i² = 1
Idea: force the weight vector to lie on a "constraint surface".

Slide 14: Subtractive vs. Multiplicative
Different ways of going back to the constraint surface.
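
A minimal sketch (Python/NumPy, made-up zero-mean correlated inputs) contrasting the two ways of returning to a constraint surface after each Hebbian step: multiplicative rescaling onto the sphere Σ_i w_i² = 1, and subtractive correction onto the plane Σ_i w_i = 1 combined with the clipping limits from slide 12. The subtractive scheme shows the strong competition described there, while the multiplicative scheme keeps a graded weight vector.

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 0.01
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])          # hypothetical correlated, zero-mean inputs

w_mult = np.array([0.6, 0.4])
w_sub = np.array([0.6, 0.4])

for _ in range(5000):
    u = rng.multivariate_normal([0.0, 0.0], cov)

    # multiplicative normalization: rescale so the sum of squared weights stays 1
    w_mult += eta * (w_mult @ u) * u
    w_mult /= np.linalg.norm(w_mult)

    # subtractive normalization: subtract the same amount from every weight so the
    # sum of weights stays 1, then clip to [w_min, w_max] = [0, 1]
    w_sub += eta * (w_sub @ u) * u
    w_sub -= (w_sub.sum() - 1.0) / len(w_sub)
    w_sub = np.clip(w_sub, 0.0, 1.0)

print("multiplicative:", w_mult)      # graded weight vector on the unit circle
print("subtractive:   ", w_sub)       # extreme solution: one weight near 1, the other near 0
```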

Slide 15: Activity-Dependent Synaptic Normalization
Idea: a neuron may try to maintain a certain average firing rate (homeostatic plasticity).
This can be combined with both multiplicative and subtractive constraints.
(Turrigiano and Nelson, Nature Rev. Neurosci. 5, 2004)

Slide 16: Turrigiano and Nelson, Nature Rev. Neurosci. 5 (2004)
Observation:
- reducing activity leads to changes causing increased spontaneous activity
- increasing activity leads to changes causing reduced spontaneous activity

Slide 17: Turrigiano and Nelson, Nature Rev. Neurosci. 5 (2004)
Observation: synaptic strengths (measured as mEPSCs: miniature excitatory postsynaptic currents, each caused by release of a single transmitter vesicle) are scaled multiplicatively.

Slide 18: Turrigiano and Nelson, Nature Rev. Neurosci. 5 (2004)
Observation: similar changes occur at inhibitory synapses, consistent with the homeostasis idea.

Slide 19: Applications of Hebbian Learning: Ocular Dominance
- fixed lateral weights, Hebbian learning of the feedforward weights
- the exact form of weight competition is very important!
- such models can qualitatively predict the effects of blocking input from one eye, etc.
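
A toy sketch in the spirit of such models (Python/NumPy; the correlation values, the reduction to a single cortical unit without explicit lateral weights, and all parameters are assumptions for illustration): two inputs per eye, stronger correlations within an eye than between eyes, Hebbian growth of the feedforward weights, and subtractive normalization with clipping as the form of weight competition. One eye's weights win, i.e. the unit becomes dominated by a single eye.

```python
import numpy as np

rng = np.random.default_rng(4)
eta = 0.005
w_max = 1.0

# hypothetical input correlations: inputs 0,1 from the left eye, 2,3 from the right eye,
# strong within-eye and weak between-eye correlation
Q = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])

w = rng.uniform(0.4, 0.6, size=4)      # feedforward weights onto one cortical unit

for _ in range(4000):
    u = rng.multivariate_normal(np.zeros(4), Q)
    v = w @ u                          # response of the cortical unit
    dw = eta * v * u                   # Hebbian growth of the feedforward weights
    dw -= dw.mean()                    # subtractive normalization: total weight conserved
    w = np.clip(w + dw, 0.0, w_max)    # clipping limits

print("left-eye weights: ", w[:2])     # one eye's weights end up near w_max,
print("right-eye weights:", w[2:])     # the other's near 0 (which eye wins depends on the start)
```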

Slide 20: Oja's and Yuille's rules
Idea: subtract a term proportional to v² to limit weight growth; this is a special form of multiplicative normalization.
- Oja: Δw_i = η (v u_i − v² w_i), leads to extraction of the first principal component
- Yuille: Δw_i = η (v u_i − |w|² w_i), "non-local" (the update of w_i depends on the whole weight vector)
- …and some more
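
A minimal sketch of Oja's rule (Python/NumPy; the synthetic two-dimensional input ensemble and parameters are assumptions). The normalized weight vector should match the first principal component of the input correlation matrix, up to sign.

```python
import numpy as np

rng = np.random.default_rng(5)
eta = 0.005
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])          # hypothetical zero-mean inputs, so Q = cov

w = rng.normal(size=2) * 0.1

for _ in range(20000):
    u = rng.multivariate_normal([0.0, 0.0], cov)
    v = w @ u
    w += eta * (v * u - v**2 * w)     # Oja: Hebbian growth minus a term proportional to v^2

pc1 = np.linalg.eigh(cov)[1][:, -1]   # leading eigenvector of Q
print("Oja weights (normalized):", w / np.linalg.norm(w))
print("first principal component:", pc1)   # should agree up to sign
```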

Slide 21: Hebbian Learning Rules for Principal Component Analysis
- Oja (multi-unit version): Δw_ij = η (v_i u_j − v_i Σ_k v_k w_kj)
- Sanger: Δw_ij = η (v_i u_j − v_i Σ_{k≤i} v_k w_kj)
Both rules give the first eigenvectors of the correlation matrix.
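
A hedged sketch of Sanger's rule (Python/NumPy; the diagonal input correlation matrix and all parameters are assumptions) with two output units; each row of W should approach one of the two leading eigenvectors, up to sign.

```python
import numpy as np

rng = np.random.default_rng(6)
eta = 0.005
cov = np.diag([4.0, 2.0, 0.5])        # hypothetical input correlations; eigenvectors are the coordinate axes

n_out, n_in = 2, 3
W = rng.normal(size=(n_out, n_in)) * 0.1

for _ in range(30000):
    u = rng.multivariate_normal(np.zeros(n_in), cov)
    v = W @ u
    for i in range(n_out):
        # unit i unlearns the part of u already explained by units k <= i
        W[i] += eta * v[i] * (u - v[:i + 1] @ W[:i + 1])

print(W)   # rows approach roughly [±1, 0, 0] and [0, ±1, 0]
```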

Slide 22: Trace rules, slowness, and temporal coherence
- Alternative goal for coding: find slowly varying sources.
- Motivation: pixel brightness changes quickly on the retina due to shifts, rotations, changes in lighting, etc., but the identity of the person in front of you stays the same for a prolonged time.
- Invariance learning: we want filters that are invariant with respect to such transformations; the output should vary slowly. Note: these invariances always require non-linear filters (linear ICA won't help).
- Trace rule (Foldiak, 1990): force the unit to be slow + Hebbian learning.
- Models for the development of complex cells in V1: "keep the unit active while an edge moves across the receptive field".
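
A minimal sketch of the mechanics of a Foldiak-style trace rule (Python/NumPy; the exact update, the trace parameter δ, and the sweeping-edge stimulus are illustrative assumptions rather than the slide's model). The Hebbian term uses a low-pass filtered trace of the post-synaptic activity, so an edge sweeping across the inputs keeps the unit "on" and strengthens connections from all swept positions.

```python
import numpy as np

eta, delta = 0.05, 0.2              # learning rate and trace update factor (assumed values)
n_in = 8
w = np.full(n_in, 0.1)              # feedforward weights
v_trace = 0.0                       # activity trace of the post-synaptic unit

# hypothetical stimulus: an "edge" sweeping across the 8 inputs, over and over
for sweep in range(200):
    for pos in range(n_in):
        u = np.zeros(n_in)
        u[pos] = 1.0                                    # edge at the current position
        v = w @ u                                       # post-synaptic activity
        v_trace = (1 - delta) * v_trace + delta * v     # low-pass filtered trace
        w += eta * v_trace * (u - w)                    # trace rule: Hebbian term uses the trace

print(w)   # weights are spread roughly evenly across all swept positions
```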

Slide 23: Spike-timing dependent plasticity
- pre-synaptic spike before post-synaptic spike: potentiation
- pre-synaptic spike after post-synaptic spike: depression
- "predictive learning"
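
A minimal sketch (Python/NumPy; the amplitudes and time constants are assumed values, not from the slide) of a standard pair-based STDP window applied to lists of pre- and post-synaptic spike times: each pre-before-post pairing potentiates the synapse, each post-before-pre pairing depresses it, with an exponentially decaying dependence on the spike-time difference.

```python
import numpy as np

# assumed STDP parameters (illustrative)
A_plus, A_minus = 0.01, 0.012     # amplitudes of potentiation / depression
tau_plus, tau_minus = 20.0, 20.0  # time constants in ms

def stdp_weight_change(pre_spikes, post_spikes):
    """Total weight change from all pre/post spike pairs (pair-based STDP)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post: potentiation
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:    # post before pre: depression
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

# example: pre-synaptic spikes that consistently precede post-synaptic spikes by 10 ms
print(stdp_weight_change([0.0, 100.0], [10.0, 110.0]))   # > 0: net potentiation
```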

Jochen Triesch, UC San Diego, 24 A: simulated place field shift in model (light gray curve to heavy black curve) B: predictive learning in rat hippocampal place cell: during successive laps, the place field of a neuron shifts to earlier and earlier locations along the track