The “Humpty Dumpty” problem


The “Humpty Dumpty” problem
(figure taken from Churchland and Sejnowski)
There are many levels of observation/description of the brain: how do we get them back together? This is where computational models are at their best.

Prelude: Self-Organization
genome: ~10^9 bits; brain: ~10^15 connections
→ thus, the genome can’t explicitly code the wiring pattern of the brain.
Where does the pattern come from? Some of it comes through learning from external inputs, but a lot of structure seems to develop before sensory input has a chance to shape the system. Where does this structure come from?

Self-Organization: “structure for free”
(figures taken from Haken)
“Street” patterns in clouds: the presence of water droplets facilitates condensation of more vapour.

Wave patterns in a chemical reaction
(figure taken from Haken)
The presence of one reaction product catalyzes the production of others.

The Bénard system
(figures taken from Kelso)
The presence of one convection cell stabilizes the presence of its neighbors.

Social insects: termite mounds
The activity of one termite encourages others to “follow its path”.
http://iridia.ulb.ac.be/~mdorigo/ACO/RealAnts.html

Principles of Self-Organization
In all the examples, structure emerges from interactions of simple components. In general, three ingredients are involved:
- positive feedback loops (self-amplification)
- cooperation between some elements
- limited resources: competition between elements
Let’s keep looking for these…

Organizing Principles for Learning in the Brain
- Associative learning: Hebb rule and variations, self-organizing maps
- Adaptive hedonism: the brain seeks pleasure and avoids pain (conditioning and reinforcement learning)
- Imitation: is the brain specially set up to learn from other brains? Imitation learning approaches
- Supervised learning: of course the brain has no explicit teacher, but the timing of development may lead to some circuits being trained by others

Unsupervised Hebbian Learning
Idea (D. Hebb, 1949): strengthening of connections between coactive units.
Physiological basis of Hebbian learning: long-term potentiation (LTP).
(figure taken from Dayan & Abbott)

Spike-timing-dependent plasticity (STDP): the temporal fine structure of correlations may be crucial! A synapse is strengthened if the pre-synaptic spike predicts the post-synaptic spike.
(figure taken from Dayan & Abbott)
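As a sketch, the standard pair-based STDP window can be written down in a few lines. All parameter values below (amplitudes, 20 ms time constants) are illustrative placeholders, not measured quantities:

```python
import numpy as np

# Hypothetical parameters; real values vary across synapses and studies.
A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants in ms

def stdp_window(dt_ms):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt_ms > 0:   # pre fires before post: pre "predicts" post, so potentiate
        return A_plus * np.exp(-dt_ms / tau_plus)
    else:           # post fires first: anti-causal pairing, so depress
        return -A_minus * np.exp(dt_ms / tau_minus)

print(stdp_window(10.0))   # positive: causal pairing strengthens the synapse
print(stdp_window(-10.0))  # negative: anti-causal pairing weakens it
```

The asymmetry around dt = 0 is what makes the rule sensitive to whether the pre-synaptic spike predicted the post-synaptic one.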

Network Self-Organization
Recall our three ingredients:
- positive feedback loops (self-amplification)
- cooperation between some elements
- limited resources: competition between elements
In a network, these form a loop between connection weights and activity patterns:
- correlated activity leads to weight growth (Hebb)
- weight growth leads to more correlated activity
- weight growth is limited due to competition

Single linear unit with simple Hebbian learning
(figure taken from Hertz et al.)
One of the simplest model neurons: a linear unit with output V = w · u.
Simple Hebbian learning: Δw = η V u, where η is the learning rate and the inputs u are drawn from some probability distribution.
The simple Hebb rule moves the weight vector in the direction of the current input; over time, the weight vector aligns with the principal eigenvector of the input correlation matrix.
Problem: the weights can grow without bounds; we need competition.
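A minimal numerical sketch of this setup, assuming 2-D Gaussian inputs with an anisotropic correlation structure (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic zero-mean 2-D inputs: largest variance along (1, 1)/sqrt(2).
n_samples = 5000
u = rng.standard_normal((n_samples, 2)) @ np.diag([2.0, 0.5])
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
u = u @ np.array([[c, -s], [s, c]]).T   # rotate the principal axis

eta = 0.001
w = rng.standard_normal(2) * 0.01
for x in u:
    v = w @ x            # linear unit: V = w . u
    w += eta * v * x     # simple Hebb rule: dw = eta * V * u

# The direction of w aligns with the principal eigenvector of Q = <u u^T>,
# but |w| grows without bound -- the competition problem.
Q = u.T @ u / n_samples
eigvals, eigvecs = np.linalg.eigh(Q)
principal = eigvecs[:, np.argmax(eigvals)]
alignment = abs(w @ principal) / np.linalg.norm(w)
print(alignment, np.linalg.norm(w))
```

Running this shows both effects at once: near-perfect alignment with the first principal component, and a weight norm that has exploded by orders of magnitude.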

Excursion: Correlation and Covariance
mean: ⟨u⟩; variance: ⟨(u − ⟨u⟩)²⟩
correlation matrix: Q = ⟨u uᵀ⟩
covariance matrix: C = ⟨(u − ⟨u⟩)(u − ⟨u⟩)ᵀ⟩ = Q − ⟨u⟩⟨u⟩ᵀ
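These quantities are straightforward to estimate from samples. A small check, using an input distribution with a deliberately nonzero mean so that correlation and covariance actually differ:

```python
import numpy as np

rng = np.random.default_rng(2)
# 3-D Gaussian samples shifted away from the origin (nonzero mean).
u = rng.standard_normal((1000, 3)) + np.array([5.0, 0.0, -2.0])

mean = u.mean(axis=0)                                # <u>
Q = (u[:, :, None] * u[:, None, :]).mean(axis=0)     # correlation matrix <u u^T>
C = Q - np.outer(mean, mean)                         # covariance = Q - <u><u>^T

# Sanity check against NumPy's own estimator (bias=True divides by N).
print(np.allclose(C, np.cov(u, rowvar=False, bias=True)))
```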

Oja’s rule
Problem: we don’t want the weights to grow without bounds, i.e., “what comes up must go down”.
Idea: subtract a term proportional to V² to limit weight growth: Δw = η V (u − V w).
This still leads to extraction of the first principal component of the input correlation matrix, but with a bounded weight vector.
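A sketch of Oja’s rule on a linear unit with correlated Gaussian inputs (parameters again illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated zero-mean 2-D inputs.
n_samples = 20000
u = rng.standard_normal((n_samples, 2)) @ np.array([[2.0, 0.3], [0.3, 0.5]])

eta = 0.002
w = rng.standard_normal(2) * 0.1
for x in u:
    v = w @ x
    w += eta * v * (x - v * w)   # Oja's rule: dw = eta * V * (u - V w)

# The V^2-proportional decay term bounds the weights: |w| settles near 1
# while w still aligns with the first principal component of Q = <u u^T>.
Q = u.T @ u / n_samples
eigvals, eigvecs = np.linalg.eigh(Q)
principal = eigvecs[:, np.argmax(eigvals)]
print(np.linalg.norm(w), abs(w @ principal))
```

Compared with the plain Hebb rule, the only change is the subtracted V²w term, yet it converts unbounded growth into convergence to a unit-norm principal-component extractor.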

(figure taken from Hertz et al.)

Other “Hebbian” learning rules:
- weight clipping
- subtractive or multiplicative normalization
- covariance rules: extract the principal eigenvector of the covariance matrix instead of the correlation matrix
- BCM rule (has some experimental support)
- Yuille’s “non-local” rule, converges to …
There are many, many ways to limit weight growth, sometimes with very distinct differences in the learned weights!
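To see how distinct the outcomes can be, here is a sketch comparing two of these constraints applied to the same plain Hebbian update on the same toy 2-D data (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
u_data = rng.standard_normal((10000, 2)) @ np.array([[2.0, 0.0], [0.6, 0.8]])

eta = 0.001

def train(constrain):
    """Plain Hebbian updates, with a growth constraint applied after each step."""
    w = np.array([0.5, 0.5])
    for x in u_data:
        w = w + eta * (w @ x) * x   # Hebb step: dw = eta * V * u
        w = constrain(w)            # competition / limited resource
    return w

# Multiplicative normalization: rescale so |w| = 1 (graded weights survive,
# and the direction tracks the principal eigenvector).
w_mult = train(lambda w: w / np.linalg.norm(w))

# Subtractive normalization with clipping: keep sum(w) near 1, weights in [0, 1].
# This variant tends to drive weights to the bounds (winner-take-all).
w_sub = train(lambda w: np.clip(w - (w.sum() - 1.0) / w.size, 0.0, 1.0))

print(w_mult, w_sub)  # same Hebb step, clearly different learned weights
```

Both runs see identical data and identical Hebbian increments; only the normalization differs, and so do the learned weights.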

Correlation vs. covariance rules
(figure taken from Dayan & Abbott)

Multiple output units: Principal Component Analysis
Oja (multi-unit): ΔW = η (V uᵀ − V Vᵀ W)
Sanger: ΔW = η (V uᵀ − LT[V Vᵀ] W), where LT[·] keeps only the lower-triangular part (including the diagonal)
Both rules recover the first eigenvectors of the correlation matrix; Sanger’s rule yields them individually and in order.
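A sketch of Sanger’s rule with two output units on 3-D inputs (illustrative parameters). The lower-triangular term is what decorrelates the units so that each one finds its own eigenvector rather than all collapsing onto the first:

```python
import numpy as np

rng = np.random.default_rng(3)

# 3-D inputs with three distinct variances along the coordinate axes.
n_samples = 30000
u = rng.standard_normal((n_samples, 3)) @ np.diag([3.0, 1.5, 0.5])

eta = 0.002
W = rng.standard_normal((2, 3)) * 0.1   # 2 output units; rows are weight vectors
for x in u:
    v = W @ x
    # Sanger: dW = eta * (v u^T - LT[v v^T] W), LT = lower triangular incl. diag.
    W += eta * (np.outer(v, x) - np.tril(np.outer(v, v)) @ W)

# The rows should converge to the first two eigenvectors of Q = <u u^T>.
Q = u.T @ u / n_samples
eigvals, eigvecs = np.linalg.eigh(Q)
order = np.argsort(eigvals)[::-1]
pc1, pc2 = eigvecs[:, order[0]], eigvecs[:, order[1]]
print(abs(W[0] @ pc1), abs(W[1] @ pc2))
```

Replacing `np.tril` with the full matrix `np.outer(v, v)` gives the symmetric multi-unit Oja rule, whose weight rows only span the principal subspace instead of matching individual eigenvectors.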

Applications of Hebbian Learning: Retinotopic Connections
Retina → Tectum
Retinotopic: neighboring retina cells project to neighboring tectum cells, i.e., the topology is preserved, hence “retinotopic”.
Question: how do retina units know where to project to?
Answer: retinal axons find the tectum through chemical markers, but the fine structure of the mapping is activity dependent.

Principal assumptions:
- local excitation of neighbors in retina and tectum
- Hebbian learning of the feed-forward weights
Simulation result: localized receptive fields in the tectum.
Hebbian learning models account for a wide range of experiments: half retina (the half-retina experiment), half tectum, graft rotation, …

Applications of Hebbian Learning: Ocular Dominance
Fixed lateral weights, Hebbian learning of the feedforward weights; the exact form of the weight competition is very important!
(figure taken from Dayan & Abbott)
Such models can qualitatively predict the effects of blocking input from one eye, etc.

Applications of Hebbian Learning: Visual Receptive Field Formation
(figure taken from Dayan & Abbott)
More about these things next week!

Applications of Hebbian Learning: Visual Cortical Map Formation
In primary visual cortex, retinal position, ocular dominance, and orientation selectivity are all mapped onto a 2-dimensional map.
(figure taken from the Tanaka lab web page)

The Elastic Net Model
Consider the different dimensions “retinal position”, “ocular dominance”, etc. as N scalar input dimensions. The selectivity of a unit a can then be described by its position in an N-dimensional space (e.g., retinal position × ocularity): its coordinates W_ab are its preferred values along each dimension b. A unit’s activity is modeled with a Gaussian around its preferred values.
Learning rule: each update (1) moves a unit’s weights in the direction of the input, weighted by the unit’s activity, and (2) moves its weights in the direction of its neighbors’ weights (the elastic term).
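A sketch of this kind of update, assuming a one-dimensional ring of units living in a 2-D feature space; the ring topology and all parameter values are simplifying choices for illustration, not those of any published elastic net model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ring of units in an abstract 2-D feature space (e.g. retinal position x ocularity).
n_units, dim = 50, 2
W = rng.uniform(-0.1, 0.1, size=(n_units, dim))   # feature-space positions W_ab

eta, beta, sigma = 0.05, 0.1, 0.2   # hypothetical learning/elasticity parameters

def elastic_net_step(W, u):
    # Gaussian "activity" of each unit around its preferred values, normalized.
    d2 = ((W - u) ** 2).sum(axis=1)
    e = np.exp(-d2 / (2 * sigma ** 2))
    e /= e.sum() + 1e-12
    # Term 1: move weights toward the input, weighted by the unit's activity.
    W = W + eta * e[:, None] * (u - W)
    # Term 2: elasticity pulls each unit toward its two ring neighbors.
    neigh = 0.5 * (np.roll(W, 1, axis=0) + np.roll(W, -1, axis=0))
    return W + beta * (neigh - W)

for _ in range(2000):
    u = rng.uniform(-1, 1, size=dim)   # inputs uniform over the feature square
    W = elastic_net_step(W, u)

# After training, neighboring units should have similar selectivities (smooth map).
gaps = np.linalg.norm(np.diff(W, axis=0), axis=1)
print(gaps.mean())
```

The balance between the two terms is the essence of the model: the input term spreads the units' selectivities over the feature space, while the elastic term keeps neighboring units similar, producing a smooth map.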

Elastic Net Modeling of Pinwheels
A simple elastic net model reproduces the pinwheel patterns found in an optical imaging study of V1 in macaque: the learning rule moves the units’ selectivities through an abstract feature space.
(figure taken from Dayan & Abbott)

A more complex model: RF-LISSOM (Receptive-Field Laterally Interconnected Synergetically Self-Organizing Map).
(figures taken from http://www.cs.texas.edu/users/nn/web-pubs/htmlbook96/sirosh/)

LISSOM demo: the model learns both feedforward and lateral weights; the inputs are elongated blobs. (The simulation needs the power of a supercomputer.)

Summary: Hebbian Learning Models in Development
There are many, many ways of modeling Hebbian learning: correlation- and covariance-based, timing-dependent, etc. Hebbian learning models at different levels of abstraction have been applied to modeling developmental phenomena, e.g.:
- the emergence of retinotopic connections
- the development of ocular dominance
- the emergence of visual receptive fields
- cortical map formation
Typically, the inputs to the network are assumed to have a simple, static distribution. There is hardly any work on learning agents that interact with their environment and thereby influence the statistics of their sensory input (recall the kitten experiments we read about).

Four questions to discuss/think about
1. There are different levels of abstraction, e.g., compartmental models, spiking point neurons learning with STDP, simple rate-coding neuron models, abstract SOM-like models. What is the right level of abstraction?
2. Even at one level of abstraction there are many different “Hebbian” learning rules. Is it important which one you use? What is the right one?
3. The applications we discussed considered networks passively receiving sensory input and learning to code it. How can we model learning through interaction with the environment? Why might it be important to do so?
4. The problems we considered so far are very “low-level”, with no hint of “behaviors” (like those observed in an infant preferential-looking task) yet. How can we bridge this huge divide?