Modular Neural Networks: SOM, ART, and CALM Jaap Murre University of Amsterdam University of Maastricht

Modular neural networks Why modularity? Kohonen's Self-Organizing Map (SOM) Grossberg's Adaptive Resonance Theory (ART) Categorizing And Learning Module, CALM (Murre, Phaf, & Wolters, 1992)

Modularity: limitations on connectivity [figure: the letters L, C, A, P, B and the words LAP, CAP, CAB]

Modularity Scalability Re-use in design and evolution Coarse steering of development; learning provides fine structure Improved generalization because of fewer connections Strong evidence from neurobiology

Self-Organizing Maps (SOMs) Topological Representations

Map formation in the brain Topographic maps are omnipresent in the sensory regions of the brain –retinotopic maps: neurons ordered according to the location of their receptive field on the retina –tonotopic maps: neurons ordered according to the tone frequency to which they are sensitive –maps in somatosensory cortex: neurons ordered according to the body part to which they are sensitive –maps in motor cortex: neurons ordered according to the muscles they control

Auditory cortex has a tonotopic map that is hidden in the transverse temporal gyrus

Somatosensory maps

Somatosensory maps II © Kandel, Schwartz & Jessell, 1991

Many maps show continued plasticity Reorganization of sensory maps in primate cortex

Kohonen maps Teuvo Kohonen was the first to show how such maps may develop: Self-Organizing Maps (SOMs) Demonstration: the ordering of colors (colors are vectors in a 3-dimensional space of brightness, hue, and saturation).

Kohonen algorithm Finding the activity bubble Updating the weights for the nodes in the active bubble

Finding the activity bubble Lateral inhibition

Finding the activity bubble II Find the winner Activate all nodes in the neighbourhood of the winner

Updating the weights Move the weight vector of the winner towards the input vector Do the same for the active neighbourhood nodes → weight vectors of neighbouring nodes will start resembling each other

Simplest implementation Weight vectors and input patterns all have length 1 (i.e., Σ_j (w_ij)² = 1) Find the node whose weight vector has minimal distance to the input vector: min_i Σ_j (a_j − w_ij)² Activate all nodes within neighbourhood radius N_t Update the weights of active nodes by moving them towards the input vector: Δw_ij = µ_t · (a_j − w_ij), i.e. w_ij(t+1) = w_ij(t) + µ_t · (a_j − w_ij(t))
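
A minimal sketch in Python of the update steps just listed, assuming a 2-D grid of nodes; the function and parameter names (som_step, eta_t, radius_t) are illustrative and not from the original slides:

```python
# Minimal sketch of one Kohonen update step (illustrative only).
import numpy as np

def som_step(weights, x, eta_t, radius_t):
    """One update of a SOM whose nodes lie on a 2-D grid.

    weights  : array of shape (rows, cols, dim), one weight vector per node
    x        : input vector of shape (dim,)
    eta_t    : learning rate at time t
    radius_t : neighbourhood radius N_t at time t
    """
    # Find the winner: the node whose weight vector is closest to the input.
    dist = np.sum((weights - x) ** 2, axis=-1)
    winner = np.unravel_index(np.argmin(dist), dist.shape)

    # Activate all nodes within the neighbourhood radius of the winner.
    rows, cols = np.indices(dist.shape)
    grid_dist = np.sqrt((rows - winner[0]) ** 2 + (cols - winner[1]) ** 2)
    active = grid_dist <= radius_t

    # Move the weight vectors of the active nodes towards the input:
    # w_ij(t+1) = w_ij(t) + eta_t * (a_j - w_ij(t))
    weights[active] += eta_t * (x - weights[active])
    return weights, winner
```

Decreasing eta_t and the radius N_t over the course of training, as discussed on the following slides, gives the usual ordered map.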

Results of Kohonen © Kohonen, 1982

Influence of neighbourhood radius © Kohonen, 1982 Larger neighbourhood size leads to faster learning

Results II: the phonetic typewriter © Kohonen, 1988 humppila (Finnish)

Conclusions for SOM Elegant Prime example of unsupervised learning Biologically relevant and plausible Very good at discovering structure: –discovering categories –mapping the input onto a topographic map

Adaptive Resonance Theory (ART) Stephen Grossberg (1976)

Grossberg’s ART Stability-Plasticity Dilemma How to disentangle overlapping patterns?

ART-1 Network

Phases of Classification (a) Initial pattern (b) Little support from F2 (c) Reset: second try starts (d) Different category (F2) node gives sufficient support: resonance
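
The search/reset cycle behind these four phases can be written compactly. Below is a hedged sketch using the standard textbook ART-1 equations (choice function, vigilance test, fast learning); it is not code from the original presentation, and names such as art1_present and vigilance are mine:

```python
# Sketch of the ART-1 search/reset cycle (standard textbook formulation).
import numpy as np

def art1_present(pattern, categories, vigilance=0.7, alpha=0.001):
    """Present one binary pattern; return the index of the resonating category."""
    I = pattern.astype(bool)
    # (a)-(b): compute bottom-up support (choice function) for every F2 node.
    scores = [np.sum(I & w) / (alpha + np.sum(w)) for w in categories]
    tried = set()
    while len(tried) < len(categories):
        # Pick the most active F2 node that has not been reset yet.
        j = max((k for k in range(len(categories)) if k not in tried),
                key=lambda k: scores[k])
        # Vigilance test: does the top-down prototype match the input well enough?
        match = np.sum(I & categories[j]) / max(np.sum(I), 1)
        if match >= vigilance:
            # (d): sufficient support -> resonance; refine the prototype (fast learning).
            categories[j] = I & categories[j]
            return j
        # (c): reset -- suppress this node and start a second try with another one.
        tried.add(j)
    # No stored category resonates: recruit a new F2 node for this pattern.
    categories.append(I.copy())
    return len(categories) - 1
```

Presenting binary patterns repeatedly to an initially empty category list builds up the prototypes; a higher vigilance parameter yields more, narrower categories.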

Categorizing And Learning Module (CALM) Murre, Phaf, and Wolters (1992)

CALM: Categorizing And Learning Module The CALM module is the basic unit in multi-modular networks It categorizes arbitrary input activation patterns and retains this categorization over time CALM was developed for unsupervised learning but also works with supervision Motivated by psychological, biological, and practical considerations

Important design principles in CALM Modularity Novelty dependent categorization and learning Wiring scheme inspired by neocortical minicolumn

Elaboration versus activation Novelty dependent categorization and learning derived from memory psychology (Graf and Mandler, 1984) –Elaboration learning: Active formation of new associations –Activation learning: Passive strengthening of pre-existing associations In CALM: the relative novelty of a pattern determines which type of learning occurs

How elaboration learning is implemented in CALM Novel pattern –> Much competition –> High activation of Arousal Node –> High activation of External Node –> High learning parameter –> High noise amplitude on Representation Nodes Elaboration learning drives: –Self-induced noise –Self-induced learning
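
To make the chain above concrete, here is a small illustrative sketch. It assumes competition is measured by the margin between the two most active representation nodes; the names (arousal, base_rate, noise_gain) are hypothetical and not the actual CALM parameters:

```python
# Illustrative sketch: unresolved competition raises arousal, which raises
# both the learning rate and the noise injected into the representation nodes.
import numpy as np

def modulation(rep_activations, base_rate=0.01, max_extra=0.05, noise_gain=0.1):
    # rep_activations: activations (in [0, 1]) of at least two representation nodes.
    # Much competition = several nodes about equally active, so the winner's
    # margin over the runner-up is small.
    a = np.sort(rep_activations)[::-1]
    competition = 1.0 - (a[0] - a[1])      # in [0, 1]
    arousal = competition                   # the Arousal Node tracks the competition
    learning_rate = base_rate + max_extra * arousal
    noise = noise_gain * arousal * np.random.uniform(-1, 1, size=rep_activations.shape)
    return learning_rate, noise
```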

Self-induced noise (cf. Boltzmann Machine) Non-specific activations from sub-cortical structures in cortex Optimal level of arousal for optimal learning performance (Yerkes-Dodson Law) Noise drives the search for new representations Noise breaks symmetry deadlocks Noise may lead to convergence in deeper attractors

Self-induced learning Possible role of hippocampus and basal forebrain (cf. the modulatory system in TraceLink) Shift from implicit to explicit memory A remedy for the Stability-Plasticity Dilemma

Stability-Plasticity Dilemma or the Problem of Real-Time Learning “How can a learning system be designed to remain plastic, or adaptive, in response to significant events and yet remain stable in response to irrelevant events?” (Carpenter and Grossberg, 1988, p. 77)

Novelty dependent categorization A novel pattern implies a search for new representations The search process is driven by novelty dependent noise

Novelty dependent learning Novel pattern: increased learning rate Old pattern: base-rate learning

Learning rule derived from Grossberg's ART Extension of the Hebb Rule Allows both increases and decreases in weight Only applied to excitatory connections (no sign changes allowed) Weights are bounded between 0 and 1 Allows separation of complex patterns from their constituent subpatterns In contrast to ART: the weight change is influenced by weighted neighbor activations

CALM Learning Rule Weight from node j to node i Neighbor activations k dampen the weight change

Learning rule
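
The learning-rule equation itself is given in Murre, Phaf, and Wolters (1992); as a stand-in, here is a hedged sketch of a Grossberg-style bounded Hebbian rule with the properties listed above (weights bounded, excitatory only, growth damped by the weighted activations of neighbouring nodes). The exact form and the constants K, L, and mu are placeholders, not the literal CALM equation:

```python
# Hedged sketch of a bounded, Grossberg-style Hebbian rule in the spirit of CALM.
import numpy as np

def calm_like_update(w, a_i, a_j, a_neighbors, w_neighbors, mu, K=1.0, L=1.0):
    """Update the weight w from sending node j to receiving node i.

    a_i, a_j    : activations of the receiving and sending node
    a_neighbors : activations a_k of the other sending nodes k != j
    w_neighbors : weights w_ik from those neighbours to node i
    mu          : (novelty-dependent) learning rate
    """
    # Weighted neighbour activity damps the weight change.
    neighbor_term = L * np.dot(w_neighbors, a_neighbors)
    # Growth is bounded by K; decay is driven by competing neighbour input.
    dw = mu * a_i * ((K - w) * a_j - w * neighbor_term)
    # Excitatory connection: the weight never becomes negative.
    return float(np.clip(w + dw, 0.0, K))
```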

Avoid neurobiologically implausible architectures, such as: random organization of excitatory and inhibitory connections; learning that may change a connection's sign; single nodes that give off both excitatory and inhibitory connections

Neurons form a dichotomy (Dale's Law) Neurons involved in long-range connections in cortex give off excitatory connections Inhibitory neurons in cortex give off only inhibitory connections

CALM: Categorizing And Learning Module By Murre, Phaf, & Wolters (1992)

Activation rule

Parameters Possible parameter values for the CALM module; they do not need to be adjusted for each new architecture

Parameter     CALM
Up weight     0.5
Down weight   -1.2
Cross weight  10.0
Flat weight   -1.0
High weight   -0.6
Low weight    0.4
AE weight     1.0
ER weight     0.25
wµE           0.05
k             0.05
K             1.0
L             1.0
d             0.01

Main processes in the CALM module

Inhibition between nodes Example: inhibition in CALM