Computational Intelligence: Methods and Applications

Lecture 9: Self-Organized Mappings
Włodzisław Duch, Dept. of Informatics, UMK (Google: W Duch)

Brain maps
Tactile, motor, and olfactory data are the most basic. Animal brains analyze such data using the topographical organization of the brain cortex: somatosensory maps for tactile, temperature, pain, itching, and vibration signals; motor maps in the frontal neocortex and cerebellar cortex; auditory tonotopic maps in the temporal cortex; visual orientation maps in the primary visual cortex; and multimodal orientation maps in the superior colliculus.

Senso-motoric map
Visual signals are analyzed by maps that are coupled with motor maps, providing senso-motoric responses. Figure from: P.S. Churchland, T.J. Sejnowski, The Computational Brain. MIT Press, 1992.

Somatosensory and motor maps

Representation of fingers
(Figure: cortical representation of the hand and face, before and after stimulation.)

Models of self-organization
SOM or SOFM (Self-Organized Feature Map) is one of the simplest models of self-organization. How can such maps develop spontaneously? Local neural connections: neurons interact strongly with those nearby, but weakly with those that are far away (in addition, some intermediate neurons are inhibited). History: von der Malsburg and Willshaw (1976), competitive learning, Hebb mechanisms, "Mexican hat" interactions, models of visual systems. Amari (1980): models of continuous neural tissue. Kohonen (1981): simplification, no inhibition; only two essential factors are left, competition and cooperation.

Self-Organized Map: idea
Data: vectors X^T = (X_1, ..., X_d) from a d-dimensional space. A grid of nodes, with a local processor (called a neuron) in each node. Local processor j has d adaptive parameters W(j). Goal: change the W(j) parameters so that they recover the data clusters in the X space.
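A minimal sketch of this setup in Python (the grid shape, data dimension, and initialization scale are illustrative assumptions, not values from the lecture):

```python
import numpy as np

d = 3                    # dimension of the input (feature) space
grid = (10, 10)          # 2D grid of nodes; 1D or 3D grids work the same way
K = grid[0] * grid[1]    # number of local processors (neurons)

# One d-dimensional adaptive parameter vector W(j) per node, small random start.
W = 0.01 * np.random.rand(K, d)

# Physical positions r_j of the nodes on the grid, used later to decide
# which nodes count as neighbors of the winner.
r = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float)
```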

SOM algorithm: competition
Nodes calculate the similarity of the input data to their parameters. The input vector X is compared to the node parameters W. "Similar" means minimal distance or maximal scalar product. Competition: find the node j = c with W most similar to X. This node is the winner, and it will learn to become even more similar to X; hence this is a "competitive learning" procedure. Brain analogy: the neurons that react to a signal pick it up and learn.
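In code, the competition step reduces to an argmin over distances. A sketch matching the arrays from the previous snippet (Euclidean distance is assumed; the maximal scalar product would work as well):

```python
import numpy as np

def winner(X, W):
    """Competition step: index c of the node whose parameters W(c) are closest to X."""
    dists = np.linalg.norm(W - X, axis=1)   # distance from X to every node's parameters
    return int(np.argmin(dists))            # the winner node c
```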

SOM algorithm: cooperation
Cooperation: nodes on the grid close to the winner c should behave similarly. Define the "neighborhood function" O(c), with:
t – iteration number (or time);
r_c – position of the winning node c (in the physical grid space, usually 2D);
||r − r_c|| – distance from the winning node, scaled by σ_c(t);
h_0(t) – a slowly decreasing multiplicative factor.
The neighborhood function determines how strongly the parameters of the winning node and of the nodes in its neighborhood are changed, making them more similar to the data X.
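The formula itself is an image on the original slide; a commonly used form consistent with the parameters listed above is a Gaussian neighborhood (a reconstruction, not necessarily the slide's exact expression):

h(r, r_c, t) = h_0(t) · exp( −||r − r_c||² / σ_c(t)² )

with both h_0(t) and σ_c(t) decreasing slowly as the iteration number t grows.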

SOM algorithm: dynamics
Adaptation rule: take the winner node c and the nodes in its neighborhood O(r_c), and change their parameters to make them more similar to the data X. Select a new sample vector X at random and repeat. Decrease h_0(t) slowly until the parameters stop changing. Result: W(i) ≈ the centers of local clusters in the X feature space, and nodes that are neighbors on the grid point to adjacent areas in X space.
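A standard Kohonen update consistent with this description (again a reconstruction rather than the slide's own equation):

W(i)(t+1) = W(i)(t) + h(r_i, r_c, t) · [X − W(i)(t)]   for all nodes i in the neighborhood O(r_c),

so each affected node moves a fraction of the way towards the current sample X, with the fraction given by the neighborhood function.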

SOM algorithm
Data: X^T = (X_1, X_2, ..., X_d), samples from the feature space.
Create a grid with nodes i = 1...K in 1D, 2D, or 3D; each node has a d-dimensional vector W(i)^T = (W_1(i), W_2(i), ..., W_d(i)), with W(i) = W(i)(t) changing in discrete time t.
Initialize: small random W(i)(0) for all i = 1...K; define the parameters of the neighborhood function h(||r_i − r_c|| / σ(t), t).
Iterate: select an input vector X at random; calculate the distances d(X, W(i)) and find the winner node W(c), the one closest (most similar) to X; update the weights of all neurons in the neighborhood O(r_c); decrease the influence h_0(t) and shrink the neighborhood σ(t).
If in the last T steps all W(i) changed by less than ε, stop.
A runnable sketch of this loop follows.
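A compact sketch of the full training loop. The grid shape, parameter values, stopping threshold, and the Gaussian form of the neighborhood are illustrative assumptions, not values prescribed by the lecture:

```python
import numpy as np

def train_som(data, grid=(10, 10), t_max=10_000,
              h0_i=0.5, h0_f=0.01, s_i=5.0, s_f=0.5, eps=1e-4, T=500):
    """Minimal SOM training loop following the steps above (illustrative defaults)."""
    d = data.shape[1]
    K = grid[0] * grid[1]
    rng = np.random.default_rng(0)
    W = 0.01 * rng.random((K, d))                         # small random W(i)(0)
    r = np.array([(i, j) for i in range(grid[0])
                  for j in range(grid[1])], dtype=float)  # node positions on the grid

    max_change = 0.0
    for t in range(t_max):
        frac = t / t_max
        h0 = h0_i * (h0_f / h0_i) ** frac                 # slowly decreasing influence h0(t)
        s = s_i * (s_f / s_i) ** frac                     # shrinking neighborhood size s(t)

        X = data[rng.integers(len(data))]                 # select a random sample X
        c = np.argmin(np.linalg.norm(W - X, axis=1))      # competition: winner node c
        h = h0 * np.exp(-np.sum((r - r[c]) ** 2, axis=1) / s ** 2)  # Gaussian neighborhood

        dW = h[:, None] * (X - W)                         # move W(i) towards X
        W += dW

        max_change = max(max_change, float(np.abs(dW).max()))
        if (t + 1) % T == 0:                              # stop if all changes in the last
            if max_change < eps:                          # T steps stayed below eps
                break
            max_change = 0.0
    return W, r
```

For example, train_som(np.random.rand(1000, 2)) should reproduce the "2D => 2D, square" behavior shown below, with the codebook vectors spreading out to cover the unit square.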

1D network, 2D data
(Figure: processors arranged in a 1D array, and their positions W(i) in the 2D feature space.)

2D network, 3D data
(Figure: a 2D grid of processors with network parameters W(i), shown together with the data points in the 3D (x, y, z) feature space.)

Training process
Java demos: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

2D => 2D, square
Initially all W(i)(0) point to the center of the 2D space, but over time they learn to point at adjacent positions, covering the data with a uniform distribution.

2D => 1D in a triangle
The line in the data space forms a Peano curve, an example of a fractal. Why?

Map distortions
Initial distortions may slowly disappear or may get frozen, giving the user a completely distorted view of reality.

Learning constant
Large learning constants: points on the map move constantly and stabilization is slow. A uniform distribution of data points within the torus leads to maps with a uniform distribution of parameters (codebook vectors).

Demonstrations with GNG
Growing Self-Organizing Networks demo. Parameters in the SOM program:
t – iterations;
ε(t) = ε_i (ε_f / ε_i)^(t/t_max) to reduce the learning step;
σ(t) = σ_i (σ_f / σ_i)^(t/t_max) to reduce the neighborhood size.
Try some 1x30 maps to see the formation of Peano curves.
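A tiny helper implementing this exponential schedule (the function name is illustrative, not taken from the demo program):

```python
def exp_decay(v_i, v_f, t, t_max):
    """Exponential schedule: equals v_i at t = 0 and reaches v_f at t = t_max."""
    return v_i * (v_f / v_i) ** (t / t_max)
```

The same function covers both ε(t) and σ(t); only the initial and final values differ.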