SOM
[Title-slide figure: a threshold unit with inputs x0 … xn, weights w0 … wn and output o]

Self-Organizing Maps

[Figure: a set of inputs fully connected to a lattice of neurons; portrait of Teuvo Kohonen]

Self-Organizing Maps: Origins
Ideas first introduced by C. von der Malsburg (1973), developed and refined by T. Kohonen (1982)
Neural network algorithm using unsupervised competitive learning
Primarily used for organization and visualization of complex data
Biological basis: 'brain maps'

Self-Organizing Maps: SOM Architecture
A lattice (2d array) of neurons ('nodes') accepts and responds to a set of input signals
Responses are compared; a 'winning' neuron is selected from the lattice
The selected neuron is activated together with its 'neighbourhood' neurons
An adaptive process changes the weights to more closely resemble the inputs
[Figure: a 2d array of neurons; a set of input signals x1 x2 x3 … xn connected to all neurons in the lattice through the weighted synapses wj1 wj2 wj3 … wjn of neuron j]

Self-Organizing Maps: Algorithm Overview
1. Randomly initialise all weights
2. Select an input vector x = [x1, x2, x3, … , xn]
3. Compare x with the weights wj of each neuron j to determine the winner
4. Update the winner so that it becomes more like x, together with the winner's neighbours
5. Adjust the parameters: learning rate & 'neighbourhood function'
6. Repeat from (2) until the map has converged (i.e. no noticeable changes in the weights) or a pre-defined number of training cycles has passed
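
The loop below is a minimal plain-MATLAB sketch of these steps (not from the original slides): X is an N-by-n matrix of input vectors (one per row), W an M-by-n weight matrix (one row per map neuron), pos an M-by-2 matrix of grid positions, and nCycles the number of training cycles. All of these names and the decay schedules are assumptions for illustration.

alpha0 = 0.5; sigma0 = 3;                        % assumed initial learning rate and radius
for t = 1:nCycles
    alpha = alpha0*(1 - t/nCycles);              % step 5: decaying learning rate
    sigma = 1 + (sigma0 - 1)*(1 - t/nCycles);    % step 5: shrinking neighbourhood
    x = X(randi(size(X,1)), :);                  % step 2: pick an input vector at random
    dW = W - repmat(x, size(W,1), 1);
    [~, win] = min(sum(dW.^2, 2));               % step 3: winner = nearest weight vector
    dg = pos - repmat(pos(win,:), size(pos,1), 1);
    h  = exp(-sum(dg.^2, 2)/(2*sigma^2));        % Gaussian 'neighbourhood function'
    W  = W + alpha*repmat(h, 1, size(W,2)).*(repmat(x, size(W,1), 1) - W);   % step 4
end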

Initialisation Randomly initialise the weights
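
For example, the weights and grid positions used in the sketch above could be initialised as follows (the map size and input dimension are arbitrary assumptions):

rng(0);                           % fix the random seed for reproducibility
mapRows = 10; mapCols = 10;       % assumed 10-by-10 map
n = 3;                            % assumed input dimension
W = rand(mapRows*mapCols, n);     % random initial weights, one row per neuron
[gx, gy] = meshgrid(1:mapCols, 1:mapRows);
pos = [gx(:) gy(:)];              % grid position of each neuron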

Finding a Winner
Find the best-matching neuron w(x), usually the neuron whose weight vector has the smallest Euclidean distance from the input vector x
The winning node is the one that is, in some sense, 'closest' to the input vector
'Euclidean distance' is the straight-line distance between the data points, if they were plotted on a (multi-dimensional) graph
The Euclidean distance between two vectors a and b, a = (a1, a2, …, an) and b = (b1, b2, …, bn), is calculated as:
d(a, b) = sqrt((a1 - b1)^2 + (a2 - b2)^2 + … + (an - bn)^2)
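
In the notation of the sketch above, the winner for an input vector x can be found with:

d = sqrt(sum((W - repmat(x, size(W,1), 1)).^2, 2));   % Euclidean distance from x to every weight vector
[~, winner] = min(d);                                 % index of the best-matching neuron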

Weight Update
SOM weight update equation:
wj(t+1) = wj(t) + α(t) · Θj,x(t) · [x − wj(t)]
where α(t) is the current learning rate and Θj,x(t) is the degree of neighbourhood of node j with respect to the winner for input x.
"The weights of every node are updated at each cycle by adding (current learning rate) × (degree of neighbourhood with respect to the winner) × (difference between the current weights and the input vector) to the current weights"
[Figure: example of α(t), the learning rate decaying with the number of cycles; example of Θj,x(t), where the x-axis shows the distance from the winning node and the y-axis shows the 'degree of neighbourhood' (max. 1)]
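
For a single node j, this equation translates almost literally into code; alpha and theta(j) stand for α(t) and Θj,x(t), and all names are the assumed ones from the sketch above:

W(j,:) = W(j,:) + alpha * theta(j) * (x - W(j,:));    % w_j(t+1) = w_j(t) + alpha * Theta * (x - w_j(t))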

Kohonen’s Algorithm jth input Winner ith

Neighborhoods
[Figure: square and hexagonal grids with neighborhoods based on box distance; grid lines are not shown]

[Figure: the neighborhood of neuron i in a one-dimensional and in a two-dimensional lattice]

A neighborhood function phi(i, k) indicates how closely neurons i and k in the output layer are connected to each other. Usually, a Gaussian function of the distance between the two neurons in the layer is used:
phi(i, k) = exp(-||pi - pk||^2 / (2*sigma^2))
where pi is the position of neuron i and pk is the position of neuron k in the output layer.
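
As a one-line MATLAB sketch (pos holds the layer positions of the neurons and sigma the neighbourhood width, both assumed as above):

phi = @(i, k, sigma) exp(-sum((pos(i,:) - pos(k,:)).^2) / (2*sigma^2));   % Gaussian neighbourhood between neurons i and k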

Clustering of the Self Organising Map: a simple toy example

However, instead of updating only the winning neuron i*, all neurons within a certain neighborhood Ni*(d) of the winning neuron are updated using the Kohonen rule. Specifically, we adjust each such neuron i in Ni*(d) as follows:
wi(t+1) = wi(t) + α(t) [x − wi(t)]
Here the neighborhood Ni*(d) contains the indices of all the neurons that lie within a radius d of the winning neuron i*.
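
A sketch of this radius-based update in plain MATLAB, using Euclidean grid distance for the radius (winner, W, pos, x and alpha as assumed above):

d = 1;                                                                % neighbourhood radius
gd = sqrt(sum((pos - repmat(pos(winner,:), size(pos,1), 1)).^2, 2));  % distance of every neuron from the winner on the grid
inside = gd <= d;                                                     % neurons in N_i*(d)
W(inside,:) = W(inside,:) + alpha*(repmat(x, sum(inside), 1) - W(inside,:));   % Kohonen rule applied to the whole neighbourhood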

Topologically Correct Maps The aim of unsupervised self-organizing learning is to construct a topologically correct map of the input space.

Self Organizing Map
Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
Move the weight vector w of the winning neuron towards the input i
[Figure: weight vector w and input i, before learning and after learning]

Network Features
Input nodes are connected to every neuron
The "winner" neuron is the one whose weights are most "similar" to the input
Neurons participate in a "winner-take-all" behavior: the winner's output is set to 1 and all others to 0
Only the weights of the winner and its neighbors are adapted
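
A minimal sketch of that winner-take-all output (winner and W as assumed above):

y = zeros(size(W,1), 1);   % one output per neuron
y(winner) = 1;             % the winner's output is 1, all others stay 0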

[Figure: weight vectors wi of neurons 1–9 and an input P plotted along one input dimension]

[Figure: weight vectors (wi1, wi2) of neurons 1–9 and an input (P1, P2) plotted in two input dimensions]

[Figure: an n-dimensional input layer connected to the output layer; the winner neuron is highlighted]

Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space.
[Figure: snapshots of the map after 20, 100, 1000, 10000 and 25000 training steps]
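
Something like this example can be reproduced with the loop sketch above by sampling the triangular input space and using a one-dimensional chain of neurons; the rejection sampling and the sizes below are assumptions:

N = 5000;
C = rand(2*N, 2);                 % candidate points in the unit square
X = C(C(:,2) <= C(:,1), :);       % keep points below the diagonal: a triangular region
M = 30;                           % assumed chain of 30 neurons
W = rand(M, 2);                   % random initial weights in the input plane
pos = (1:M)';                     % one-dimensional grid positions

Training then proceeds exactly as in the earlier loop sketch.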

Some nice illustrations

Self Organizing Map
Impose a topological order onto the competitive neurons (e.g., a rectangular map)
Let neighbors of the winner share the "prize" (the "postcode lottery" principle)
After learning, neurons with similar weights tend to cluster on the map

Conclusion
Advantages
The SOM is an algorithm that projects high-dimensional data onto a two-dimensional map.
The projection preserves the topology of the data, so that similar data items are mapped to nearby locations on the map.
SOMs still have many practical applications in pattern recognition, speech analysis, industrial and medical diagnostics, and data mining.
Disadvantages
A large quantity of good-quality, representative training data is required.
There is no generally accepted measure of the 'quality' of a SOM, e.g. average quantization error (how well the data is classified).
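
For reference, the average quantization error mentioned above is simply the mean distance from each input to its best-matching weight vector; a plain-MATLAB sketch (X and W as assumed earlier):

qe = 0;
for r = 1:size(X,1)
    d = sqrt(sum((W - repmat(X(r,:), size(W,1), 1)).^2, 2));   % distance from input r to every neuron
    qe = qe + min(d);                                          % distance to its best-matching unit
end
qe = qe / size(X,1)                                            % average quantization error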

Topologies (gridtop, hextop, randtop)

pos = gridtop(3,2)
pos =
     0     1     2     0     1     2
     0     0     0     1     1     1
plotsom(pos)

pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)

pos = gridtop(8,10); plotsom(pos)

pos = hextop(2,3)
pos =
         0    1.0000    0.5000    1.5000         0    1.0000
         0         0    0.8660    0.8660    1.7321    1.7321

pos = hextop(3,2)
pos =
         0    1.0000    2.0000    0.5000    1.5000    2.5000
         0         0         0    0.8660    0.8660    0.8660
plotsom(pos)

pos = hextop(8,10); plotsom(pos)

pos = randtop(2,3)
pos =
         0    0.7787    0.4390    1.0657    0.1470    0.9070
         0    0.1925    0.6476    0.9106    1.6490    1.4027

pos = randtop(3,2)
pos =
         0    0.7787    1.5640    0.3157    1.2720    2.0320
    0.0019    0.1944         0    0.9125    1.0014    0.7550

pos = randtop(8,10); plotsom(pos)

Distance Functions (dist, linkdist, mandist, boxdist)

pos2 = [0 1 2; 0 1 2]
pos2 =
     0     1     2
     0     1     2
D2 = dist(pos2)
D2 =
         0    1.4142    2.8284
    1.4142         0    1.4142
    2.8284    1.4142         0

pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)
d = boxdist(pos)
d =
     0     1     1     1     2     2
     1     0     1     1     2     2
     1     1     0     1     1     1
     1     1     1     0     1     1
     2     2     1     1     0     1
     2     2     1     1     1     0

pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)
d = linkdist(pos)
d =
     0     1     1     2     2     3
     1     0     2     1     3     2
     1     2     0     1     1     2
     2     1     1     0     2     1
     2     3     1     2     0     1
     3     2     2     1     1     0

The Manhattan distance between two vectors x and y is calculated as
D = sum(abs(x-y))
Thus if we have
W1 = [1 2; 3 4; 5 6]
W1 =
     1     2
     3     4
     5     6
and
P1 = [1; 1]
P1 =
     1
     1
then we get for the distances
Z1 = mandist(W1,P1)
Z1 =
     1
     5
     9
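
Without the toolbox, the same column of distances can be computed directly (a sketch):

Z1 = sum(abs(W1 - repmat(P1', size(W1,1), 1)), 2)   % Manhattan distance of each row of W1 from P1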

A One-dimensional Self-organizing Map

angles = 0:2*pi/99:2*pi;            % 100 points around the unit circle
P = [sin(angles); cos(angles)];     % 2-dimensional input vectors, one per column
plot(P(1,:),P(2,:),'+r')

net = newsom([-1 1; -1 1],[30]);    % one-dimensional map of 30 neurons over the input range
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)

The map can now be used to classify inputs, like [1; 0]: either neuron 1 or 10 should have an output of 1, as the above input vector was at one end of the presented input space. The first pair of numbers indicates the neuron, and the single number indicates its output.

p = [1;0];
a = sim(net, p)
a =
   (1,1)        1

x = -4:0.01:4;
P = [x; x.^2];                      % inputs lying on a parabola
plot(P(1,:),P(2,:),'+r')

net = newsom([-10 10; 0 20],[10 10]);   % 10-by-10 two-dimensional map
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)