Kohonen Self-organizing Feature Maps: Order out of randomness

Copyright Gene A. Taglairini, PhD


Kohonen Network Model
[Figure: a layer of neurons, each connected to the input nodes]

Network Features
- Input nodes are connected to every neuron
- The "winner" neuron is the one whose weights are most "similar" to the input
- Neurons participate in "winner-take-all" behavior: the winner's output is set to 1 and all others to 0
- Only the weights to the winner and its neighbors are adapted

Neighborhoods
- Symmetric, with the largest weight changes applied near the center
- Excitatory near the center, surrounded by an inhibitory band, which may in turn be enclosed by an excitatory influence that rapidly decreases to zero

Network Equations: Similarity
- Similarity is measured using the Euclidean distance from an input pattern vector input_p:

    s(i) = || input_p - w[i] || = sqrt( Σ_j ( input_p[j] - w[i][j] )² )

- The vector of weights w[i] represents all the weights to neuron i; hence, w[i][j] is the weight that joins input j to neuron i
- The winner has the smallest value of s
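To make the winner computation concrete, here is a minimal NumPy sketch. The function name find_winner and the variable names weights and input_p are illustrative assumptions, not from the slides.

```python
import numpy as np

def find_winner(weights: np.ndarray, input_p: np.ndarray) -> int:
    """Return the index of the neuron whose weights are closest to input_p."""
    # weights has shape (num_neurons, num_inputs); row i is w[i].
    # s[i] is the Euclidean distance between input_p and w[i].
    s = np.linalg.norm(weights - input_p, axis=1)
    # The winner has the smallest value of s.
    return int(np.argmin(s))
```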

Network Equations: Weight Adaptation
- Suppose the winning neuron has index a
- For an input j and neuron i, the weight change Δw[i][j] and the new value of the weight w[i][j] are:

    Δw[i][j] = η ( input[j] - w[i][j] ) NbdWt(a, i)
    w[i][j] = w[i][j] + Δw[i][j]

  where η is the learning rate and NbdWt(a, i) is the neighborhood weighting function
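A sketch of this adaptation rule, assuming a callable nbd_wt standing in for NbdWt(a, i) and eta for the learning rate η; these names are mine, not from the slides.

```python
import numpy as np

def adapt_weights(weights: np.ndarray, input_p: np.ndarray,
                  a: int, eta: float, nbd_wt) -> None:
    """Apply Dw[i][j] = eta * (input[j] - w[i][j]) * NbdWt(a, i) in place."""
    for i in range(weights.shape[0]):
        # Compute Dw[i][j] for all inputs j at once; neurons far from the
        # winner a receive a small neighborhood weight and change little.
        dw = eta * (input_p - weights[i]) * nbd_wt(a, i)
        weights[i] += dw
```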

Network Equations: Neighborhood Weighting Function
- The neighborhood weighting function may take forms such as a Gaussian in the map distance d(a, i) between the winner a and neuron i:

    NbdWt(a, i) = exp( -d(a, i)² / (2σ²) )

  where σ is a scalar that sets the dilation of the weighting function
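A minimal sketch of the Gaussian form, assuming each neuron's map coordinates are stored in an array grid (an assumption for illustration; the slides do not specify a layout).

```python
import numpy as np

def make_gaussian_nbd(grid: np.ndarray, sigma: float):
    """Build NbdWt(a, i) = exp(-d(a, i)^2 / (2 sigma^2)) over map coordinates."""
    def nbd_wt(a: int, i: int) -> float:
        # Squared distance between neurons a and i on the map, not in input space.
        d2 = float(np.sum((grid[a] - grid[i]) ** 2))
        return float(np.exp(-d2 / (2.0 * sigma ** 2)))
    return nbd_wt
```

For a 2-D map, grid might be built as np.array([[r, c] for r in range(rows) for c in range(cols)]); larger σ dilates the neighborhood so more neurons share in each update.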

On-center Off-surround Neighborhood
A field that:
- Reinforces stimuli "near" the center
- Attenuates effects in a region about the center
- Abates rapidly outside the attenuating region
- Contributes to noise control and localizes representation
- Complementary to off-center on-surround

[Figure: on-center off-surround neighborhood profile]

Mexican Hat Neighborhood
[Figure: Mexican-hat neighborhood profile]
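The Mexican-hat profile can be sketched as a difference of two Gaussians: a narrow excitatory center minus a wider, weaker inhibitory surround. The widths sigma_c and sigma_s and the surround gain k below are illustrative values, not from the slides.

```python
import numpy as np

def mexican_hat(d: np.ndarray, sigma_c: float = 1.0,
                sigma_s: float = 3.0, k: float = 0.5) -> np.ndarray:
    """Difference of Gaussians evaluated at map distances d."""
    center = np.exp(-d ** 2 / (2 * sigma_c ** 2))        # excitatory on-center
    surround = k * np.exp(-d ** 2 / (2 * sigma_s ** 2))  # inhibitory off-surround
    return center - surround  # positive near 0, dips negative in the surround band
```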

Basic Training Algorithm
1. Initialize the weights from the input nodes to the neurons
2. Choose a neighborhood function
3. While the input patterns mismatch the weights:
   - Find a winner
   - Adapt the weights in the vicinity of the winner
4. Develop an interpretation for the encoding: identify which neurons encode which patterns
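Pulling the pieces together, here is a self-contained sketch of the loop on a 1-D map with a Gaussian neighborhood. It stops after a fixed number of epochs rather than testing the mismatch condition literally, and every name and parameter value is an illustrative assumption.

```python
import numpy as np

def train_som(patterns: np.ndarray, num_neurons: int = 10,
              eta: float = 0.1, sigma: float = 2.0,
              epochs: int = 100) -> np.ndarray:
    """Train a 1-D SOM on patterns of shape (num_patterns, num_inputs)."""
    rng = np.random.default_rng(0)
    num_inputs = patterns.shape[1]
    # Step 1: initialize the weights from input nodes to neurons.
    weights = rng.random((num_neurons, num_inputs))
    # Neuron positions on the map, used by the neighborhood function.
    positions = np.arange(num_neurons, dtype=float)
    for _ in range(epochs):
        for x in patterns:
            # Find a winner: smallest Euclidean distance to the input.
            s = np.linalg.norm(weights - x, axis=1)
            a = int(np.argmin(s))
            # Step 2's choice: Gaussian neighborhood around the winner.
            nbd = np.exp(-(positions - positions[a]) ** 2 / (2 * sigma ** 2))
            # Adapt weights in the vicinity of the winner.
            weights += eta * nbd[:, None] * (x - weights)
    return weights
```

After training, the encoding can be interpreted by mapping each pattern to its winning neuron and noting which neurons respond to which clusters of inputs.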

Applications
- Speech encoding for a phonetic typewriter
- Geometric pattern coding