Feature mapping: Self-organizing Maps


Feature mapping: Self-organizing Maps
Unsupervised learning designed to achieve dimensionality reduction by topographic ordering of input patterns.
The basic aim of a SOM is similar to data compression: given a large set of input vectors, find a smaller set of prototypes that provide a good approximation to the whole input data space.

Structure of the SOM neural network: input nodes are connected directly to a 2D lattice of output nodes.
w_k = weight vector connecting the input nodes to output node k.

Creating prototypes
If the input space contains many instances like x, i(x) will be the topographic location of this feature in the output space, and w_i will become a prototype of instances of this feature.

SOM algorithm
Initialize the weights on all connections to small random numbers.
Three aspects of training:
1. Output-node competition based on a discriminant function.
2. The winning node becomes the center of a cooperative neighborhood.
3. Neighborhoods adapt to input patterns because cooperation increases their susceptibility to large values of the discriminant function.

Competitive learning
Given a randomly selected input vector x, the best-matching (winning) output node is
i(x) = arg min_k ||x − w_k||, k = 1, 2, …, L
L = number of output nodes in the lattice, all assumed to have the same bias.
w_k = weight vector connecting the input nodes to output node k.
For weight vectors of equal norm, this is equivalent to saying w_i has the largest dot product with x.
The output node whose current weight vector is most like the randomly selected input vector is the winner.
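A minimal sketch of the winner search in NumPy; the weight-array layout (rows, cols, dim) and the function name find_bmu are assumptions for illustration, not from the slides.

```python
import numpy as np

def find_bmu(weights, x):
    """Return the lattice coordinates of the best-matching unit (BMU) for input x.

    weights : array of shape (rows, cols, dim) -- one weight vector per output node
    x       : array of shape (dim,)            -- a single input vector
    """
    # Squared Euclidean distance from x to every node's weight vector
    dists = np.sum((weights - x) ** 2, axis=-1)
    # arg min over the flattened lattice, converted back to (row, col)
    return np.unravel_index(np.argmin(dists), dists.shape)
```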

Gaussian cooperative neighborhood
h_k,i(x)(n) = exp(−d²_i,k / 2σ²(n)) measures the degree to which output node k belongs to the neighborhood of the winning output node i(x).
d_i,k is the lattice separation between k and i(x).
σ(n) = σ_0 exp(−n/n_0)
Initially the neighborhood is large; it shrinks on successive iterations.
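A sketch of the shrinking Gaussian neighborhood, assuming the σ_0, n_0 decay schedule above; the helper name and the use of (row, col) lattice distance are assumptions.

```python
import numpy as np

def neighborhood(lattice_shape, bmu, n, sigma0=3.0, n0=1000.0):
    """Gaussian neighborhood h_{k,i(x)}(n) for every node k, centered on the BMU.

    lattice_shape : (rows, cols) of the output lattice
    bmu           : (row, col) of the winning node i(x)
    n             : current iteration number
    """
    rows, cols = lattice_shape
    sigma = sigma0 * np.exp(-n / n0)             # sigma(n) = sigma_0 exp(-n/n_0)
    r, c = np.indices((rows, cols))
    d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2   # squared lattice separation d_{i,k}^2
    return np.exp(-d2 / (2.0 * sigma ** 2))      # array of shape (rows, cols)
```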

Adaptation
w_k(n+1) = w_k(n) + η(n) h_k,i(x)(n) [x − w_k(n)]
Applied to all output nodes k in the neighborhood of the winning node i(x).
Makes the winning node (and its neighbors) more like x.
Learning rate η(n) = η_0 exp(−n/n_0)
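Putting the three aspects together, a minimal training loop under the assumptions above; the decay constants and the helpers find_bmu / neighborhood come from the earlier sketches, not the slides.

```python
import numpy as np

def train_som(data, rows=10, cols=10, iterations=2000,
              eta0=0.5, sigma0=3.0, n0=1000.0, seed=0):
    """Train a SOM on `data` (shape (num_samples, dim)); return the weight lattice."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # Initialize weights on all connections to small random numbers
    weights = rng.uniform(-0.1, 0.1, size=(rows, cols, dim))

    for n in range(iterations):
        x = data[rng.integers(len(data))]                     # randomly selected input vector
        bmu = find_bmu(weights, x)                            # competition
        h = neighborhood((rows, cols), bmu, n, sigma0, n0)    # cooperation
        eta = eta0 * np.exp(-n / n0)                          # decaying learning rate
        # Adaptation: move every node toward x, weighted by its neighborhood value
        weights += eta * h[..., np.newaxis] * (x - weights)
    return weights
```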

Colorful illustration of SOM performance
Randomly colored pixels become bands of color: prototypes have developed in neighborhoods of output nodes.
Initially σ ~ size of the lattice; by the end of training σ ~ nearest-neighbor spacing.
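A sketch of that color demo, assuming the train_som helper from the previous sketch: training on random RGB triples and viewing the weight lattice as an image shows the bands of color forming.

```python
import numpy as np
import matplotlib.pyplot as plt

# Random RGB colors as 3-component input vectors in [0, 1]
colors = np.random.default_rng(1).uniform(0.0, 1.0, size=(500, 3))

weights = train_som(colors, rows=20, cols=20, iterations=2000)

# Each node's weight vector is itself a color, so the lattice can be shown directly
plt.imshow(np.clip(weights, 0.0, 1.0))
plt.title("SOM color prototypes after training")
plt.show()
```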

SOM as an elastic net covering the input space
Each w_i points to a prototypical instance in the input space.
Connect these points to reflect the horizontal and vertical lines of the lattice: the result is a net in input space whose nodes are a compressed representation of the input space.
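For 2-D inputs the net can be drawn directly; a sketch assuming the weight-array layout used in the earlier sketches.

```python
import matplotlib.pyplot as plt

def plot_net(weights):
    """Draw the SOM lattice as a net in a 2-D input space (weights shape: rows x cols x 2)."""
    rows, cols, _ = weights.shape
    for r in range(rows):            # horizontal lattice lines
        plt.plot(weights[r, :, 0], weights[r, :, 1], "k-", lw=0.5)
    for c in range(cols):            # vertical lattice lines
        plt.plot(weights[:, c, 0], weights[:, c, 1], "k-", lw=0.5)
    plt.scatter(weights[..., 0].ravel(), weights[..., 1].ravel(), s=8)
    plt.show()
```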

Twist-free topographical ordering: all rectangles, no butterflies.
A low twist index facilitates interpretation.

Refinement distorts the net to reflect input statistics.
(Figure panels: input distribution, initial weights, ordering phase, refined map.)

Contextual map: classification by SOM
SOM training should produce coherent regions in the output lattice where the weight vectors are prototypes of the attributes of distinct classes in the input.
A contextual map labels these regions based on the responses of the lattice to "test patterns": labeled instances, not used in training, that characterize a class.

Example: SOM classification of animals
13 attributes of 16 animals.
10 x 10 lattice, 2000 iterations of the SOM algorithm.

Input vectors for training are the concatenation of a label (16-component Boolean vector identifying the animal) and attributes (13-component Boolean vector).
Test patterns are the concatenation of a label with a 13-component null vector.
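A sketch of that encoding; the attribute matrix is a placeholder for the 16 x 13 Boolean table in Kohonen's animal example, which the slides do not reproduce.

```python
import numpy as np

num_animals, num_attrs = 16, 13
# Placeholder Boolean attribute matrix (16 animals x 13 attributes); the real
# values come from Kohonen's animal table, not reproduced on the slides.
attributes = np.zeros((num_animals, num_attrs))

labels = np.eye(num_animals)                     # 16-component one-hot label per animal

train_vectors = np.hstack([labels, attributes])  # label + attributes, shape (16, 29)
test_patterns = np.hstack([labels, np.zeros((num_animals, num_attrs))])  # label + null attributes
```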

Lattice sites with the strongest response for each animal type.
(Figure: lattice regions labeled prey, predators, and birds.)

Semantic map: all lattice sites labeled by the animal type inducing the strongest response.
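A sketch of how each lattice site could be labeled, assuming the trained weight lattice and the test_patterns from the sketches above, plus a hypothetical list of animal names.

```python
import numpy as np

def semantic_map(weights, test_patterns, names):
    """Label every lattice site by the test pattern to which it responds most strongly.

    weights       : (rows, cols, dim) trained SOM weights
    test_patterns : (num_classes, dim) label-plus-null-attribute vectors
    names         : list of num_classes class names (e.g. animal names)
    """
    rows, cols, _ = weights.shape
    labels = np.empty((rows, cols), dtype=object)
    for r in range(rows):
        for c in range(cols):
            # Strongest response = smallest distance between node weights and the test pattern
            d = np.sum((test_patterns - weights[r, c]) ** 2, axis=1)
            labels[r, c] = names[int(np.argmin(d))]
    return labels
```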

Semantic maps resemble maps of the brain that pinpoint areas of strong response to specific inputs.

Unified distance matrix (U-matrix or UMAT)
The standard approach for clustering applications of SOM.
If the SOM is twist-free, the Euclidean distance between the prototype vectors of neighboring output nodes approximates the distance between different parts of the underlying input data space.
The UMAT is usually displayed as a heat map, with darker colors for larger distances.
Clusters appear as groups of light colors; darker colors show the boundaries between clusters.
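A minimal sketch of a U-matrix: each node's value is the average distance from its prototype to those of its lattice neighbors. The simple 4-neighbor average is an assumption; published UMAT variants differ in the neighborhoods and interpolation they use.

```python
import numpy as np

def u_matrix(weights):
    """Average distance from each node's weight vector to its 4 lattice neighbors."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(weights[r, c] - weights[rr, cc]))
            umat[r, c] = np.mean(dists)
    return umat  # display as a heat map: large values mark cluster boundaries
```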

UMAT of Kohonen's famous animal-clustering SOM.
(Figure: U-matrix heat map with regions labeled birds and mammals.)

Fine structure revealed by an enhanced heat map.
Stars show local minima of the distance matrix and the connectedness of clusters.
Each output node is connected to one and only one local minimum.