Self Organizing Maps: Clustering

With unsupervised learning there is no instruction and the network is left to cluster the patterns on its own. All of the patterns within a cluster will be judged as being similar. Clustering algorithms form groups, referred to as clusters, and the arrangement of clusters should reflect two properties:
–Patterns within a cluster should be similar in some way.
–Clusters that are similar in some way should be close together.

The Self-organizing Feature Map

The input units correspond in number to the dimension of the training vectors. The output units act as prototypes. The network shown on the slide has three inputs and five cluster units, and each unit in the input layer is connected to every unit in the cluster layer.

How does a cluster unit act as a prototype?

[Figure: a single cluster unit p1 plotted in the (x1, x2) input space at the position given by its weights w1,1 and w2,1.]

The input layer takes coordinates from the input space. The weights adapt during training, and when learning is complete each cluster unit will have a position in the input space determined by its weights.
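As an illustration of the prototype idea (not from the original slides), here is a minimal NumPy sketch in which each cluster unit's weight vector is a point in the input space and the closest unit acts as the prototype for an input; the values are arbitrary examples.

import numpy as np

# Each column of W is one cluster unit's weight vector, i.e. its prototype
# position in the (x1, x2) input space. Values are arbitrary for illustration.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(2, 5))           # 2 inputs, 5 cluster units

x = np.array([0.3, 0.7])                         # a training vector: a point in input space
dists = np.linalg.norm(W - x[:, None], axis=0)   # Euclidean distance to every prototype
winner = int(np.argmin(dists))                   # the cluster unit closest to x
print(dists, winner)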

Weight updates within a radius

The topology determines which units fall within a radius of the winning unit:
–radius = 0: only the winning unit is updated.
–radius = 1: the winning unit and its immediate neighbors are updated.
–radius = 2: the neighborhood extends one ring further out.
The winning unit will adapt its weights in a way that moves that cluster unit even closer to the training vector.

Algorithm

initialize weights to random values
while not HALT
    for each input vector
        for each cluster unit
            calculate the distance from the training vector
        find unit j with the minimum distance
        update all weight vectors for units within the radius according to the weight update rule
    check to see if the learning rate or radius needs updating
    check HALT
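The pseudocode above can be fleshed out as follows. This is a sketch only, assuming NumPy, a square (Chebyshev) neighborhood on a 2D grid, and simple schedules for the learning rate and radius; none of these choices are prescribed by the slides.

import numpy as np

def train_som(data, rows=10, cols=10, epochs=1000, lr0=0.5, radius0=6):
    """Sketch of the SOM training loop described above (assumptions in the lead-in)."""
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.uniform(size=(rows, cols, dim))          # initialize weights to random values
    # grid coordinates of every cluster unit, used for the neighborhood test
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    lr, radius = lr0, radius0
    for epoch in range(1, epochs + 1):                     # "while not HALT"
        for x in data:                                     # for each input vector
            d = np.linalg.norm(weights - x, axis=-1)       # distance from the training vector
            winner = np.unravel_index(np.argmin(d), d.shape)   # unit with the minimum distance
            # units whose grid (Chebyshev) distance from the winner is within the radius
            hood = np.max(np.abs(coords - np.array(winner)), axis=-1) <= radius
            weights[hood] += lr * (x - weights[hood])      # move them closer to the training vector
        # check whether the learning rate or radius needs updating
        lr = lr0 * (1.0 - epoch / epochs)
        if radius > 0 and epoch % 200 == 0:
            radius -= 1
    return weights

# Example usage with random 3-dimensional training vectors:
# weights = train_som(np.random.default_rng(1).uniform(size=(50, 3)), epochs=100)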

Self-Organizing Maps

–On each training input, all output nodes whose topological distance d_T from the winner node is at most D will have their incoming weights modified. d_T(y_i, y_j) = the number of nodes that must be traversed in the output layer in moving between output nodes y_i and y_j.
–D is typically decreased as training proceeds.

[Figure: the input layer is fully interconnected with the output layer, while the output layer is only partially intraconnected.]

Example

An SOFM has a 10x10 two-dimensional grid layout and the radius is initially set to 6. Find how many units will be updated after 1000 epochs if the winning unit is located in the extreme bottom right-hand corner of the grid and the radius is updated according to:
r = r - 1 if current_epoch mod 200 = 0
Assume that epoch numbering starts at 1.
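A hedged check of this example (not part of the original slides), assuming a square Chebyshev neighborhood and that the decrement at epoch 1000 is already in force when counting: the radius drops to 1, and a winner in the corner then has only a 2x2 block of units within its radius.

radius = 6
for epoch in range(1, 1001):                 # epoch numbering starts at 1
    if epoch % 200 == 0:                     # r = r - 1 at epochs 200, 400, 600, 800, 1000
        radius -= 1
print(radius)                                # 1

# Count the units of a 10x10 grid within that radius of the bottom right-hand corner.
import numpy as np
coords = np.stack(np.meshgrid(np.arange(10), np.arange(10), indexing="ij"), axis=-1)
in_hood = np.max(np.abs(coords - np.array([9, 9])), axis=-1) <= radius
print(int(in_hood.sum()))                    # 4: the winner plus its 3 in-grid neighbors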

Example 2

Example 3

Figure 3.16 The weight vector positions converted to (x,y) coordinates after 5000 cycles through all the training data.

Example 3

Figure 3.17 The weight vector positions converted to (x,y) coordinates after further cycles through all the training data.
Figure 3.18 The weight vector positions converted to (x,y) coordinates after still more cycles through all the training data.

There Goes The Neighborhood

[Figure: the neighborhood around the winner shown for D = 1, D = 2 and D = 3.]

As the training period progresses, gradually decrease D. Over time, islands form in which the center represents the centroid C of a set of input vectors, S, while nearby neighbors represent slight variations on C and more distant neighbors are major variations. These neighbors may only win on a few (or no) input vectors, while the island center will win on many of the elements of S.

Self Organization

In the beginning, the Euclidean distance d_E(y_l, y_k) and the topological distance d_T(y_l, y_k) between output nodes y_l and y_k will not be related. But during the course of training, they will become positively correlated: neighbor nodes in the topology will have similar weight vectors, and topologically distant nodes will have very different weight vectors.

[Figure: the emergent structure of the output layer before and after training, comparing topological neighbors with Euclidean neighbors.]
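One way to see this correlation numerically (an illustration, not from the slides): compare grid distance with weight-space distance over all pairs of output nodes. A trained map's weights would be used in practice; here a synthetic "organized" weight grid stands in for them.

import numpy as np

rng = np.random.default_rng(0)
rows = cols = 10
coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1).reshape(-1, 2)
# Stand-in for trained weights: roughly follow the grid layout, plus a little noise.
weights = coords / 10.0 + rng.normal(scale=0.02, size=coords.shape)

# Topological distance d_T: grid steps between nodes (Manhattan distance on the grid).
d_T = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1)
# Euclidean distance d_E between the nodes' weight vectors.
d_E = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=-1)

mask = ~np.eye(len(coords), dtype=bool)          # ignore each node's distance to itself
print(np.corrcoef(d_T[mask], d_E[mask])[0, 1])   # close to +1 for an organized map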

Example 4

An SOFM network with three inputs and two cluster units is to be trained using the four training vectors [ ], [ ], [ ], [ ] and the initial weights shown on the slide (the first column of the weight matrix gives the weights to the first cluster unit). The initial radius is 0 and the learning rate is 0.5. Calculate the weight changes during the first cycle through the data, taking the training vectors in the given order.

Solution

The Euclidean distance of input vector 1 to cluster unit 1 is: [calculation shown on the slide]
The Euclidean distance of input vector 1 to cluster unit 2 is: [calculation shown on the slide]
Input vector 1 is closest to cluster unit 1, so update the weights to cluster unit 1: [update shown on the slide]

Solution

The Euclidean distance of input vector 2 to cluster unit 1 is: [calculation shown on the slide]
The Euclidean distance of input vector 2 to cluster unit 2 is: [calculation shown on the slide]
Input vector 2 is closest to cluster unit 1, so update the weights to cluster unit 1 again: [update shown on the slide]
Repeat the same update procedure for input vectors 3 and 4.
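The actual numbers in Example 4 appear only as images on the original slides, so this sketch uses made-up vectors and weights; the procedure is the same: with radius 0, only the closest cluster unit is updated, and its weights move halfway (learning rate 0.5) towards the input.

import numpy as np

lr = 0.5
W = np.array([[0.2, 0.9],        # hypothetical weights: each column is one cluster unit
              [0.4, 0.7],
              [0.6, 0.5]])
x = np.array([1.0, 0.0, 0.0])    # hypothetical training vector 1

d = np.linalg.norm(W - x[:, None], axis=0)   # Euclidean distance to each cluster unit
j = int(np.argmin(d))                        # winning cluster unit (radius 0: update it alone)
W[:, j] += lr * (x - W[:, j])                # w_new = w_old + 0.5 * (x - w_old)
print(d, j, W)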

The angle between vectors as the measure of similarity

[Figure: two prototype vectors p1 and p2 separated by the angle a.]

The dot product (sometimes called the inner product or scalar product) of two vectors v and w is:
v . w = v1*w1 + v2*w2 + ... + vn*wn = |v| |w| cos(a)
The angle a between non-zero vectors v and w is therefore:
a = arccos( (v . w) / (|v| |w|) )
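A quick numerical check of these two formulas (arbitrary example vectors, assumed NumPy):

import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 0.0, 1.0])

dot = np.dot(v, w)                                       # sum of v_i * w_i
cos_a = dot / (np.linalg.norm(v) * np.linalg.norm(w))    # |v| |w| cos(a) rearranged
a = np.degrees(np.arccos(cos_a))                         # angle between v and w, in degrees
print(dot, cos_a, a)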

Weight update for dot product as measure of similarity

The winning prototype index for an input vector x is: index(x) = argmax_j p_j . x
The weight updates are given by the rule shown on the slide.
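The update formula itself is only on the original slide image. A common variant for dot-product similarity keeps prototypes at unit length so the dot product stays a fair similarity measure; the sketch below uses that variant and should be read as an assumption, not as the slide's exact rule.

import numpy as np

lr = 0.5
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))                      # 5 prototypes p_j for 3-dimensional inputs
P /= np.linalg.norm(P, axis=1, keepdims=True)    # keep prototypes unit length

x = np.array([0.0, 1.0, 0.0])
j = int(np.argmax(P @ x))                        # index(x) = argmax_j p_j . x
P[j] += lr * (x - P[j])                          # pull the winner towards the input
P[j] /= np.linalg.norm(P[j])                     # renormalize the winner
print(j, P[j])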

Self-Organized Maps for Robot Navigation (Owen & Nehmzow, 1998)

Task: autonomous robot navigation in a laboratory.
Goals:
1. Find a useful internal representation (i.e. map) that supports an intelligent choice of actions for the given sensory inputs.
2. Let the robot build/learn the map itself:
- Saves the user from specifying it.
- Allows the robot to handle new environments.
- By learning the map in a noisy, real-world situation, the robot will be more apt to handle other noisy environments.
Approach: use an SOM to organize situation-action vectors. The emerging structure of the SOM then constitutes the robot's functional internal representation of both the outside world and the appropriate actions to take in different regions of that world.

The Training Phase

1. Record sensory info.
2. Get the correct actions (e.g. "Turn Right & Slow Down").
3. Input vector = sensory inputs & actions.
4. Run the SOM on the input vector.
5. Update the winner & its neighbors.

The Testing Phase

1. Record sensory info.
2. Input vector = sensory inputs & no actions.
3. Run the SOM on the input vector.
4. Read the recommended actions from the winner's weight vector.
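A toy sketch of this recall step (dimensions and data invented, not taken from Owen & Nehmzow): each stored vector is [sensor readings | actions]; at test time the winner is chosen on the sensor part only, and the recommended actions are read off the winner's weight vector.

import numpy as np

n_sensors, n_actions = 8, 2
rng = np.random.default_rng(0)
# Stand-in for a trained 5x5 map, flattened: each row is [sensor weights | action weights].
som_weights = rng.uniform(size=(25, n_sensors + n_actions))

sensors = rng.uniform(size=n_sensors)            # current sensory inputs, no actions
d = np.linalg.norm(som_weights[:, :n_sensors] - sensors, axis=1)   # match the sensor part only
winner = int(np.argmin(d))
recommended_actions = som_weights[winner, n_sensors:]              # e.g. a turn rate and a speed
print(winner, recommended_actions)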

Clustering of Perceptual Signatures

[Figure: the sequence of winner nodes during the testing phase of a typical navigation task.]

The general closeness of successive winners shows a correlation between points & distances in the objective world and the robot's functional view of that world. Note: a trace of the robot's path on a map of the real world (i.e. the lab floor) would have ONLY short moves.

SOM for Navigation Summary

–SOM regions = perceptual landmarks = sets of similar perceptual patterns.
–Navigation = association of actions with perceptual landmarks.
–Behavior is controlled by the robot's subjective, functional interpretation of the world, which may abstract the world into a few key perceptual categories. No extensive objective map of the entire environment is required: useful maps are user- and task-centered.
–Robustness (fault tolerance): the robot also navigates successfully when a few of its sensors are damaged => the SOM has generalized from the specific training instances.
–Similar neuronal organizations, with correlations between points in the visual field and neurons in a brain region, are found in many animals.

Brain Maps

Topological Neural Networks

Bear, Connors & Paradiso (2001). Neuroscience: Exploring The Brain. Pg. 474.

Tonotopic Maps in the Auditory System

[Figure: the auditory pathway Cochlea (inner ear) → Spiral Ganglion → Ventral Cochlear Nucleus → Superior Olive → Inferior Colliculus → MGN → Auditory Cortex, with frequency bands (1 kHz, 4 kHz, 10 kHz, 20 kHz) mapped onto orderly positions at each stage; a second panel (Cochlea → Spiral Ganglion → Cochlear Nucleus) marks the input to source localization via delay lines.]

Tonotopy is preserved through all 7 levels of processing.

Brain Facts, p. 14, Society for Neuroscience.

Source Localization using Delay Lines

–Source location detection neurons: need 2 simultaneous inputs to fire.
–Transformer neurons: convert sound frequency into a neural firing pattern which is phase-locked to the sound waves (although at a lower frequency).

[Figure: delay lines from the right and left ear converging on detection neurons tuned to different directions: left 90°, left 45°, straight ahead, right 45°, right 90°.]

Owls have different ear heights, so they can use the same mechanism for horizontal and vertical localization. Topological map = similar directions are detected by neighboring cells.

Ocular Dominance & Orientation Columns

[Figure: alternating right-eye and left-eye ocular dominance bands in layers 5 & 6 of V1 (fed from Retina → LGN), and neural response (firing rate) plotted against orientation angle from -90° to 90°.]

2 topological maps:
–Cells respond to lines at particular angles, and nearby cells respond to similar angles.
–Regions of cells respond to the same eye.