Self Organizing Maps
Rudolf Mak, TU/e Computer Science

Self Organizing Maps
A major principle of organization is the topographic map: groups of adjacent neurons process information from neighboring parts of the sensory systems. Topographic maps can be distorted in the sense that the number of neurons involved is related more to the importance of the task performed than to the size of the region of the body surface that provides the input signals. Biological inspiration: the brain makes maps of sensory input.

Brain Maps
A part of the brain that contains many topographic maps is the cerebral cortex. Some of these are:
- Visual cortex: various maps, such as the retinotopic map
- Somatosensory cortex: somatotopic map
- Auditory cortex: tonotopic map
Sensors pick up energy from the environment: taste (chemical), smell (chemical), vision (photoelectric), hearing (mechanical, vibrations), touch (mechanical, thermal). To a first approximation, both the skin and the cortex are two-dimensional surfaces.

Somatotopic Map
The somatosensory cortex processes the information of the sensory neurons that lie below the skin. Note that both the skin and the somatosensory cortex can be seen as two-dimensional spaces, and note the large part devoted to the lips and hands (cross-section view).

Somatosensory Man
Picture of the male body with the body parts scaled according to the area devoted to these parts in the somatosensory cortex. Man the toolmaker?!

Unsupervised Self-Organizing Learning
The neurons are arranged in a grid of fixed topology, so that there is a notion of neighborhood proximity. The winning neuron is the neuron whose weight vector is nearest to the supplied input vector. In principle all neurons are allowed to change their weights; the amount of change of a neuron, however, depends on the distance (in the grid) of that neuron to the winning neuron: a larger distance implies a smaller change.
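
As an illustration of winner selection, here is a minimal sketch in Python/NumPy; the storage convention (one weight vector per matrix row) is our assumption, not part of the slides.

```python
import numpy as np

def winner(W, x):
    """Index of the neuron whose weight vector is nearest to input x.

    W: (n_neurons, n_features) matrix, one weight vector per row.
    x: (n_features,) input vector.
    """
    distances = np.linalg.norm(W - x, axis=1)  # distance from x to every weight vector
    return int(np.argmin(distances))           # the winning neuron
```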

Grid Topologies
The following topologies are frequently used:
- One-dimensional grids: line, ring
- Two-dimensional grids: square grid, torus, hexagonal grid
If additional knowledge of the input space is available, more sophisticated topologies can be used.

Neighborhoods & Box Distance
Square and hexagonal grid with neighborhoods based on box distance (grid lines are not shown).

Manhattan or Link Distance
Distance to the central cell measured in the number of links.

Euclidean Distance
Distance to the central cell measured along a straight line.
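
The three grid distances are easy to compare in code. The sketch below assumes neurons on a square grid with integer (row, column) coordinates; the hexagonal case needs different coordinates and is omitted.

```python
import numpy as np

def box_distance(p, q):
    """Box (Chebyshev) distance: neighborhoods are square boxes around the winner."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def link_distance(p, q):
    """Manhattan (link) distance: number of grid links on a shortest path."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean_distance(p, q):
    """Straight-line distance between the two grid positions."""
    return np.hypot(p[0] - q[0], p[1] - q[1])

# For the cell one row and two columns away from the center:
print(box_distance((0, 0), (1, 2)),        # 2
      link_distance((0, 0), (1, 2)),       # 3
      euclidean_distance((0, 0), (1, 2)))  # ~2.24
```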

Topologically Correct Maps
The aim of unsupervised self-organizing learning is to construct a topologically correct map of the input space. For any two neurons i and j in the grid, let d(i, j) be their fixed distance in the grid. Informally, a mapping is called topologically correct when grid distance reflects input-space distance: neurons that are close in the grid have weight vectors that are close in the input space, and vice versa.

Neighborhood Functions
The allowed weight change of neuron j when i is the winning neuron is given by the neighborhood function h(i, j). One common choice is winner-takes-all: h(i, j) = 1 if i = j and 0 otherwise, which recovers simple competitive learning as a special case. Another is a Gaussian of the grid distance, e.g. h(i, j) = exp(-d(i, j)^2 / beta): for beta large, h(i, j) ~ 1 over a large neighborhood; for beta small, the neighborhood is small.
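
The formulas on the slide are images; the sketch below spells out the two choices named above, assuming the Gaussian form with width parameter beta.

```python
import numpy as np

def h_winner_takes_all(d):
    """Winner-takes-all: only the winner itself (grid distance 0) may change."""
    return 1.0 if d == 0 else 0.0

def h_gaussian(d, beta):
    """Gaussian neighborhood of grid distance d (assumed form).

    Large beta: h ~ 1 for many neurons (large neighborhood).
    Small beta: h falls off quickly (small neighborhood).
    """
    return float(np.exp(-d**2 / beta))
```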

Unsupervised Self-Organizing Learning (Incremental Version)
In the incremental version, one training vector is presented at a time: the winning neuron is determined, and every weight vector is moved toward the input by an amount proportional to the learning rate and the neighborhood function.
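
The algorithm appears on the slide as an image; the following is a minimal sketch of a common incremental formulation. The parameter names (eta for the learning rate, beta for the neighborhood width) and the linear decay schedule are our assumptions, not the slides'.

```python
import numpy as np

def train_som_incremental(X, grid, n_steps=10000, eta=0.1, beta0=10.0, beta_min=0.01):
    """Incremental SOM training.

    X    : (n_samples, n_features) training vectors.
    grid : (n_neurons, 2) fixed grid coordinates of the neurons.
    """
    rng = np.random.default_rng(0)
    W = rng.standard_normal((len(grid), X.shape[1]))     # random initial weights
    for t in range(n_steps):
        x = X[rng.integers(len(X))]                      # present one training vector
        i = np.argmin(np.linalg.norm(W - x, axis=1))     # winning neuron
        d = np.linalg.norm(grid - grid[i], axis=1)       # grid distances to the winner
        beta = max(beta0 * (1 - t / n_steps), beta_min)  # shrink the neighborhood over time
        h = np.exp(-d**2 / beta)                         # neighborhood function
        W += eta * h[:, None] * (x - W)                  # move all weights toward x
    return W
```

For a square grid the coordinates can be generated as, e.g., grid = np.array([(r, c) for r in range(8) for c in range(12)], dtype=float).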

Unsupervised Self-Organizing Learning (Batch Version)
In the batch version, the winners of all training vectors are determined first; each weight vector is then set to the neighborhood-weighted mean of the training vectors.
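
Again the slide shows the algorithm as an image; under the same assumptions as above, one standard batch formulation looks as follows.

```python
import numpy as np

def train_som_batch(X, grid, n_epochs=50, beta0=10.0, beta_min=0.01):
    """Batch SOM training: all samples are processed before the weights change."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((len(grid), X.shape[1]))
    for t in range(n_epochs):
        beta = max(beta0 * (1 - t / n_epochs), beta_min)
        # Winning neuron for every sample at once: shape (n_samples,).
        winners = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
        # Grid distance from each neuron to each sample's winner: (n_neurons, n_samples).
        d = np.linalg.norm(grid[:, None, :] - grid[winners][None, :, :], axis=2)
        H = np.exp(-d**2 / beta)                    # neighborhood weights
        W = (H @ X) / H.sum(axis=1, keepdims=True)  # neighborhood-weighted means
    return W
```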

Error Function
For a network with weight matrix W and training set {x_1, ..., x_P} we define the error function E(W) as a neighborhood-weighted sum of squared distances. Writing i(x_p) for the winning neuron of training vector x_p, one common formulation is

  E(W) = 1/2 sum_p sum_j h(i(x_p), j) ||x_p - w_j||^2.

Gradients of the Error Function
Because the winner assignments i(x_p) are constant under small changes of W, it follows that the gradient of the error with respect to weight vector w_j is given by

  dE/dw_j = - sum_p h(i(x_p), j) (x_p - w_j).

Setting this gradient to zero yields the weighted-mean update of the batch version.

Tuning the Learning Process
The learning process usually consists of two phases:
- An ordering phase in which the weight vectors reorder and become disentangled. In this phase the neighborhoods (beta) must be large.
- A fine-tuning phase in which the weight vectors are fine-tuned to the part of the training set for which they are the respective winners. In this phase the neighborhoods (beta) must be small to avoid interference from other neurons.
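
A two-phase schedule could look like the sketch below; the phase split and the beta values are illustrative assumptions, not values from the lecture.

```python
def beta_schedule(t, n_steps, beta_order=10.0, beta_tune=0.5, order_fraction=0.2):
    """Two-phase neighborhood schedule (illustrative numbers).

    Phase 1 (ordering):    large neighborhood, shrinking linearly.
    Phase 2 (fine-tuning): small, constant neighborhood.
    """
    t_order = int(order_fraction * n_steps)
    if t < t_order:
        return beta_order + (beta_tune - beta_order) * t / t_order
    return beta_tune
```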

Phonotopic Map
The input vectors are 15-dimensional speech samples from the Finnish language. Each vector component represents the average output power over a 10 ms interval in a certain range of the spectrum (200 Hz - 6400 Hz). The neurons are organized in an 8x12 hexagonal grid. After formation of the map, the individual neurons were calibrated to represent phonemes. The resulting map is called the phonetic typewriter.

Phonetic Typewriter
The phonetic typewriter was constructed by Teuvo Kohonen; see e.g. his book "Self-Organizing Maps", Springer, 1995.

Travelling Salesman Problem
The TSP is one of the notoriously difficult (NP-complete) combinatorial optimization problems. The so-called elastic net method can be used to solve the Euclidean version of this problem approximately (Durbin and Willshaw). To that end one uses a SOM in which the neurons are arranged in a one-dimensional cycle. http://www.patol.com/java/TSP/index.html
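
A sketch of the ring-SOM approach to the Euclidean TSP; note that the elastic net of Durbin and Willshaw uses a somewhat different update rule, and all parameter values below are assumptions for illustration.

```python
import numpy as np

def tsp_ring_som(cities, n_steps=20000, eta=0.3):
    """Approximate a Euclidean TSP tour with a SOM whose neurons form a ring.

    cities: (n_cities, 2) coordinates. Returns the city indices in tour order.
    """
    rng = np.random.default_rng(0)
    n = 4 * len(cities)                        # more neurons than cities
    W = rng.random((n, 2))                     # initial ring positions
    ring = np.arange(n)
    for t in range(n_steps):
        x = cities[rng.integers(len(cities))]  # present one city
        i = np.argmin(np.linalg.norm(W - x, axis=1))
        d = np.minimum(np.abs(ring - i), n - np.abs(ring - i))  # cyclic grid distance
        beta = max(n * (1 - t / n_steps), 1.0)                  # shrinking neighborhood
        h = np.exp(-d**2 / beta)
        W += eta * h[:, None] * (x - W)
    # Read off the tour: visit the cities in the order of their winning ring positions.
    return np.argsort([np.argmin(np.linalg.norm(W - c, axis=1)) for c in cities])
```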

Space-Filling Curves
When a SOM with a one-dimensional grid is trained on a two-dimensional input region, the weight vectors arrange themselves into space-filling curves.