Unsupervised Networks

Presentation transcript:

Unsupervised Networks
- Closely related to clustering
- Do not require target outputs for each input vector in the training data
- Inputs are connected to a two-dimensional grid of neurons
- Neighbourhood relations can be explicitly maintained, or each neuron can have lateral connections to its ‘neighbours’
- Multi-dimensional data can be mapped onto a two-dimensional surface
- Facilitates representation of clusters in the data

[Figure: Network architecture. Input signals (external stimuli) feed the input layer, which is connected via adjustable weights to the Kohonen layer (a 2D grid with lateral connections); a single node fires as the output. Only three connections are shown for clarity.]

A new input vector E = [e1, e2, ..., en], with n elements, is passed through the input neurons.
The weight vector U between the input and each Kohonen layer neuron i is Ui = [ui1, ui2, ..., uin], where uij is the weight between input j and Kohonen neuron i.
Each Kohonen layer neuron produces a value Ed = || E - Ui ||, the Euclidean distance of that neuron (in the data space) from the input vector:
Ed = sqrt( Σj (ej - uij)² )
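As an illustration (not part of the original slides), here is a minimal NumPy sketch of this distance computation; the input dimension n, the 10 x 10 grid shape, and the random values are assumptions made purely for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                              # number of input elements (example value)
grid_shape = (10, 10)              # assumed 10 x 10 Kohonen layer
E = rng.random(n)                  # new input vector E = [e1, ..., en]
U = rng.random(grid_shape + (n,))  # weight vector Ui for every Kohonen neuron

# Euclidean distance Ed = ||E - Ui|| for every neuron i in the grid
Ed = np.linalg.norm(U - E, axis=-1)
print(Ed.shape)                    # (10, 10): one distance per Kohonen neuron
```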

- Training begins with a random initialisation of the weights between the input and Kohonen layers
- Each training vector is presented to the network
- The winning neuron is found, and the winning neuron’s neighbours are identified
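A possible continuation of the sketch above, showing how the winning neuron and its neighbours could be identified; the square (Chebyshev) neighbourhood shape and the function name are assumptions, since the slides do not specify them:

```python
import numpy as np

def winner_and_neighbours(Ed, radius):
    """Return the winning neuron's grid coordinates and a boolean mask of its neighbourhood."""
    winner = np.unravel_index(np.argmin(Ed), Ed.shape)   # lowest distance wins
    rows, cols = np.indices(Ed.shape)
    # Square (Chebyshev) neighbourhood of the given radius around the winner
    mask = np.maximum(np.abs(rows - winner[0]), np.abs(cols - winner[1])) <= radius
    return winner, mask
```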

Weights for the winning neuron and its neighbours are updated so that they move closer to the input vector. The change in weights is calculated as follows:
Δuij = α (ej - uij)
where α is a learning rate parameter

Only the weights on connections to the winning neuron and its neighbours are updated. The weights are updated as follows:
uij(t+1) = uij(t) + α (ej - uij(t))
Both the learning rate and the neighbourhood size decay during training
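A hedged sketch of this update step, reusing the neighbourhood mask from the sketch above; the function name and the in-place update style are illustrative choices, not taken from the slides:

```python
import numpy as np

def update_weights(U, E, mask, alpha):
    """Apply u_ij <- u_ij + alpha * (e_j - u_ij) for the winner and its neighbours (mask)."""
    U[mask] += alpha * (E - U[mask])
    return U
```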

The learning rate is usually set to a relatively high value, such as 0.5, and is decreased as follows:
αt = α0 (1 - (t/T))
where:
- T is the total number of training iterations
- t is the current training iteration
- αt is the learning rate for the current training iteration
- α0 is the initial value of the learning rate
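A small worked illustration of this decay schedule, assuming α0 = 0.5 (the value mentioned above) and T = 1000 iterations (an arbitrary example):

```python
alpha_0 = 0.5   # assumed initial learning rate (matches the 0.5 mentioned above)
T = 1000        # assumed total number of training iterations

def learning_rate(t):
    return alpha_0 * (1 - t / T)

print(learning_rate(0))    # 0.5 at the start of training
print(learning_rate(500))  # 0.25 halfway through
print(learning_rate(999))  # ~0.0005 near the end
```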

- The neighbourhood size is also decreased iteratively
- It is initialised to take in approximately half the layer
- The neighbourhood size is reduced linearly at each epoch
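One way this schedule could look in code, assuming a 10-unit-wide layer and 20 epochs (both example values; the slides only state that the radius starts at roughly half the layer and shrinks linearly):

```python
grid_width = 10                    # assumed width of the Kohonen layer
epochs = 20                        # assumed number of training epochs
initial_radius = grid_width // 2   # start by taking in roughly half the layer

def neighbourhood_radius(epoch):
    """Linear reduction of the neighbourhood size at each epoch."""
    r = initial_radius * (1 - epoch / epochs)
    return max(int(round(r)), 0)   # shrinks towards the winner alone
```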

The Kohonen layer unit with the lowest Euclidean distance, i.e. the unit closest to the original input vector, is chosen as follows:
|| E - Uc || = min_i { || E - Ui || }
where c denotes the ‘winning’ neuron in the Kohonen layer. The ‘winning’ neuron is considered the output of the network: “winner takes all”.

An interpolation algorithm can be used so that the neuron with the lowest distance fires with a high value, and a pre-determined number of other neurons that are next closest to the data fire with lower values
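The slides do not specify the interpolation, so the following is only one plausible realisation: the k closest units fire, with activations that fall off with their distance from the input (k = 5 is an arbitrary choice):

```python
import numpy as np

def graded_activations(Ed, k=5):
    """The k closest neurons fire; activation falls off with distance from the input."""
    flat = Ed.ravel()
    out = np.zeros_like(flat)
    nearest = np.argsort(flat)[:k]               # k units closest to the input vector
    out[nearest] = 1.0 / (1.0 + flat[nearest])   # the closest unit fires with the highest value
    return out.reshape(Ed.shape)
```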

- Unsupervised architecture
- Requires no target output vectors
- Simply organises itself into the best representation for the data used in training

- Provides no information other than identifying where in the data space a particular vector lies
- This information must therefore be interpreted
- The interpretation process can be time-consuming and requires data for which the classification is known

[Figure: Kohonen network representing ‘Normal’ space, with fault data falling outside the ‘Normal’ space.]

Class labels can be applied if the data is labelled, using nearest-neighbour or voting strategies:
- Nearest Neighbour: set the class label to the most common label of the K nearest training cases
- Voting: identify all cases that are “assigned to” that neuron, and assign the most common class
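A sketch of the voting strategy under assumed data shapes (U as a rows x cols x n weight array, data as an m x n array of labelled cases); names such as label_neurons are illustrative, not from the slides:

```python
from collections import Counter
import numpy as np

def label_neurons(U, data, labels):
    """Voting: assign each labelled case to its winning neuron, then take the majority label."""
    votes = {}
    for x, y in zip(data, labels):
        distances = np.linalg.norm(U - x, axis=-1)
        winner = np.unravel_index(np.argmin(distances), distances.shape)
        votes.setdefault(winner, []).append(y)
    return {neuron: Counter(ys).most_common(1)[0][0] for neuron, ys in votes.items()}
```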

If labelled data is available, it can be used to improve the distribution of neurons:
- Move neurons towards correctly-classified cases
- Move away from incorrectly-classified cases
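A hedged sketch of this idea in the spirit of learning vector quantization: the winning neuron is pulled towards a correctly classified case and pushed away from a misclassified one. The step size alpha and the neuron_labels mapping (e.g. produced by the labelling step above) are assumptions:

```python
import numpy as np

def lvq_step(U, neuron_labels, x, y, alpha=0.1):
    """Pull the winner towards a correctly classified case; push it away otherwise."""
    distances = np.linalg.norm(U - x, axis=-1)
    winner = np.unravel_index(np.argmin(distances), distances.shape)
    sign = 1.0 if neuron_labels.get(winner) == y else -1.0
    U[winner] += sign * alpha * (x - U[winner])
    return U
```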

Unsupervised learning requires no class labelling of data:
- Discover clusters (and then possibly label them)
- Visualisation
- Novelty detection