
Unsupervised learning

Summary from last week We explained what local minima are, and described ways of escaping them. We investigated how the backpropagation algorithm can be improved by changing various parameters and re-training.

Unsupervised learning Supervised learning = a 'teacher' presents input patterns together with the desired target results. Unsupervised learning = input patterns are presented, but there is no 'teaching signal'. Self-organisation = the network is shown patterns to be classified and produces its own output representation.

Three properties required The value of a neuron's output is used as a measure of similarity between the input pattern and the pattern stored in that neuron. A competitive learning strategy selects the neuron with the largest response. A method of reinforcing the largest response.

Self-organising maps (SOMs) Inspiration from biology: in the auditory pathway, nerve cells are arranged in relation to their frequency response (tonotopic organisation). Kohonen took inspiration from this to produce self-organising maps (SOMs). In a SOM, units located physically next to one another respond to input vectors that are 'similar'.

SOMs Useful, as it is difficult for humans to visualise data with more than 3 dimensions. High-dimensional input vectors are 'projected down' onto a 2-D map in a way that maintains the natural order of similarity. A SOM is a 2-D array of neurons, with all inputs arriving at all neurons (see Fig.).

SOMs Initially each neuron has its own set of (random) weights. When an input arrives, the neuron whose pattern of weights is most similar to the input gives the largest response.

SOMs There is positive excitatory feedback between a SOM unit and its nearest neighbours, which causes all the units in the 'neighbourhood' of the winning unit to learn. As the distance from the winning unit increases, the degree of excitation falls until it becomes inhibition. A bubble of activity (the neighbourhood) forms around the unit with the largest net input (the Mexican-Hat function, see Fig.).
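
As a rough illustration (the widths and scales below are my own assumptions, not values from the lecture), the Mexican-Hat interaction can be sketched as a difference of Gaussians over grid distance:

```python
import numpy as np

def mexican_hat(dist, sigma_exc=1.0, sigma_inh=3.0):
    """Lateral interaction strength as a function of grid distance:
    positive (excitatory) near the winner, negative (inhibitory)
    further away, approximated as a difference of Gaussians."""
    excitation = np.exp(-dist**2 / (2 * sigma_exc**2))
    inhibition = 0.5 * np.exp(-dist**2 / (2 * sigma_inh**2))
    return excitation - inhibition

# Interaction felt by units 0..9 grid steps away from the winner.
print(mexican_hat(np.arange(10.0)))
```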

SOMs Initially each weight is set to a random number. The Euclidean distance D is used to find the difference between an input vector and the weights of a SOM unit (the square root of the sum of the squared differences): $D = \sqrt{\sum_i (x_i - w_i)^2}$

SOMs For a 2-dimensional problem, the distance calculated in each neuron j is: $D_j = \sqrt{(x_1 - w_{1j})^2 + (x_2 - w_{2j})^2}$
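
A minimal NumPy sketch of this distance computation, vectorised over all units (the array names and toy values are mine):

```python
import numpy as np

def distances(x, weights):
    """Euclidean distance D between input vector x and the weight
    vector of every SOM unit; weights has shape (n_units, n_inputs)."""
    return np.sqrt(np.sum((weights - x) ** 2, axis=1))

# 2-D example: four units, input vector (x1, x2).
weights = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5], [0.9, 0.9]])
x = np.array([0.7, 0.3])
D = distances(x, weights)          # one D per neuron
print(D, "winner:", np.argmin(D))  # the lowest D wins
```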

The input vector is simultaneously compared to all elements in the network; the one with the lowest D is the winner. The weights of all units in the neighbourhood around the winning unit are updated. As learning proceeds, the size of the neighbourhood is diminished until it contains only a single unit. If the winner is 'c', the neighbourhood is defined by the Mexican-Hat function around 'c' (see Fig.).
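
A sketch of the neighbourhood test on a small square grid (the 3x3 layout and the radius value are illustrative assumptions):

```python
import numpy as np

def neighbourhood_mask(grid_pos, winner_idx, radius):
    """Select the units whose grid distance from the winner 'c' is
    within the current neighbourhood radius Nc; grid_pos holds each
    unit's (row, col) position on the 2-D map."""
    d = np.linalg.norm(grid_pos - grid_pos[winner_idx], axis=1)
    return d <= radius

# 3x3 map: grid coordinates of the 9 units, winner at the centre.
grid_pos = np.array([(r, c) for r in range(3) for c in range(3)], dtype=float)
mask = neighbourhood_mask(grid_pos, winner_idx=4, radius=1.0)
print(mask.reshape(3, 3))  # the centre unit and its 4 nearest neighbours
```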

SOMs The weights of the units are adjusted using: $\Delta w_{ij} = k (x_i - w_{ij}) Y_j$, where $Y_j$ comes from the Mexican-Hat function (controlled by the neighbourhood size $N_c$).
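
A minimal sketch of this update rule, assuming for simplicity that $Y_j$ is 1 inside the neighbourhood and 0 outside (a crude stand-in for the full Mexican-Hat profile):

```python
import numpy as np

def update_weights(weights, x, k, Y):
    """Apply delta_w = k * (x - w) * Y to every unit: each weight
    vector with Y > 0 moves slightly toward the input vector x."""
    return weights + k * Y[:, None] * (x - weights)

# Y is 1 for units in the winner's neighbourhood, 0 elsewhere.
weights = np.random.rand(9, 2)
x = np.array([0.7, 0.3])
Y = np.zeros(9)
Y[[1, 3, 4, 5, 7]] = 1.0
weights = update_weights(weights, x, k=0.5, Y=Y)
```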

SOMs k is a value which changes over time (high at the start of training, low later on). If a unit lies within the neighbourhood of the winning unit, its weight is changed by the difference between its weight vector and the input vector x, multiplied by the time factor k and the function $Y_j$. Each weight vector being updated rotates slightly toward the input vector x.

Two distinct phases in training Initial ordering phase: the units find the correct topological order (this might take around 1000 iterations, during which k decreases from 0.9 to 0.01 and $N_c$ decreases from half the diameter of the network to 1 unit). Final convergence phase: the accuracy of the weights improves (k may decrease from 0.01 to 0 while $N_c$ stays at 1 unit; this phase can be 10 to 100 times longer, depending on the desired accuracy).
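
The following sketch encodes such a schedule, using the figures quoted above and an assumed map diameter of 10 units (so $N_c$ starts at 5):

```python
ORDERING_STEPS, CONVERGENCE_STEPS = 1000, 10000

def schedule(t):
    """Return (k, Nc) at iteration t: during ordering, k falls from
    0.9 to 0.01 and Nc shrinks from half the map diameter to 1;
    during convergence, k falls from 0.01 to 0 with Nc fixed at 1."""
    if t < ORDERING_STEPS:
        frac = t / ORDERING_STEPS
        return 0.9 + frac * (0.01 - 0.9), 5.0 + frac * (1.0 - 5.0)
    frac = (t - ORDERING_STEPS) / CONVERGENCE_STEPS
    return 0.01 * (1.0 - frac), 1.0

print(schedule(0), schedule(999), schedule(6000))
```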

Examples In notes: a 2-D array of elements arranged in a square maps a rectangular 2-D coordinate space onto the array, where the units learn to recognise their relative positions in two-dimensional space. Mapping world poverty (shown on video). Credit card fraud detection.

SOMs It is possible to identify which regions of the map belong to which class by showing the network known patterns and seeing which areas become active.
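
One way this calibration step might look in code (a sketch, under the assumption that 'most active' means the lowest distance D):

```python
import numpy as np

def calibrate(weights, known_inputs, labels):
    """Show the trained map patterns of known class and record which
    unit responds most strongly to each; regions of the map can then
    be labelled with the class of the patterns that activate them."""
    label_of_unit = {}
    for x, lab in zip(known_inputs, labels):
        winner = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))
        label_of_unit.setdefault(winner, lab)
    return label_of_unit

weights = np.random.rand(9, 2)                  # a (toy) trained map
known = np.array([[0.1, 0.1], [0.9, 0.9]])
print(calibrate(weights, known, ["class A", "class B"]))
```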

Feature map classifier Has one or more additional layers of units forming an output layer, which can be trained by several methods (including backpropagation) to produce a particular output given a particular pattern of activation on the SOM (see Fig.).
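
A hedged sketch of the idea, simplifying the SOM's activation pattern to a one-hot vector marking the winning unit (the lecture does not specify the activation coding, so this is an assumption):

```python
import numpy as np

def som_activation(weights, x):
    """Activation pattern of the SOM for input x, simplified here
    to a one-hot vector marking the winning unit."""
    a = np.zeros(len(weights))
    a[np.argmin(np.sum((weights - x) ** 2, axis=1))] = 1.0
    return a

# Output layer: a linear map from SOM units to class scores; W_out
# could be trained by any supervised rule (e.g. backpropagation).
som_weights = np.random.rand(9, 2)
W_out = np.random.rand(3, 9) * 0.1              # 3 output classes
scores = W_out @ som_activation(som_weights, np.array([0.7, 0.3]))
print(scores)
```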

Neural phonetic typewriter (1986) Can transcribe speech into written text from an unlimited (Finnish) vocabulary in real time, with an accuracy of 92-97%. A 2-D array of units is trained using 15-D inputs derived from pre-processed speech. The units in the 2-D array are allowed to organise themselves in response to the input vectors.

Neural phonetic typewriter (1986) After training, the SOM is calibrated using the spectra of phonemes as inputs. The path of winning units across the network results in a phonetic transcription of the word. This is used as input to a rule-based system, to be compared with known words.

Summary Defined unsupervised learning, where no external 'teacher' is present. Discussed a self-organising neural network called the Self-Organising Map (SOM). A SOM uses unsupervised learning to physically arrange its neurons so that similar stored patterns are close to each other and dissimilar patterns are far apart.