Neural Networks Chapter 9 Joost N. Kok Universiteit Leiden

Unsupervised Competitive Learning
– Competitive learning
– Winner-take-all units
– Cluster/categorize input data
– Feature mapping

Unsupervised Competitive Learning
[Figure: an n-dimensional input layer fully connected to a layer of output units (labeled 1, 2, 3); the winner unit is highlighted]

Simple Competitive Learning
– Winner: the output unit whose weight vector is closest to the input, i* = argmin_i ||w_i − x||
– Winner-take-all selection can be implemented by lateral inhibition

Simple Competitive Learning
– Update weights for the winning neuron only: Δw_i* = η (x − w_i*), pulling the winner’s weight vector towards the input

Simple Competitive Learning
– Update rule for all neurons: Δw_i = η δ_i,i* (x − w_i), where δ_i,i* = 1 if unit i is the winner and 0 otherwise (a minimal sketch follows)
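A minimal sketch of this rule in Python/NumPy; the unit count, learning rate, and toy data are illustrative assumptions, not values from the slides:

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, eta = 5, 0.1                  # number of output units, learning rate (assumed)
    W = rng.random((n_units, 2))           # one weight vector per output unit
    X = rng.random((1000, 2))              # toy 2-D input patterns

    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest weight vector wins
        W[winner] += eta * (x - W[winner])                 # delta_{i,i*} picks out the winner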

Graph Bipartitioning
– Patterns: edges presented as dipole stimuli
– Two output units (one per partition)

Simple Competitive Learning
Dead Unit Problem: some units may never win, and therefore never learn. Solutions:
– Initialize weights to samples from the input
– Leaky learning: also update the weights of the losers, but with a smaller learning rate (see the sketch after this list)
– Arrange neurons in a geometrical way: also update the neighbors of the winner
– Turn on input patterns gradually
– Conscience mechanism
– Add noise to input patterns
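A hedged sketch of leaky learning; the two rates and the toy data are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    n_units = 5
    eta_winner, eta_loser = 0.1, 0.01      # losers also learn, but much more slowly
    W = rng.random((n_units, 2))

    for x in rng.random((1000, 2)):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        for i in range(n_units):
            rate = eta_winner if i == winner else eta_loser
            W[i] += rate * (x - W[i])      # every unit drifts toward the data, so none stays dead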

Vector Quantization
– Classes are represented by prototype vectors
– The prototypes partition the input space into a Voronoi tessellation

Learning Vector Quantization
– Labelled sample data (supervised refinement of the prototypes)
– Update rule depends on the current classification: move the nearest prototype towards the sample if their labels match, and away from it otherwise (a sketch follows)
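A minimal LVQ1-style sketch of that rule; the prototypes, labels, and rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    eta = 0.05
    protos = rng.random((4, 2))            # prototype vectors
    proto_labels = np.array([0, 0, 1, 1])  # class assigned to each prototype

    def lvq1_step(x, label):
        nearest = np.argmin(np.linalg.norm(protos - x, axis=1))
        sign = 1.0 if proto_labels[nearest] == label else -1.0  # attract or repel
        protos[nearest] += sign * eta * (x - protos[nearest])

    lvq1_step(np.array([0.2, 0.1]), label=0)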

Adaptive Resonance Theory
– Stability-Plasticity Dilemma
– Supply of neurons; only use them if needed
– Notion of “sufficiently similar” (controlled by a vigilance parameter)

Adaptive Resonance Theory
– Start with all weights = 1
– Enable all output units
– Find the winner among the enabled units
– Test the match; if the winner is not sufficiently similar, disable it and search again
– Update the weights of the accepted winner
(A simplified sketch of this search loop follows.)
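A simplified ART1-style sketch, not the full algorithm; the vigilance value and the overlap-based choice rule are assumptions, and inputs are assumed to be nonzero binary vectors:

    import numpy as np

    rho = 0.7                              # vigilance threshold (assumed value)
    prototypes = []                        # binary prototypes, recruited on demand

    def art1_present(x):
        # Try candidates in order of overlap with the input (simplified choice rule).
        order = sorted(range(len(prototypes)), key=lambda j: -np.sum(x & prototypes[j]))
        for j in order:
            if np.sum(x & prototypes[j]) / np.sum(x) >= rho:   # match test against vigilance
                prototypes[j] &= x                             # resonance: learn by intersection
                return j
        prototypes.append(x.copy())        # nothing sufficiently similar: recruit a new unit
        return len(prototypes) - 1

    cluster = art1_present(np.array([1, 1, 0, 1], dtype=int))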

Feature Mapping
– Geometrical arrangement of output units
– Nearby outputs correspond to nearby input patterns
– Feature Map: a topology preserving map

Self Organizing Map
– Determine the winner: the neuron whose weight vector has the smallest distance to the input vector
– Move the weight vector w of the winning neuron towards the input i
[Figure: before learning, w points away from the input i; after learning, w has moved towards i]

Self Organizing Map
– Impose a topological order onto the competitive neurons (e.g., a rectangular map)
– Let neighbors of the winner share the “prize” (the “postcode lottery” principle)
– After learning, neurons with similar weights tend to cluster on the map

Self Organizing Map

– Input: uniformly randomly distributed points
– Output: map of 20 × 20 neurons
– Training: starting with a large learning rate and neighborhood size, both are gradually decreased to facilitate convergence

Self Organizing Map

Feature Mapping
– Retinotopic Map
– Somatosensory Map
– Tonotopic Map

Feature Mapping

Kohonen’s Algorithm
– Update all neurons towards the input, weighted by a neighborhood function around the winner i*: Δw_j = η h(j, i*) (x − w_j)
– A common choice is a Gaussian neighborhood h(j, i*) = exp(−d(j, i*)² / 2σ²), where d is the distance between units j and i* on the map
– Both η and σ shrink over time, as in the demo above (a sketch follows)
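A compact sketch of the algorithm on the 20 × 20 demo setup; the decay schedules and constants are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    side, n_steps = 20, 10000
    # Grid coordinates of each unit on the 20 x 20 map.
    grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)), axis=-1).reshape(-1, 2)
    W = rng.random((side * side, 2))        # one 2-D weight vector per map unit

    for t in range(n_steps):
        frac = t / n_steps
        eta = 0.5 * (0.01 / 0.5) ** frac    # learning rate decays from 0.5 to 0.01
        sigma = 10.0 * (1.0 / 10.0) ** frac # neighborhood radius decays from 10 to 1
        x = rng.random(2)                   # uniformly distributed input point
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        d2 = np.sum((grid - grid[winner]) ** 2, axis=1)   # squared distances on the map
        h = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighborhood
        W += eta * h[:, None] * (x - W)     # every unit moves, weighted by its neighborhood value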

Travelling Salesman Problem
– A SOM with a ring topology can approximate a tour: city coordinates are the inputs, and the ring of neurons contracts onto a short round trip

Hybrid Learning Schemes
– Combine an unsupervised first layer with a supervised second layer

Counterpropagation
– First layer uses standard competitive learning
– Second (output) layer is trained using the delta rule
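A hedged sketch of the second training stage; the layer sizes, rate, and winner-take-all coding are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 2, 10, 1
    W = rng.random((n_hidden, n_in))       # competitive first layer (trained as above)
    V = np.zeros((n_out, n_hidden))        # supervised second layer
    eta = 0.1

    def train_output(x, target):
        h = np.zeros(n_hidden)
        h[np.argmin(np.linalg.norm(W - x, axis=1))] = 1.0   # winner-take-all code
        y = V @ h
        V += eta * np.outer(target - y, h)                  # delta rule on the output layer

    train_output(np.array([0.3, 0.7]), target=np.array([1.0]))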

Radial Basis Functions
– First layer with normalized Gaussian activation functions: g_j(x) = exp(−||x − c_j||² / 2σ²) / Σ_k exp(−||x − c_k||² / 2σ²)
– The second layer is then trained with a supervised rule, as in counterpropagation (a sketch of the first layer follows)
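A small sketch of such a normalized first layer; the centers and the shared width are illustrative assumptions:

    import numpy as np

    centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # basis-function centers
    sigma = 0.5                                               # shared width (assumed)

    def rbf_layer(x):
        g = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
        return g / g.sum()                 # normalization: activations sum to 1

    activations = rbf_layer(np.array([0.5, 0.5]))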