Competitive Networks.


Competitive Networks

Hamming Network

Layer 1 (Correlation) We want the network to recognize a set of prototype vectors $\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_Q$. The first layer weight matrix and bias vector are given by:

$$W^1 = \begin{bmatrix} {}_1\mathbf{w}^T \\ {}_2\mathbf{w}^T \\ \vdots \\ {}_S\mathbf{w}^T \end{bmatrix} = \begin{bmatrix} \mathbf{p}_1^T \\ \mathbf{p}_2^T \\ \vdots \\ \mathbf{p}_Q^T \end{bmatrix}, \qquad \mathbf{b}^1 = \begin{bmatrix} R \\ R \\ \vdots \\ R \end{bmatrix}$$

where $R$ is the number of elements in the input vector. The response of the first layer is:

$$\mathbf{a}^1 = W^1\mathbf{p} + \mathbf{b}^1 = \begin{bmatrix} \mathbf{p}_1^T\mathbf{p} + R \\ \mathbf{p}_2^T\mathbf{p} + R \\ \vdots \\ \mathbf{p}_Q^T\mathbf{p} + R \end{bmatrix}$$

The prototype closest to the input vector produces the largest response.
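A minimal NumPy sketch of this correlation layer, assuming bipolar (±1) prototype vectors; the function name and the example vectors are invented for illustration:

```python
import numpy as np

def hamming_layer1(prototypes, p):
    """Feedforward (correlation) layer of a Hamming network.

    prototypes: (Q, R) array whose rows are the prototype vectors.
    p: (R,) input vector.
    Returns a1 = W1 @ p + b1, where every element of b1 equals R.
    """
    W1 = np.asarray(prototypes, dtype=float)   # each row is one prototype
    R = W1.shape[1]                            # number of input elements
    b1 = np.full(W1.shape[0], R)               # bias vector of R's
    return W1 @ p + b1

# Two hypothetical +/-1 prototypes of length 3
prototypes = np.array([[1, -1, -1],
                       [1,  1, -1]])
print(hamming_layer1(prototypes, np.array([1, -1, 1])))  # largest entry = closest prototype
```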

Layer 2 (Competition) The second layer is initialized with the output of the first layer. The neuron with the largest initial condition will win the competition.
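A sketch of the recurrent competition, assuming the usual layer-2 weights of 1 on the diagonal and a small $-\varepsilon$ off the diagonal; the function name and the value of eps are illustrative:

```python
import numpy as np

def hamming_layer2(a1, eps=0.1, max_iter=100):
    """Recurrent (competitive) layer of a Hamming network.

    Repeatedly applies a2(t+1) = poslin(W2 @ a2(t)) until only one
    neuron keeps a positive output.
    """
    a = np.asarray(a1, dtype=float).copy()
    S = a.size
    W2 = (1 + eps) * np.eye(S) - eps * np.ones((S, S))  # 1 on diagonal, -eps elsewhere
    for _ in range(max_iter):
        a = np.maximum(0.0, W2 @ a)          # poslin transfer function
        if np.count_nonzero(a) <= 1:
            break
    return a                                  # only the winning neuron stays nonzero

print(hamming_layer2(np.array([4.0, 2.0])))   # neuron 0 wins
```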

Competitive Layer

$$\mathbf{a} = \mathrm{compet}(\mathbf{n}), \qquad \mathbf{n} = W\mathbf{p} = \begin{bmatrix} {}_1\mathbf{w}^T\mathbf{p} \\ {}_2\mathbf{w}^T\mathbf{p} \\ \vdots \\ {}_S\mathbf{w}^T\mathbf{p} \end{bmatrix}$$

If the weight vectors and the input vector are normalized to the same length $L$, each net input is proportional to the cosine of the angle between the input and the corresponding weight vector, $n_i = {}_i\mathbf{w}^T\mathbf{p} = L^2\cos\theta_i$, so the neuron whose weight vector points closest to the input wins the competition.

Competitive Learning Instar Rule:

$${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\, a_i(q)\,\big(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\big)$$

For the competitive network, the winning neuron has an output of 1, and the other neurons have an output of 0, so only the winning neuron's weight vector is updated. Kohonen Rule (the instar rule applied to the winning neuron $i^*$):

$${}_{i^*}\mathbf{w}(q) = {}_{i^*}\mathbf{w}(q-1) + \alpha\,\big(\mathbf{p}(q) - {}_{i^*}\mathbf{w}(q-1)\big) = (1-\alpha)\,{}_{i^*}\mathbf{w}(q-1) + \alpha\,\mathbf{p}(q)$$
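A minimal sketch of competitive learning with the Kohonen rule, using distance to pick the winner (equivalent to the largest inner product when weight and input vectors are normalized); the function name, initialization scheme, and hyperparameters are assumptions:

```python
import numpy as np

def competitive_learning(X, S, alpha=0.1, epochs=10, rng=None):
    """Competitive learning sketch: S neurons cluster the rows of X.

    At each presentation the neuron whose weight vector is closest to the
    input wins, and only that weight vector is moved toward the input.
    """
    rng = np.random.default_rng(rng)
    W = X[rng.choice(len(X), S, replace=False)].astype(float)  # init from the data
    for _ in range(epochs):
        for p in X:
            i_star = np.argmin(np.linalg.norm(W - p, axis=1))  # winning neuron
            W[i_star] += alpha * (p - W[i_star])                # Kohonen rule
    return W
```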

Graphical Representation

Example

Four Iterations

Typical Convergence (Clustering) (Figure: weight vectors and input vectors, before training and after training.)

Dead Units One problem with competitive learning is that a neuron whose initial weight vector is far from any of the input vectors may never win the competition (a "dead unit"). Solution: add a negative bias to each neuron's net input, and increase the magnitude of that bias each time the neuron wins. This makes it harder for a neuron that has won often to keep winning. This mechanism is called a "conscience."
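A sketch of the same training loop with a conscience bias added; the decrement rate beta and the exact form of the bias update are assumptions chosen only to illustrate the idea:

```python
import numpy as np

def competitive_learning_conscience(X, S, alpha=0.1, beta=0.01, epochs=10, rng=None):
    """Competitive learning with a 'conscience' bias (sketch).

    Each neuron carries a non-positive bias added to its net input
    (here the negative distance); the winner's bias is made more negative,
    so frequent winners are handicapped and dead units get a chance to win.
    """
    rng = np.random.default_rng(rng)
    W = X[rng.choice(len(X), S, replace=False)].astype(float)
    b = np.zeros(S)                                   # conscience biases (<= 0)
    for _ in range(epochs):
        for p in X:
            n = -np.linalg.norm(W - p, axis=1) + b    # net input: -distance + bias
            i_star = np.argmax(n)
            W[i_star] += alpha * (p - W[i_star])      # Kohonen rule for the winner
            b[i_star] -= beta                         # winner's bias grows in magnitude
    return W
```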

Stability If the input vectors don't fall into nice clusters, then for large learning rates the presentation of each input vector may keep modifying the configuration, so that the weight vectors never settle down and the system undergoes continual evolution. (Figure: weight vectors 1w and 2w at iterations 0 and 8, plotted among the input vectors p1–p8.)

Competitive Layers in Biology On-Center/Off-Surround Connections for Competition. Weights in the competitive layer of the Hamming network:

$$w_{ij} = \begin{cases} 1, & \text{if } i = j \\ -\varepsilon, & \text{if } i \neq j \end{cases}$$

Weights assigned based on distance (on-center/off-surround):

$$w_{ij} = \begin{cases} 1, & \text{if } d_{ij} = 0 \\ -\varepsilon, & \text{if } d_{ij} > 0 \end{cases}$$

Mexican-Hat Function

Feature Maps Update the weight vectors in a neighborhood of the winning neuron:

$${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\,\big(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\big) \quad \text{for } i \in N_{i^*}(d)$$

where $N_{i^*}(d)$ is the set of neurons within distance $d$ of the winning neuron $i^*$ on the feature-map grid.
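A minimal sketch of a one-dimensional feature map with a fixed rectangular neighborhood; in practice the learning rate and neighborhood size usually shrink over training, and the names and hyperparameters here are assumptions:

```python
import numpy as np

def som_1d(X, S, alpha=0.1, radius=1, epochs=10, rng=None):
    """1-D self-organizing (feature) map sketch.

    Neurons sit on a line; the winner and every neuron within `radius`
    grid positions of it are moved toward the current input.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((S, X.shape[1]))
    for _ in range(epochs):
        for p in X:
            i_star = np.argmin(np.linalg.norm(W - p, axis=1))    # winning neuron
            neighbors = np.abs(np.arange(S) - i_star) <= radius  # neighborhood N_{i*}(radius)
            W[neighbors] += alpha * (p - W[neighbors])            # Kohonen rule on the neighborhood
    return W
```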

Example

Convergence

Learning Vector Quantization In the first layer of the LVQ network, the net input is not computed by taking an inner product of the prototype vectors with the input. Instead, the net input is the negative of the distance between each prototype vector and the input:

$$n^1_i = -\|{}_i\mathbf{w}^1 - \mathbf{p}\|$$

Subclass For the LVQ network, the winning neuron in the first layer indicates the subclass to which the input vector belongs. There may be several different neurons (subclasses) making up each class. The second layer of the LVQ network combines subclasses into a single class. The columns of W2 represent subclasses, and the rows represent classes. W2 has a single 1 in each column, with the other elements set to zero. The row in which the 1 occurs indicates which class that subclass belongs to.
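A sketch of the LVQ forward pass under these conventions; the function name and array shapes are assumptions:

```python
import numpy as np

def lvq_forward(W1, W2, p):
    """Forward pass of an LVQ network (sketch).

    W1: (S1, R) prototype (subclass) weight vectors.
    W2: (S2, S1) 0/1 matrix mapping subclasses (columns) to classes (rows).
    Layer 1 uses negative distance as the net input followed by a
    winner-take-all competition; layer 2 relabels the winning subclass
    with its class.
    """
    n1 = -np.linalg.norm(W1 - p, axis=1)      # n1_i = -||iw1 - p||
    a1 = np.zeros_like(n1)
    a1[np.argmax(n1)] = 1.0                   # compet: winner gets 1, others 0
    a2 = W2 @ a1                              # one-hot class indicator
    return a1, a2
```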

Example
• Subclasses 1, 3 and 4 belong to class 1.
• Subclass 2 belongs to class 2.
• Subclasses 5 and 6 belong to class 3.
A single-layer competitive network can create convex classification regions. The second layer of the LVQ network can combine the convex regions to create more complex categories.
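Written out with one column per subclass and one row per class, the second-layer weight matrix implied by the assignment above is:

$$W^2 = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}$$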

LVQ Learning LVQ learning combines competitive learning with supervision. It requires a training set of examples of proper network behavior. If the input pattern is classified correctly, then move the winning weight vector toward the input vector according to the Kohonen rule:

$${}_{i^*}\mathbf{w}^1(q) = {}_{i^*}\mathbf{w}^1(q-1) + \alpha\,\big(\mathbf{p}(q) - {}_{i^*}\mathbf{w}^1(q-1)\big)$$

If the input pattern is classified incorrectly, then move the winning weight vector away from the input vector:

$${}_{i^*}\mathbf{w}^1(q) = {}_{i^*}\mathbf{w}^1(q-1) - \alpha\,\big(\mathbf{p}(q) - {}_{i^*}\mathbf{w}^1(q-1)\big)$$
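A sketch of one LVQ learning step combining the two cases above; the function name and hyperparameters are assumptions:

```python
import numpy as np

def lvq1_step(W1, W2, p, t, alpha=0.1):
    """One LVQ learning step (sketch); updates W1 in place.

    p: input vector, t: one-hot target class vector.
    The winning prototype moves toward the input if the network classifies
    the input correctly, and away from it otherwise.
    """
    n1 = -np.linalg.norm(W1 - p, axis=1)
    i_star = np.argmax(n1)                          # winning subclass neuron
    predicted_class = np.argmax(W2[:, i_star])      # class of the winning subclass
    if predicted_class == np.argmax(t):
        W1[i_star] += alpha * (p - W1[i_star])      # correct: move toward the input
    else:
        W1[i_star] -= alpha * (p - W1[i_star])      # incorrect: move away from the input
    return W1
```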

Example Training set: pairs $\{\mathbf{p}_q, \mathbf{t}_q\}$ of input vectors and one-hot target vectors indicating the class of each input.

First Iteration

$$\mathbf{a}^1 = \mathrm{compet}(\mathbf{n}^1) = \mathrm{compet}\!\left(\begin{bmatrix} -\|{}_1\mathbf{w}^1 - \mathbf{p}\| \\ -\|{}_2\mathbf{w}^1 - \mathbf{p}\| \\ \vdots \\ -\|{}_S\mathbf{w}^1 - \mathbf{p}\| \end{bmatrix}\right)$$

The prototype closest to the input produces the largest (least negative) net input, so the corresponding neuron wins the first-layer competition.

Second Layer The second layer maps the winning subclass to its class: $\mathbf{a}^2 = W^2\mathbf{a}^1$. This is the correct class, so the winning weight vector is moved toward the input vector.

Figure

Final Decision Regions

LVQ2 If the winning neuron in the hidden layer incorrectly classifies the current input, we move its weight vector away from the input vector, as before. However, we also adjust the weights of the closest neuron to the input vector that does classify it properly; the weights of this second neuron are moved toward the input vector. So when the network classifies an input vector correctly, the weights of only one neuron are moved toward the input vector; when the input vector is classified incorrectly, the weights of two neurons are updated: one weight vector is moved away from the input vector and the other is moved toward it. The resulting algorithm is called LVQ2.
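A sketch of an LVQ2-style update step under this description; names and hyperparameters are assumptions, and ties or classes with no prototype are ignored:

```python
import numpy as np

def lvq2_step(W1, W2, p, t, alpha=0.1):
    """One LVQ2-style learning step (sketch); updates W1 in place.

    On a correct classification only the winner moves toward the input.
    On an error the winner moves away from the input and the closest
    prototype belonging to the correct class moves toward it.
    """
    d = np.linalg.norm(W1 - p, axis=1)              # distances to all prototypes
    i_star = np.argmin(d)                           # winning (closest) prototype
    classes = np.argmax(W2, axis=0)                 # class label of each prototype
    target = np.argmax(t)
    if classes[i_star] == target:
        W1[i_star] += alpha * (p - W1[i_star])      # correct: reinforce the winner
    else:
        W1[i_star] -= alpha * (p - W1[i_star])      # wrong: push the winner away
        correct = np.where(classes == target)[0]    # prototypes of the correct class
        j = correct[np.argmin(d[correct])]          # closest correct-class prototype
        W1[j] += alpha * (p - W1[j])                # pull it toward the input
    return W1
```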

LVQ2 Example