Published by Myron Francis. Modified over 9 years ago.
IE 585 Competitive Network – Learning Vector Quantization & Counterpropagation
LVQ
Supervised version of SOM
Same architecture as SOM
Used for pattern classification
The weight vector of an output neuron is a “reference” vector for the class that neuron represents
LVQ – cont.
A set of training patterns with known classifications is provided, along with an initial distribution of reference vectors (each of which represents a known classification)
After training, LVQ classifies an input vector by assigning it to the class of the output neuron whose weight vector is closest to the input vector
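The classification step above reduces to a nearest-reference-vector lookup. A minimal sketch (function names and the example reference vectors are hypothetical, not from the slides):

```python
import math

def classify(x, reference_vectors, labels):
    """Assign x to the class of the nearest reference (weight) vector."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    winner = min(range(len(reference_vectors)),
                 key=lambda j: dist(x, reference_vectors[j]))
    return labels[winner]

# Hypothetical trained reference vectors, one per class
refs = [[0.0, 0.0], [1.0, 1.0]]
labels = ["A", "B"]
print(classify([0.9, 0.8], refs, labels))  # nearest to [1, 1] -> "B"
```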
Procedure of LVQ
Same as for the SOM, except:
if the winning neuron is “correct” (its class matches the input’s class), use the usual update: wnew = wold + α(x − wold)
if the winning neuron is “incorrect”, reverse the sign: wnew = wold − α(x − wold)
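The two update rules above can be sketched as a single function (a minimal illustration; the function name and example values are assumptions):

```python
def lvq_update(w, x, alpha, correct):
    """One LVQ weight update for the winning neuron's reference vector.

    Pull the vector toward x when the winner's class is correct,
    push it away from x when the winner's class is incorrect.
    """
    sign = 1.0 if correct else -1.0
    return [wi + sign * alpha * (xi - wi) for wi, xi in zip(w, x)]

w = [0.5, 0.5]
print(lvq_update(w, [1.0, 0.0], 0.1, correct=True))   # moves toward [1, 0]
print(lvq_update(w, [1.0, 0.0], 0.1, correct=False))  # moves away from [1, 0]
```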
Ways to Initialize the Weight Vectors
Take the first m training vectors as the weight vectors; the remainder is used for training
Assign the initial weights and classifications randomly
Determine them with k-means clustering or a SOM
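The first initialization option can be sketched as a simple split of the labeled training data (the function name is hypothetical):

```python
def init_reference_vectors(training, m):
    """Use the first m (vector, class) training pairs as labeled
    reference vectors; the remainder becomes the training set."""
    refs = [x for x, _ in training[:m]]
    ref_labels = [c for _, c in training[:m]]
    remainder = training[m:]
    return refs, ref_labels, remainder

data = [([0.0, 0.0], "A"), ([1.0, 1.0], "B"), ([0.2, 0.1], "A")]
refs, labels, rest = init_reference_vectors(data, 2)
print(refs, labels, rest)
```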
Example of LVQ
Variations of LVQ
LVQ2 & LVQ3 [Kohonen, 1990]
Allow two vectors to learn at once (the winner and a runner-up)
LVQ3 scales the learning rate with a momentum-like factor to prevent the reference vectors from moving away from their optimal placement
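One hedged sketch of the two-vector update in the LVQ2 style: when the winner is the wrong class and the runner-up is the correct class (and the input falls near the boundary between them), the winner is pushed away and the runner-up pulled in. Function and variable names are assumptions, and the window test that gates this update is omitted:

```python
def lvq2_update(w_winner, w_runner, x, alpha):
    """LVQ2-style paired update, applied when the winner has the wrong
    class and the runner-up has the correct class: push the winner
    away from x, pull the runner-up toward x."""
    new_winner = [wi - alpha * (xi - wi) for wi, xi in zip(w_winner, x)]
    new_runner = [wi + alpha * (xi - wi) for wi, xi in zip(w_runner, x)]
    return new_winner, new_runner

nw, nr = lvq2_update([0.5, 0.5], [0.4, 0.4], [1.0, 0.0], 0.1)
print(nw, nr)  # winner repelled, runner-up attracted
```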
Counterpropagation
Developed by Hecht-Nielsen, 1987
Used for compressing data, approximating functions, or associating patterns
Two types – full and forward-only
Architecture of Forward-Only Counterpropagation
Input layer (x) → weight matrix v → Cluster layer of Kohonen neurons (y) → weight matrix w → Output layer (z)
Procedure of the Counterpropagation Net
Two phases of learning:
Phase 1 – SOM (unsupervised learning)
Phase 2 – Grossberg outstar learning (supervised learning)
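The two phases can be sketched end to end for a forward-only net. This is a minimal illustration, not the canonical algorithm: the parameter names (alpha, beta, epochs), the fixed learning rates, and the plain winner-take-all clustering in place of a full SOM neighborhood are all assumptions.

```python
import random

def train_cpn(samples, targets, n_clusters, alpha=0.3, beta=0.1, epochs=20):
    """Forward-only counterpropagation sketch.

    Phase 1: Kohonen (winner-take-all) clustering of the inputs via v.
    Phase 2: Grossberg outstar learning of the cluster->output weights w.
    """
    random.seed(0)
    dim_in, dim_out = len(samples[0]), len(targets[0])
    v = [[random.random() for _ in range(dim_in)] for _ in range(n_clusters)]
    w = [[0.0] * dim_out for _ in range(n_clusters)]

    def winner(x):
        return min(range(n_clusters),
                   key=lambda j: sum((xi - vji) ** 2
                                     for xi, vji in zip(x, v[j])))

    # Phase 1: move each winning cluster vector toward its input
    for _ in range(epochs):
        for x in samples:
            j = winner(x)
            v[j] = [vji + alpha * (xi - vji) for vji, xi in zip(v[j], x)]

    # Phase 2: move the winner's output weights toward the target
    for _ in range(epochs):
        for x, y in zip(samples, targets):
            j = winner(x)
            w[j] = [wjk + beta * (yk - wjk) for wjk, yk in zip(w[j], y)]

    return v, w, winner

# Usage: two input clusters mapped to scalar targets 0 and 1
v, w, winner = train_cpn([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]],
                         [[0.0], [0.0], [1.0], [1.0]], n_clusters=2)
print(w[winner([0.95, 1.0])])  # output of the high cluster, near 1
```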
Counterpropagation Net Example
Iris Data Set
A data set of 150 random samples of flowers from the iris species setosa, versicolor, and virginica. For each species there are 50 observations of sepal length, sepal width, petal length, and petal width, in cm. Fisher (1936) used this data set to introduce the linear-discriminant-function technique. Many statistical packages include some cool plotting routines that use this data.
Cool Web Sites
(nice write-up with some good figures on self-organizing maps)
(brief write-up with downloadable software)
(series of slides on the nets)
(a bibliography of over 4000 papers using SOM)
More Sites
(super cool simulation – we will cover this in the optimization lecture)
(another cool simulation like the one above, but this time in 3D!)
(pretty cool simulation whose parameters you can change)