
1 CHAPTER 14 Competitive Networks Ming-Feng Yeh

2 Objectives Discuss networks that are very similar in structure and operation to the Hamming network. They use associative learning rules to adaptively learn to classify patterns. Three such networks are introduced in this chapter: the competitive network, the self-organizing feature map (SOFM) network, and the learning vector quantization (LVQ) network.

3 Hamming Network The first layer performs a correlation between the input vector and the prototype vectors. The second layer performs a competition to determine a winner: the neuron whose prototype vector is closest to the input vector.

4 Layer 1: Correlation The prototype vectors: {p1, p2, …, pQ}
The weight matrix and the bias vector for Layer 1, where $R$ is the number of elements in the input vector:
$\mathbf{W}^1 = \begin{bmatrix} \mathbf{p}_1^T \\ \mathbf{p}_2^T \\ \vdots \\ \mathbf{p}_Q^T \end{bmatrix}, \qquad \mathbf{b}^1 = \begin{bmatrix} R \\ R \\ \vdots \\ R \end{bmatrix}$
The output of the first layer:
$\mathbf{a}^1 = \mathbf{W}^1\mathbf{p} + \mathbf{b}^1 = \begin{bmatrix} \mathbf{p}_1^T\mathbf{p} + R \\ \mathbf{p}_2^T\mathbf{p} + R \\ \vdots \\ \mathbf{p}_Q^T\mathbf{p} + R \end{bmatrix}$
These inner products indicate how close each of the prototype patterns is to the input pattern.
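A minimal numpy sketch of this layer (the prototype patterns and the input below are hypothetical examples, not from the slides):

```python
import numpy as np

# Two hypothetical prototype patterns with elements in {-1, +1}; R = 3.
p1 = np.array([1, -1, -1])
p2 = np.array([1, 1, -1])

W1 = np.vstack([p1, p2])        # row i of W1 is prototype vector p_i
R = W1.shape[1]                 # number of elements in the input vector
b1 = np.full(W1.shape[0], R)    # bias of R keeps every output non-negative

p = np.array([-1, -1, -1])      # input pattern to classify
a1 = W1 @ p + b1                # inner products + R
print(a1)                       # -> [4 2]: p is closer to p1 than to p2
```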

5 Layer 2: Competition The second layer is initialized with the output of the first layer. The neurons then compete with each other to determine a winner: the neuron with the largest initial value wins the competition (winner-take-all). Lateral inhibition: each neuron excites itself and inhibits all the other neurons.

6 Competitive Layer The recurrent second layer can be replaced by a competitive transfer function: $\mathbf{a} = \text{compet}(\mathbf{W}\mathbf{p})$, which assigns output 1 to the neuron with the largest net input and 0 to all the others. Assuming the weight vectors have normalized length $L$, the net input $n_i = {}_i\mathbf{w}^T\mathbf{p} = L^2\cos\theta_i$ is largest for the prototype vector closest in angle to the input.
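A sketch of this winner-take-all computation in numpy (the `compet` helper and the example vectors are assumptions, not library calls):

```python
import numpy as np

def compet(n):
    """Competitive transfer function: 1 for the neuron with the largest
    net input, 0 for all the others."""
    a = np.zeros_like(n, dtype=float)
    a[np.argmax(n)] = 1.0
    return a

# Two prototype (weight) vectors as rows, each normalized to length L = 1.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
p = np.array([0.6, 0.8])   # input vector, also of length 1

n = W @ p                  # n_i = iw . p = L^2 cos(theta_i)
print(n, compet(n))        # -> [0.8 0.6] [1. 0.]: neuron 0 wins
```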

7 Competitive Learning Instar rule:
${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\, a_i(q)\,(\mathbf{p}(q) - {}_i\mathbf{w}(q-1))$
This trains the weights in a competitive network without knowing the prototype vectors. For the competitive network, $a_i$ is nonzero (i.e., 1) only for the winning neuron ($i = i^*$), so the instar rule reduces to the Kohonen rule:
${}_{i^*}\mathbf{w}(q) = {}_{i^*}\mathbf{w}(q-1) + \alpha\,(\mathbf{p}(q) - {}_{i^*}\mathbf{w}(q-1)) = (1-\alpha)\,{}_{i^*}\mathbf{w}(q-1) + \alpha\,\mathbf{p}(q)$
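A sketch of one learning step under these rules (the helper name is an assumption):

```python
import numpy as np

def kohonen_step(W, p, alpha=0.5):
    """One competitive-learning step: only the winning row of W (largest
    inner product with p) is moved toward p by the Kohonen rule."""
    i_star = np.argmax(W @ p)
    W[i_star] = (1 - alpha) * W[i_star] + alpha * p
    return i_star
```

Repeating this step over randomly drawn inputs moves each row of W toward the center of one cluster of inputs.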

8 Example: Hamming Network
(Figures: input vectors; weights after p2 is presented; final & initial weights.) Each weight vector will point at a different cluster of input vectors and become a prototype for that cluster. Once the network has learned to cluster the input vectors, it will classify new vectors accordingly.

9 Prob. with Compet. Layer The First Problem:
The choice of learning rate forces a trade-off between the speed of learning and the stability of the final weight vectors. Initial training can be done with a large learning rate for fast learning; the learning rate can then be decreased as training progresses, to achieve stable prototype vectors.

10 Prob. with Compet. Layer The Second Problem:
A more serious stability problem occurs when clusters are close together. Input vector: blue star; order: (a) → (b) → (c). The two input vectors in (a) are presented several times; the weight vectors end up as in (b), and so on, resulting in unstable learning.

11 Prob. with Compet. Layer The Third Problem:
A neuron's initial weight vector may be located so far from any input vector that it never wins the competition, and therefore never learns. The result is a "dead" neuron, which does nothing useful.

12 Prob. with Compet. Layer The Fourth Problem:
When the number of clusters is not known in advance, the competitive layer may not be acceptable for the application. The Fifth Problem: Competitive layers cannot form classes with nonconvex regions or classes that are the union of unconnected regions.

13 Compet. Layers in Biology
On-center/off-surround connections. The weights in layer 2 of the Hamming network:
$w_{ij} = \begin{cases} 1, & \text{if } i = j \\ -\varepsilon, & \text{if } i \neq j \end{cases}$
In terms of the distances $d_{ij}$ between neurons:
$w_{ij} = \begin{cases} 1, & \text{if } d_{ij} = 0 \\ -\varepsilon, & \text{if } d_{ij} > 0 \end{cases}$
Each neuron reinforces itself (center), while inhibiting all other neurons (surround).
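A sketch of this weight matrix in numpy (`eps` plays the role of ε; the helper name is an assumption):

```python
import numpy as np

def lateral_weights(S, eps=0.1):
    """On-center/off-surround weights for S neurons: +1 on the diagonal
    (self-reinforcement), -eps everywhere else (lateral inhibition)."""
    return (1 + eps) * np.eye(S) - eps * np.ones((S, S))

print(lateral_weights(3, eps=0.1))
```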

14 Mexican-Hat Function In biology, a neuron reinforces not only itself, but also those neurons close to it. Typically, the transition from reinforcement to inhibition occurs smoothly as the distance between neurons increases.
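One common smooth profile with this shape is the Ricker ("Mexican hat") wavelet; a sketch, with an assumed width parameter `sigma`:

```python
import numpy as np

def mexican_hat(d, sigma=2.0):
    """Connection strength as a function of inter-neuron distance d:
    positive (reinforcing) for nearby neurons, smoothly turning
    negative (inhibiting) as the distance grows."""
    r = (d / sigma) ** 2
    return (1.0 - r) * np.exp(-r / 2.0)

print(mexican_hat(np.array([0.0, 1.0, 2.0, 4.0])))  # ~[1.0, 0.66, 0.0, -0.41]
```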

15 Neighborhood The neighborhood $N_i(d)$ contains the indices of all the neurons that lie within a radius $d$ of neuron $i$: $N_i(d) = \{\, j : d_{ij} \le d \,\}$.

16 Self-Organizing Feature Map
Determine the winning neuron $i^*$ using the same procedure as the competitive layer. Update the weight vectors of the winner and of all neurons in its neighborhood using the Kohonen rule:
${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\,(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)), \quad i \in N_{i^*}(d)$
(Figure: the 2-D topology and the initial weight vectors.)
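A sketch of one SOFM training step under these equations (the grid layout, helper names, and constants are assumptions; the winner is found here by minimum distance, which matches the inner-product rule for normalized vectors):

```python
import numpy as np

def sofm_step(W, pos, p, alpha=0.1, d=1.0):
    """One SOFM step: the winner i* and every neuron within grid radius d
    of it are moved toward the input p by the Kohonen rule."""
    i_star = np.argmin(np.linalg.norm(W - p, axis=1))       # winning neuron
    hood = np.linalg.norm(pos - pos[i_star], axis=1) <= d   # N_{i*}(d)
    W[hood] += alpha * (p - W[hood])

# 3x3 grid: pos holds each neuron's (row, col) position in the topology.
pos = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
W = np.random.rand(9, 2)            # random initial weight vectors
for _ in range(1000):
    sofm_step(W, pos, np.random.rand(2))
```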

17 Feature Map Training 250 iterations per diagram

18 Other Training Examples

19 Improving Feature Maps
To speed up the self-organizing process and to make it more reliable: gradually reduce the size of the neighborhood during training until it covers only the winning neuron; gradually decrease the learning rate asymptotically toward 0 during training; and let the winning neuron use a larger learning rate than the neighboring neurons. The first two ideas are sketched below.
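A sketch of the first two improvements as decay schedules (the constants and the `sofm_step` helper from the earlier sketch are assumptions):

```python
# Hypothetical initial values and time constant for the decay schedules.
alpha0, d0, tau = 0.5, 3.0, 200.0

for q in range(1000):
    alpha = alpha0 * tau / (tau + q)   # learning rate decays toward 0
    d = d0 * tau / (tau + q)           # neighborhood radius shrinks; once
                                       # d < 1 it covers only the winner
    # sofm_step(W, pos, sample, alpha=alpha, d=d)  # as in the earlier sketch
```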

20 Learning Vector Quantization
The LVQ network is a hybrid network: it uses both unsupervised and supervised learning to form classifications.

21 LVQ: Competitive Layer
Each neuron in the 1st layer is assigned to a class, with several neurons often assigned to the same class. Each class is then assigned to one neuron in the 2nd layer. Unlike the earlier competitive layer, the 1st-layer net input is the negative distance between the input and the weight vector:
$n_i^1 = -\|\, {}_i\mathbf{w}^1 - \mathbf{p} \,\|, \qquad \mathbf{a}^1 = \text{compet}(\mathbf{n}^1)$

22 Subclass In the competitive network, the neuron with the nonzero output indicates which class the input vector belongs to. For the LVQ network, the winning neuron in the first layer indicates a subclass, rather than a class. There may be several different neurons (subclasses) that make up each class.

23 LVQ: Linear Layer The 2nd layer of the LVQ network is used to combine subclasses into a single class. The columns of $\mathbf{W}^2$ represent subclasses, and the rows represent classes. $\mathbf{W}^2$ has a single 1 in each column, with the other elements set to zero; if $(\mathbf{W}^2)_{ki} = 1$, then subclass $i$ is a part of class $k$. For example, if subclasses 1 & 3 belong to class 1, subclasses 2 & 5 belong to class 2, and subclass 4 belongs to class 3:
$\mathbf{W}^2 = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}$
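A sketch of the full LVQ forward pass with this $\mathbf{W}^2$ (the 1st-layer prototypes below are hypothetical):

```python
import numpy as np

# W2 from this slide: rows = classes, columns = subclasses.
W2 = np.array([[1, 0, 1, 0, 0],
               [0, 1, 0, 0, 1],
               [0, 0, 0, 1, 0]], dtype=float)

# Five hypothetical subclass prototype vectors (rows of W1).
W1 = np.random.rand(5, 2)

def lvq_forward(W1, W2, p):
    """Layer 1: winner by negative distance (compet); Layer 2: map the
    winning subclass to its class, giving a one-hot class indicator."""
    n1 = -np.linalg.norm(W1 - p, axis=1)
    a1 = np.zeros(len(W1))
    a1[np.argmax(n1)] = 1.0
    return W2 @ a1

print(lvq_forward(W1, W2, np.array([0.5, 0.5])))   # e.g. [0. 1. 0.]
```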

24 Complex/Convex Boundary
A standard competitive layer has the limitation that it can only create decision regions that are convex. The process of combining subclasses to form a class allows the LVQ network to create complex class boundaries.

25 LVQ Learning The learning in the LVQ network combines competitive learning with supervision. Assignment of $\mathbf{W}^2$: if hidden neuron $i$ is to be assigned to class $k$, then set $(\mathbf{W}^2)_{ki} = 1$. If $\mathbf{p}$ is classified correctly, move the weights of the winning hidden neuron toward $\mathbf{p}$: ${}_{i^*}\mathbf{w}^1(q) = {}_{i^*}\mathbf{w}^1(q-1) + \alpha\,(\mathbf{p}(q) - {}_{i^*}\mathbf{w}^1(q-1))$. If $\mathbf{p}$ is classified incorrectly, move the weights of the winning hidden neuron away from $\mathbf{p}$: ${}_{i^*}\mathbf{w}^1(q) = {}_{i^*}\mathbf{w}^1(q-1) - \alpha\,(\mathbf{p}(q) - {}_{i^*}\mathbf{w}^1(q-1))$.
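A sketch of one such training step (LVQ1), reusing the layout of the previous sketch; the function name is an assumption:

```python
import numpy as np

def lvq1_step(W1, W2, p, k, alpha=0.1):
    """One LVQ step for input p with true class k: move the winning
    prototype toward p if its subclass belongs to class k, away otherwise."""
    i_star = np.argmin(np.linalg.norm(W1 - p, axis=1))   # winning subclass
    sign = 1.0 if W2[k, i_star] == 1 else -1.0           # classified correctly?
    W1[i_star] += sign * alpha * (p - W1[i_star])
```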

26 Example: LVQ Classification problem (see figure).

27 Training Process Present p3 to the network (see figure).

28 After 1st & Many Iterations

29 Improving LVQ Networks
First, as with competitive layers, a hidden neuron may occasionally be a dead neuron that never wins the competition. Second, depending on how the initial weight vectors are arranged, a neuron's weight vector may have to travel through a region of a class that it doesn't represent in order to reach a region that it does represent. The second problem can be solved by applying a modification to the Kohonen rule, known as LVQ2.

30 LVQ2 When the network correctly classifies an input vector, the weights of only one neuron are moved toward the input vector, as before. If the input vector is classified incorrectly, the weights of two neurons are updated: the weight vector of the (wrong) winning neuron is moved away from the input vector, and the weight vector of the closest neuron belonging to the correct class is moved toward it.
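A sketch of this rule (function names are assumptions; the "second neuron" is taken to be the closest prototype of the correct class):

```python
import numpy as np

def lvq2_step(W1, W2, p, k, alpha=0.1):
    """LVQ2-style step for input p with true class k."""
    dists = np.linalg.norm(W1 - p, axis=1)
    i_star = np.argmin(dists)                 # winning subclass neuron
    if W2[k, i_star] == 1:                    # correct: one neuron moves toward p
        W1[i_star] += alpha * (p - W1[i_star])
    else:                                     # incorrect: two neurons are updated
        W1[i_star] -= alpha * (p - W1[i_star])        # push wrong winner away
        ok = W2[k] == 1                               # prototypes of the true class
        j = np.where(ok)[0][np.argmin(dists[ok])]     # nearest correct prototype
        W1[j] += alpha * (p - W1[j])                  # pull it toward p
```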

