
1 CHAPTER 16 Adaptive Resonance Theory

2 Objectives There is no guarantee that, as more inputs are applied to a competitive network, the weight matrix will eventually converge. This chapter presents a modified type of competitive learning, called adaptive resonance theory (ART), which is designed to overcome the problem of learning stability.

3 Theory & Examples A key problem of the Grossberg network and the competitive network is that they do NOT always form stable clusters (or categories). The learning instability occurs because of the network's adaptability (or plasticity), which causes prior learning to be eroded by more recent learning.

4 Stability / Plasticity How can a system be receptive to significant new patterns and yet remain stable in response to irrelevant patterns? Grossberg and Carpenter developed ART to address this stability/plasticity dilemma. ART networks are based on the Grossberg network of Chapter 15.

5 Key Innovation The key innovation of ART is the use of "expectations." As each input is presented to the network, it is compared with the prototype vector that it most closely matches (the expectation). If the match between the prototype and the input vector is NOT adequate, a new prototype is selected. In this way, previously learned memories (prototypes) are not eroded by new learning.

6 Overview Basic ART architecture. The Grossberg competitive network.

7 Grossberg Network The L1-L2 connections are instars, which perform a clustering (or categorization) operation. When an input pattern is presented, it is multiplied (after normalization) by the L1-L2 weight matrix. A competition is performed at Layer 2 to determine which row of the weight matrix is closest to the input vector. That row is then moved toward the input vector. After learning is complete, each row of the L1-L2 weight matrix is a prototype pattern, which represents a cluster (or a category) of input vectors.
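
The clustering behaviour described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the learning rate lr, and the use of NumPy are my assumptions, not part of the slides.

```python
import numpy as np

def competitive_step(W, p, lr=0.5):
    """One competitive-learning step on the L1-L2 weight matrix W.
    The (normalized) input is multiplied by W, Layer 2 picks the row
    with the largest response, and that row (the winning prototype)
    is moved toward the input vector (instar rule)."""
    p = p / np.linalg.norm(p)        # normalize the input pattern
    j = int(np.argmax(W @ p))        # Layer 2 competition: closest row wins
    W[j] = W[j] + lr * (p - W[j])    # move the winning row toward p
    return W, j
```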

8 ART Networks -- 1 Learning in ART networks also occurs in a set of feedback connections from Layer 2 to Layer 1. These connections are outstars, which perform pattern recall. When a node in Layer 2 is activated, this reproduces a prototype pattern (the expectation) at Layer 1. Layer 1 then performs a comparison between the expectation and the input pattern. When the expectation and the input pattern are NOT closely matched, the orienting subsystem causes a reset in Layer 2.

9 ART Networks -- 2 The reset disables the current winning neuron, and the current expectation is removed. A new competition is then performed in Layer 2, while the previous winning neuron is disabled. The new winning neuron in Layer 2 projects a new expectation to Layer 1, through the L2-L1 connections. This process continues until the L2-L1 expectation provides a close enough match to the input pattern.

10 ART Subsystems Layer 1: comparison of the input pattern and the expectation. L1-L2 connections (instars): perform the clustering operation; each row of W1:2 is a prototype pattern. Layer 2: competition (contrast enhancement). L2-L1 connections (outstars): perform pattern recall (the expectation); each column of W2:1 is a prototype pattern. Orienting subsystem: causes a reset when the expectation does not match the input pattern, disabling the current winning neuron.

11 Layer 1

12 Layer 1 Operation Equation of operation of Layer 1 and the output of Layer 1 (the formulas are reconstructed after the gain-control slide below). Excitatory input: the input pattern plus the L2-L1 expectation. Inhibitory input: gain control from Layer 2.

13 Excitatory Input to L1 Assume that the jth neuron in Layer 2 has won the competition (its output is 1 and all other Layer 2 outputs are 0). The excitatory input to Layer 1 is then the sum of the input pattern and the L2-L1 expectation (column j of W2:1).

14 Inhibitory Input to L1 The inhibitory input is the gain control. The inhibitory input to each neuron in Layer 1 is the sum of all of the outputs of Layer 2. The gain control to Layer 1 will therefore be one when Layer 2 is active (one neuron has won the competition) and zero when Layer 2 is inactive (all neurons have zero output).
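
The equations on the last three slides did not survive in this transcript. A plausible reconstruction of the Layer 1 operation, following the shunting (Grossberg) form of Chapter 15, is given below; the time constant ε, the bias terms +b1 and -b1, and the hardlim+ output function are assumptions, not recovered from the slides.

```latex
\varepsilon\,\frac{dn_i^1(t)}{dt}
  = -\,n_i^1(t)
  + \bigl({}^{+}b^1 - n_i^1(t)\bigr)
      \Bigl(\underbrace{p_i + \textstyle\sum_j w_{i,j}^{2:1}\,a_j^2(t)}_{\text{excitatory: input + L2-L1 expectation}}\Bigr)
  - \bigl(n_i^1(t) + {}^{-}b^1\bigr)
      \Bigl(\underbrace{\textstyle\sum_j a_j^2(t)}_{\text{inhibitory: gain control}}\Bigr),
\qquad
a_i^1 = \mathrm{hardlim}^{+}\!\bigl(n_i^1\bigr).
```

Because the inhibitory weights are all 1's, the gain-control term is simply the sum of the Layer 2 outputs: one when Layer 2 is active, zero when it is inactive, as stated above.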

15 Steady State Analysis -- 1 The response of neuron i in Layer 1. Case 1: Layer 2 is inactive (each Layer 2 output is zero). In steady state, if the input element p_i is 1, the neuron's response is positive and its output is 1; if p_i is 0, the response and output are 0. The output of Layer 1 is therefore the same as the input pattern.

16 Steady State Analysis -- 2 Case 2: Layer 2 is active (neuron j has won the competition). In steady state, the role of Layer 1 is to combine the input vector with the expectation from Layer 2. Since both the input and the expectation are binary patterns, a logical AND operation is used to combine the two vectors: output element i is 0 if either p_i or the corresponding expectation element of column j of W2:1 is 0, and 1 if both are equal to 1.
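
Restating the two steady-state cases above in symbols (notation as in the reconstruction given earlier):

```latex
\text{Layer 2 inactive: } \mathbf{a}^1 = \mathbf{p};
\qquad
\text{Layer 2 active (neuron } j \text{ wins): } a_i^1 = p_i \wedge w_{i,j}^{2:1},
\;\text{ i.e. }\; \mathbf{a}^1 = \mathbf{p} \cap \mathbf{w}_j^{2:1}.
```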

17 Layer 1 Example Assume that Layer 2 is active and that neuron 2 of Layer 2 has won the competition.

18 Response of Layer 1

19 Layer 2 (network diagram; Layer 2 also receives a reset input from the orienting subsystem)

20 Layer 2 Operation Equation of operation of Layer 2 (shunting form): the excitatory input consists of the on-center feedback plus the adaptive instar input W1:2 a1, and the inhibitory input is the off-surround feedback. The rows of the adaptive weights, after training, will represent the prototype patterns.
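
The net effect of the Layer 2 dynamics (on-center feedback, off-surround inhibition, i.e. contrast enhancement) is a winner-take-all competition on the instar inputs. The sketch below shows only that net effect, not the differential equation itself; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def layer2_response(W12, a1):
    """Winner-take-all approximation of Layer 2: the neuron receiving the
    largest instar input (row of W12 dotted with a1) ends up with output 1,
    all others with output 0 (ties go to the smallest index)."""
    net = W12 @ a1
    a2 = np.zeros_like(net)
    a2[int(np.argmax(net))] = 1.0
    return a2
```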

21 Layer 2 Example

22 Response of Layer 2

23 Orienting Subsystem Determines whether there is a sufficient match between the L2-L1 expectation (a1) and the input pattern (p).

24 Orienting Subsystem Operation Equation of operation of the orienting subsystem (shunting form), with an excitatory input driven by the input pattern p and an inhibitory input driven by the Layer 1 output a1. Whenever the excitatory input is larger than the inhibitory input, the orienting subsystem will be driven on, producing a reset.

25 Steady State Operation In steady state, the orienting subsystem output is positive, and a reset of Layer 2 is produced, when the excitatory input exceeds the inhibitory input. This reduces to a condition comparing the ratio ||a1||^2 / ||p||^2 with the vigilance (see the reconstruction below). This is the condition that will cause a reset of Layer 2.
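
A plausible reconstruction of the missing steady-state expressions, assuming the usual ART1 gains α (on the excitatory input from p) and β (on the inhibitory input from a1); these symbols do not survive in the transcript and are assumptions here.

```latex
n^0 = \frac{\alpha\,\|\mathbf{p}\|^2 - \beta\,\|\mathbf{a}^1\|^2}
           {1 + \alpha\,\|\mathbf{p}\|^2 + \beta\,\|\mathbf{a}^1\|^2},
\qquad
a^0 = 1\ (\text{reset}) \;\text{ when }\; n^0 > 0,
\;\text{ i.e. when }\;
\frac{\|\mathbf{a}^1\|^2}{\|\mathbf{p}\|^2} < \frac{\alpha}{\beta} \equiv \rho .
```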

26 Vigilance Parameter The term ρ is called the vigilance parameter and must fall in the range 0 < ρ ≤ 1. If ρ is close to 1, a reset will occur unless a1 is close to p. If ρ is close to 0, a1 need not be close to p to prevent a reset. Whenever Layer 2 is active, the orienting subsystem will cause a reset when there is enough of a mismatch between a1 and p.

27 Orienting Subsystem Example For the values used in this example, a reset signal will be sent to Layer 2, since the orienting subsystem's response is positive.

28 Learning Law Two separate learning laws: one for the L1-L2 connections (instar) and another for the L2-L1 connections (outstar). Both sets of connections are updated at the same time, whenever the input and the expectation have an adequate match. The process of matching, and subsequent adaptation, is referred to as resonance.

29 Subset / Superset Dilemma Suppose that the prototype patterns (rows of W1:2) are such that the first is identical to the Layer 1 output a1, while the second has 1's everywhere the first does plus additional 1's (a superset). The two prototype vectors then have the same inner product with a1, even though the 1st prototype is identical to a1 and the 2nd prototype is not. This is called the subset/superset dilemma.

30 Subset / Superset Solution One solution to the subset/superset dilemma is to normalize the prototype patterns (divide each row by the number of 1's it contains). The first prototype then has the largest inner product with a1, so the first neuron in Layer 2 will be active (a numerical illustration follows).
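
A small numerical illustration with hypothetical vectors (the slide's actual prototypes are not preserved in this transcript): the prototype identical to a1 ties with the superset prototype before normalization and wins after it.

```python
import numpy as np

a1 = np.array([1, 1, 0])        # Layer 1 output (hypothetical)
w1 = np.array([1, 1, 0])        # prototype identical to a1
w2 = np.array([1, 1, 1])        # "superset" prototype

print(w1 @ a1, w2 @ a1)                   # 2 2   -> both rows tie: the dilemma
w1n, w2n = w1 / w1.sum(), w2 / w2.sum()   # normalize by the number of 1's
print(w1n @ a1, w2n @ a1)                 # 1.0 0.666... -> identical prototype wins
```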

31 Learning Law: L1-L2 Instar learning with competition: when neuron i of Layer 2 is active, the ith row of W1:2 is moved in the direction of a1. The elements of that row compete with one another under the learning law, and the row is therefore normalized.

32 Fast Learning For fast learning, we assume that the outputs of Layer 1 and Layer 2 remain constant until the weights reach steady state, and solve the learning law at its steady state. Case 1: the corresponding element of a1 is 1. Case 2: the corresponding element of a1 is 0. Summary: the winning row of W1:2 converges to a normalized version of a1 (see below).
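
For reference, the standard ART1 fast-learning result that the two cases above combine into; the learning parameter ζ > 1 is an assumption here, since the slide's own symbols are not preserved.

```latex
\mathbf{w}_j^{1:2} \;=\; \frac{\zeta\,\mathbf{a}^1}{\zeta + \|\mathbf{a}^1\|^2 - 1}
\qquad (\text{row } j \text{ of } \mathbf{W}^{1:2}, \text{ the winning neuron's instar weights}),
```

so elements of the winning row corresponding to 1's in a1 converge to ζ/(ζ + ||a1||^2 - 1), and all other elements converge to 0.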

33 Learning Law: L2-L1 Typical outstar learning: if neuron j in Layer 2 is active (has won the competition), then column j of W2:1 is moved toward a1. With fast learning, column j of W2:1 converges to the output of Layer 1, a1, which is a combination of the input pattern and the appropriate prototype pattern. The prototype pattern is thus modified to incorporate the current input pattern.
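
In symbols, the fast-learning outstar result described above:

```latex
\mathbf{w}_j^{2:1} = \mathbf{a}^1
\qquad (\text{column } j \text{ of } \mathbf{W}^{2:1} \text{ converges to the Layer 1 output}).
```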

34 ART1 Algorithm Summary 0. Initialization: the initial W2:1 is set to all 1's; every element of the initial W1:2 is set to the same (small) value. 1. Present an input pattern p to the network. Since Layer 2 is not active at this point, the output of Layer 1 is a1 = p. 2. Compute the input to Layer 2, W1:2 a1, and activate the neuron in Layer 2 with the largest input; in case of a tie, the neuron with the smallest index is declared the winner.

35 Algorithm Summary (cont.) 3. Compute the L2-L1 expectation (assuming neuron j of Layer 2 is activated): the expectation is column j of W2:1. 4. Layer 2 is now active, so adjust the Layer 1 output to include the L2-L1 expectation: a1 = p AND column j of W2:1. 5. Determine the degree of match between the input pattern and the expectation (orienting subsystem): ||a1||^2 / ||p||^2. 6. If the degree of match is less than the vigilance ρ, set the winning neuron's output to zero, inhibit it until an adequate match occurs (resonance), and return to step 1. Otherwise, continue with step 7.

36 Algorithm Summary (cont.) 7. Update row j of W1:2 when resonance has occurred (instar fast learning). 8. Update column j of W2:1: it is set equal to a1 (outstar fast learning). 9. Remove the input pattern, restore all inhibited neurons in Layer 2, and return to step 1. The input patterns continue to be applied to the network until the weights stabilize (do not change). The ART1 network can only be used for binary input patterns. A runnable sketch of steps 0 to 9 is given below.
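
A minimal runnable sketch of steps 0 to 9, assuming NumPy. The function name, max_epochs, the tie handling, and the initial-weight and fast-learning formulas (ζ/(ζ + S1 - 1) and ζ a1 / (ζ + ||a1||^2 - 1), the standard ART1 expressions) are assumptions filled in for the formulas that did not survive in this transcript.

```python
import numpy as np

def art1(patterns, n_categories, rho, zeta=2.0, max_epochs=100):
    """Sketch of the ART1 algorithm summarized above.

    patterns     : list of binary (0/1) input vectors p (each with at least one 1)
    n_categories : number of Layer 2 neurons
    rho          : vigilance parameter, 0 < rho <= 1
    zeta         : learning-law parameter (> 1), assumed symbol
    """
    patterns = [np.asarray(p, dtype=float) for p in patterns]
    S1 = patterns[0].size
    # Step 0: initialization
    W21 = np.ones((S1, n_categories))                           # L2-L1: all 1's
    W12 = np.full((n_categories, S1), zeta / (zeta + S1 - 1))   # L1-L2: uniform

    for epoch in range(max_epochs):
        prev_W12, prev_W21 = W12.copy(), W21.copy()
        winners = []
        for p in patterns:
            inhibited = set()                  # neurons reset for this pattern
            while True:
                # Step 1: Layer 2 inactive, so the Layer 1 output is p
                a1 = p.copy()
                # Step 2: input to Layer 2; largest wins, smallest index on ties
                net2 = W12 @ a1
                for i in inhibited:
                    net2[i] = -np.inf
                j = int(np.argmax(net2))
                # Steps 3-4: L2-L1 expectation, combined with p by logical AND
                a1 = np.minimum(p, W21[:, j])
                # Step 5: degree of match (orienting subsystem)
                match = a1.sum() / p.sum()
                if match < rho:
                    # Step 6: reset -- inhibit neuron j and search again
                    inhibited.add(j)
                    if len(inhibited) == n_categories:
                        raise RuntimeError("all Layer 2 neurons reset; "
                                           "add categories or lower rho")
                    continue
                # Steps 7-8: resonance -- fast-learning weight updates
                W12[j, :] = zeta * a1 / (zeta + a1.sum() - 1)
                W21[:, j] = a1
                winners.append(j)
                break
        # Step 9: repeat until the weights stabilize
        if np.allclose(W12, prev_W12) and np.allclose(W21, prev_W21):
            break
    return W12, W21, winners
```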

37 Solved Problem: P16.5 Train an ART1 network using the given parameters, choosing three categories, and using the three given input vectors. Initial weights: W2:1 is all 1's, and every element of W1:2 has the same initial value, as in step 0. 1-1: Compute the Layer 1 response (Layer 2 is inactive, so a1 = p1).

38 P16.5 Continued 1-2: Compute the input to Layer 2, W1:2 a1. Since all neurons receive the same input, pick the first neuron as the winner. 1-3: Compute the L2-L1 expectation (column 1 of W2:1).

39 P16.5 Continued 1-4: Adjust the Layer 1 output to include the expectation. 1-5: Determine the degree of match; it satisfies the vigilance, so there is no reset. 1-6: Since there is no reset, continue with step 7. 1-7: Resonance has occurred; update row 1 of W1:2.

40 P16.5 Continued 1-8: Update column 1 of W2:1. 2-1: Present p2 and compute the new Layer 1 response (Layer 2 inactive). 2-2: Compute the input to Layer 2. Since neurons 2 and 3 receive the same input, pick the second neuron as the winner.

41 P16.5 Continued 2-3: Compute the L2-L1 expectation. 2-4: Adjust the Layer 1 output to include the expectation. 2-5: Determine the degree of match; it satisfies the vigilance, so there is no reset. 2-6: Since there is no reset, continue with step 7.

42 P16.5 Continued 2-7: Resonance has occurred; update row 2 of W1:2. 2-8: Update column 2 of W2:1. 3-1: Present p3 and compute the new Layer 1 response. 3-2: Compute the input to Layer 2.

43 P16.5 Continued 3-3: Compute the L2-L1 expectation. 3-4: Adjust the Layer 1 output to include the expectation. 3-5: Determine the degree of match; it satisfies the vigilance, so there is no reset. 3-6: Since there is no reset, continue with step 7.

44 P16.5 Continued 3-7: Resonance has occurred; update row 1 of W1:2. 3-8: Update the winning neuron's column of W2:1. This completes the training, since applying any of the three patterns again will not change the weights. These patterns have been successfully clustered.
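
The actual P16.5 input vectors and parameter values are not preserved in this transcript, so the call below uses hypothetical binary patterns and an illustrative vigilance, only to show how the sketch above is driven.

```python
# Hypothetical 4-dimensional binary patterns (not the book's P16.5 vectors)
p1 = [0, 1, 0, 1]
p2 = [1, 0, 1, 0]
p3 = [1, 1, 0, 1]

W12, W21, winners = art1([p1, p2, p3], n_categories=3, rho=0.4)
print(winners)   # Layer 2 category chosen for each presentation in the last epoch
print(W21.T)     # each column of W21 (printed here as a row) is a learned prototype
```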

45 Solved Problem: P16.6 Repeat Problem P16.5, but change the vigilance parameter to a larger value. The training will proceed exactly as in Problem P16.5 until pattern p3 is presented. 3-1: Compute the Layer 1 response. 3-2: Compute the input to Layer 2.

46 P16.6 Continued 3-3: Compute the L2-L1 expectation. 3-4: Adjust the Layer 1 output to include the expectation. 3-5: Determine the degree of match; it falls below the vigilance, so a reset occurs. 3-6: Since there is a reset, set the winning neuron's output to zero, inhibit it until an adequate match occurs (resonance), and return to step 1.

47 P16.6 Continued 4-1: Recompute the Layer 1 response (Layer 2 inactive). 4-2: Compute the input to Layer 2. Since neuron 1 is inhibited, neuron 2 is the winner. 4-3: Compute the L2-L1 expectation. 4-4: Adjust the Layer 1 output to include the expectation.

48 P16.6 Continued 4-5: Determine the degree of match; it again falls below the vigilance, so another reset occurs. 4-6: Since there is a reset, set the winning neuron's output to zero, inhibit it until an adequate match occurs (resonance), and return to step 1. 5-1: Recompute the Layer 1 response. 5-2: Compute the input to Layer 2. Since neurons 1 and 2 are inhibited, neuron 3 is the winner.

49 P16.6 Continued 5-3: Compute the L2-L1 expectation. 5-4: Adjust the Layer 1 output to include the expectation. 5-5: Determine the degree of match; it satisfies the vigilance, so there is no reset. 5-6: Since there is no reset, continue with step 7.

50 P16.6 Continued 5-7: Resonance has occurred; update row 3 of W1:2. 5-8: Update the winning neuron's column of W2:1. This completes the training, since applying any of the three patterns again will not change the weights. These patterns have been successfully clustered, this time into three categories rather than two.
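
With the same sketch and the hypothetical patterns used above, raising the vigilance reproduces the qualitative behaviour of P16.6: the stricter match test forces the third pattern out of the shared category and into an unused one (the value 0.9 is illustrative, not the book's).

```python
W12, W21, winners = art1([p1, p2, p3], n_categories=3, rho=0.9)
print(winners)   # the last pattern now resonates with a previously unused neuron
```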

51 Solved Problem: P16.7 Train an ART1 network using the following input vectors, which are 5 x 5 grids treated as 25-dimensional vectors (a blue square is a 1, a white square is a 0). Present the vectors in the order p1 - p2 - p3 - p1 - p4. Use the given parameters and choose three categories. Train the network until the weights have converged. The initial W2:1 matrix is a 25 x 3 matrix of 1's; the initial W1:2 matrix is a 3 x 25 matrix with every element equal to the same initial value.

52 P16.7 Continued Training sequence: p1 - p2 - p3 - p1 - p4 (each presentation is marked as either a resonance or a reset).

