1
Introduction to Neural Networks John Paxton Montana State University Summer 2003
2
Chapter 5: Adaptive Resonance Theory 1987, Carpenter and Grossberg ART1: clusters binary vectors ART2: clusters continuous vectors
3
General Weights on a cluster unit can be considered to be a prototype pattern Relative similarity is used instead of an absolute difference. Thus, a difference of 1 in a vector with only a few non-zero components becomes more significant.
4
General Training examples may be presented several times. Training examples may be presented in any order. An example might change clusters. Nets are stable (patterns don’t oscillate). Nets are plastic (examples can be added).
5
Architecture Input layer (x_i). Output layer or cluster layer: competitive (y_j). Units in the output layer can be active, inactive, or inhibited.
6
Sample Network t (top-down weight), b (bottom-up weight). [Diagram: input units x_1 … x_n fully connected to cluster units y_1 … y_m, with bottom-up weights b_ij (e.g., b_nm) and top-down weights t_ji (e.g., t_11).]
7
Nomenclature b_ij: bottom-up weight. t_ji: top-down weight. s: input vector. x: activation vector. n: number of components in the input vector. m: maximum number of clusters. ||x||: Σ_i x_i (for a binary vector, the number of 1s). p: vigilance parameter.
8
Training Algorithm 1. Initialize: L > 1, 0 < p ≤ 1, t_ji(0) = 1, 0 < b_ij(0) < L / (L − 1 + n). 2. While the stopping criterion is false, do steps 3–12. 3. For each training example, do steps 4–12.
9
Training Algorithm 4. Set y_j = 0 for all cluster units. 5. Compute ||s||. 6. Set x_i = s_i. 7. For each cluster unit j that is not inhibited, compute y_j = Σ_i b_ij x_i. 8. Find the largest y_j that is not inhibited. 9. Recompute x_i = s_i · t_ji for the winning unit j.
10
Training Algorithm 10. Compute ||x||. 11. If ||x|| / ||s|| < p, then set y_j = −1 (inhibit the winning unit) and go to step 8. 12. Otherwise update the winner's weights: b_ij = L x_i / (L − 1 + ||x||), t_ji = x_i.
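Steps 1–12 above can be sketched as a short Python function. The parameter names (L, p, b, t) follow the slides; the function name art1_train and the epoch cap are my own additions, and the sketch simply skips an input when every unit is inhibited rather than choosing one of the remedies discussed later.

```python
import numpy as np

def art1_train(examples, m, p=0.4, L=2.0, epochs=10):
    """Fast-learning ART1 sketch following steps 1-12 of the slides."""
    n = len(examples[0])
    b = np.full((n, m), 1.0 / (1.0 + n))  # bottom-up weights b_ij (step 1)
    t = np.ones((m, n))                   # top-down weights t_ji (step 1)
    for _ in range(epochs):               # step 2: loop until no changes
        changed = False
        for s in examples:                # step 3
            s = np.asarray(s, dtype=float)
            norm_s = s.sum()              # step 5: ||s|| for binary vectors
            inhibited = np.zeros(m, dtype=bool)
            while True:
                y = s @ b                 # step 7: y_j = sum_i b_ij * x_i
                y[inhibited] = -np.inf
                if not np.isfinite(y).any():
                    break                 # all units inhibited: skip input
                j = int(np.argmax(y))     # step 8: argmax takes lowest index on ties
                x = s * t[j]              # step 9: x_i = s_i * t_ji
                if x.sum() / norm_s >= p: # step 11: vigilance test
                    new_b = L * x / (L - 1.0 + x.sum())  # step 12
                    if not (np.allclose(b[:, j], new_b) and np.allclose(t[j], x)):
                        changed = True
                    b[:, j] = new_b
                    t[j] = x
                    break
                inhibited[j] = True       # fail: inhibit unit, re-compete
        if not changed:                   # stopping criterion: no weight changes
            break
    return b, t
```

Running it on the single input (1 1 0 0) from the example that follows reproduces the weights derived on the slides: b_11 = b_21 = 2/3, b_31 = b_41 = 0, and t_1 = (1 1 0 0).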
11
Possible Stopping Criteria No weight changes occurred during an epoch. The maximum number of epochs was reached.
12
What Happens If All Units Are Inhibited? Lower p. Add a cluster unit. Throw out the current input as an outlier.
13
Example n = 4, m = 3, p = 0.4 (low vigilance), L = 2, b_ij(0) = 1/(1 + n) = 0.2, t_ji(0) = 1. [Diagram: input units x_1 … x_4 fully connected to cluster units y_1 … y_3.]
14
Example 3. Input vector s = (1 1 0 0). 4. y_j = 0. 5. ||s|| = 2. 6. x = (1 1 0 0). 7. y_1 = 0.2(1) + 0.2(1) + 0.2(0) + 0.2(0) = 0.4; likewise y_2 = y_3 = 0.4.
15
Example 8. j = 1 (use the lowest index to break ties). 9. x_1 = s_1 · t_11 = 1 · 1 = 1; x_2 = s_2 · t_12 = 1 · 1 = 1; x_3 = s_3 · t_13 = 0 · 1 = 0; x_4 = s_4 · t_14 = 0 · 1 = 0. 10. ||x|| = 2. 11. ||x|| / ||s|| = 1 ≥ 0.4, so the vigilance test passes.
16
Example 12. b_11 = L x_1 / (L − 1 + ||x||) = 2 · 1 / (1 + 2) = 0.667; b_21 = 0.667; b_31 = b_41 = 0. t_11 = x_1 = 1; t_12 = 1; t_13 = t_14 = 0.
17
Exercise Show the network after the training example (0 0 0 1) is processed.
18
Observations Typically, stable weight matrices are obtained quickly. The cluster units are all topologically independent of one another. We have just looked at the fast learning version of ART1. There is also a slow learning version that updates just one weight per training example.