1 Artificial Neural Networks, Ch. 15

2 Objectives — The Grossberg network is a self-organizing, continuous-time competitive network. Topics: continuous-time recurrent networks; the foundation for adaptive resonance theory; and the biological motivation for the Grossberg network, the human visual system.

3 Biological Motivation: Vision — Eyeball and Retina

4 Leaky Integrator — In mathematics, a leaky integrator equation is a differential equation describing a component or system that integrates its input but gradually leaks a small amount of the accumulated signal over time. It appears commonly in hydraulics, electronics, and neuroscience, where it can represent either a single neuron or a local population of neurons.

5 Basic Nonlinear Model — Leaky integrator: ε dn(t)/dt = −n(t) + p(t), where ε is the system time constant, p(t) is the input, and n(t) is the response. If the input p is constant, the response is n(t) = e^(−t/ε) n(0) + p (1 − e^(−t/ε)).

6 Leaky Integrator Response — [Figure: response n(t) for p(t) = 1, n(0) = 0, and ε = 1, 0.5, 0.25, 0.125. The response approaches the input value exponentially; the smaller the time constant ε, the faster the response.]
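A minimal simulation sketch (not from the slides) of this response, using forward-Euler integration; the step size and time horizon are arbitrary choices:

```python
import numpy as np

def leaky_integrator(p, eps, n0=0.0, dt=0.001, t_end=5.0):
    """Forward-Euler simulation of eps * dn/dt = -n + p for a constant input p."""
    n = np.empty(int(t_end / dt) + 1)
    n[0] = n0
    for k in range(len(n) - 1):
        n[k + 1] = n[k] + (dt / eps) * (-n[k] + p)
    return n

# Response to p(t) = 1 with n(0) = 0: the smaller eps is, the faster n approaches 1.
for eps in (1.0, 0.5, 0.25, 0.125):
    print(f"eps = {eps}: n(t=0.5) = {leaky_integrator(1.0, eps)[500]:.3f}")
```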

7 Shunting Model — ε dn(t)/dt = −n(t) + (b⁺ − n(t)) p⁺(t) − (n(t) + b⁻) p⁻(t). Excitatory input p⁺: the input that causes the response to increase. Inhibitory input p⁻: the input that causes the response to decrease. The nonnegative biases b⁺ and b⁻ determine the upper and lower limits on the response. The −n(t) term is the linear decay; the multiplicative coupling of the inputs to the response provides the nonlinear gain control.
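A minimal sketch (assuming b⁻ = 0, constant inputs, and illustrative parameter values) showing that the shunting response saturates below b⁺ no matter how large the excitatory input grows:

```python
import numpy as np

def shunting_steady_state(p_plus, p_minus, b_plus=1.0, b_minus=0.0,
                          eps=0.1, n0=0.0, dt=0.001, t_end=2.0):
    """Forward-Euler simulation of the shunting model with constant inputs:
    eps * dn/dt = -n + (b+ - n)*p+ - (n + b-)*p-."""
    n = n0
    for _ in range(int(t_end / dt)):
        dn = -n + (b_plus - n) * p_plus - (n + b_minus) * p_minus
        n += (dt / eps) * dn
    return n

# However large the excitatory input, the response stays below b+ = 1:
for p in (1.0, 10.0, 100.0):
    print(f"p+ = {p:6.1f} -> n -> {shunting_steady_state(p, 0.0):.4f}")
```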

8 Grossberg Network — Three components: Layer 1, Layer 2, and the adaptive weights. The network includes short-term memory (STM) and long-term memory (LTM) mechanisms, and performs adaptation, filtering, normalization, and contrast enhancement.

9 Layer 1 — Receives external inputs and normalizes the intensity of the input pattern.

10 On-Center / Off-Surround — Excitatory input: p⁺_i = p_i (each neuron is excited by its own input). Inhibitory input: p⁻_i = Σ_{j≠i} p_j (each neuron is inhibited by all the other inputs). This type of connection pattern produces a normalization of the input pattern.

11 Normalization — Set the inhibitory bias to zero (b⁻ = 0) and make all elements of the excitatory bias equal (b⁺_i = b⁺). The steady-state neuron output is n¹_i = (b⁺ P / (1 + P)) (p_i / P), where P = Σ_j p_j is the total input and p_i / P is the relative intensity of input i. The total activity is bounded by b⁺, so the input vector is normalized.

12 Layer 1 Response — [Figure: Layer 1 response for two test inputs.] The input vectors p¹ = [2, 8]ᵀ and p² = [10, 40]ᵀ have the same relative intensities (0.2 and 0.8), but p² is five times larger. Because Layer 1 normalizes, the two steady-state responses are nearly identical.
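A small sketch of the steady-state formula above (assuming b⁺ = 1) reproduces this example: both inputs give nearly the same normalized response.

```python
import numpy as np

def layer1_steady_state(p, b_plus=1.0):
    """Steady-state Layer 1 output n_i = b+ * (P / (1 + P)) * (p_i / P),
    assuming on-center/off-surround inputs and b- = 0."""
    P = p.sum()
    return b_plus * (P / (1.0 + P)) * (p / P)

print(layer1_steady_state(np.array([2.0, 8.0])))    # ~[0.182, 0.727]
print(layer1_steady_state(np.array([10.0, 40.0])))  # ~[0.196, 0.784]
```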

13 Characteristics of Layer 1 — The network is sensitive to the relative intensities of the input pattern, rather than absolute intensities. The output of Layer 1 is a normalized version of the input pattern. The on-center/off-surround connection pattern and the nonlinear gain control of the shunting model produce the normalization effect. The operation of Layer 1 explains the brightness-constancy and brightness-contrast characteristics of the human visual system.

14 Layer 2 — A layer of continuous-time instars that performs several functions.

15 Functions of Layer 2 — It normalizes the total activity in the layer. It contrast-enhances its pattern, so that the neuron that receives the largest input will dominate the response (like the winner-take-all competition in the Hamming network). It operates as a short-term memory by storing the contrast-enhanced pattern.

16 Feedback Connections — The feedback enables the network to store a pattern, even after the input has been removed. The feedback also performs the competition that causes the contrast enhancement of the pattern.

17 Equation of Layer 2 — ε dn²(t)/dt = −n²(t) + (b⁺ − n²(t)) { [+W²] f²(n²(t)) + W² a¹ } − (n²(t) + b⁻) [−W²] f²(n²(t)). [+W²] provides the on-center feedback connections and [−W²] provides the off-surround feedback connections. W² consists of adaptive weights; after training, its rows will represent the prototype patterns. Layer 2 performs a competition between the neurons, which tends to contrast-enhance the output pattern, maintaining large outputs while attenuating small outputs.

18 Layer 2 Example — The net input W² a¹ contains the inner products of the prototype rows with the Layer 1 output: (w²_1)ᵀ a¹ is the correlation between prototype 1 and the input, and (w²_2)ᵀ a¹ is the correlation between prototype 2 and the input.

19 Layer 2 Response — [Figure: contrast enhancement and storage; the outputs n²_1(t) and n²_2(t) are plotted against the constant input levels (w²_1)ᵀ a¹ and (w²_2)ᵀ a¹.] Input to neuron 1: (w²_1)ᵀ a¹. Input to neuron 2: (w²_2)ᵀ a¹. The input vector a¹ (the steady-state result obtained from the Layer 1 example) is applied for 0.25 seconds and then removed.
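A minimal simulation sketch of the Layer 2 equation, assuming +W² = I, −W² equal to ones with a zero diagonal, b⁻ = 0, and illustrative values for ε, W², a¹, and the transfer function (none of these numbers are from the slides):

```python
import numpy as np

def f2(n):
    """Illustrative transfer function: faster-than-linear for small n,
    slower-than-linear for large n."""
    return 10.0 * n**2 / (1.0 + n**2)

def layer2(W2, a1, eps=0.1, b_plus=1.0, dt=0.001, t_on=0.25, t_end=0.75):
    """Forward-Euler simulation of
    eps * dn_i/dt = -n_i + (b+ - n_i)*(f2(n_i) + (W2 a1)_i)
                    - n_i * sum_{j != i} f2(n_j),
    i.e. on-center feedback (+W2 = I), off-surround feedback
    (-W2 = ones with zero diagonal), and b- = 0."""
    n = np.zeros(W2.shape[0])
    for k in range(int(t_end / dt)):
        inp = W2 @ a1 if k * dt < t_on else np.zeros_like(n)
        fb = f2(n)
        dn = -n + (b_plus - n) * (fb + inp) - n * (fb.sum() - fb)
        n += (dt / eps) * dn
    return n

W2 = np.array([[0.9, 0.45],
               [0.45, 0.9]])   # hypothetical prototype rows
a1 = np.array([0.2, 0.8])      # hypothetical Layer 1 output
print(layer2(W2, a1))  # after the input is removed, neuron 2 stays large,
                       # neuron 1 is suppressed: contrast enhancement + storage
```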

20 Characteristics of Layer 2 — Even before the input is removed, some contrast enhancement is performed. After the input has been set to zero, the network further enhances the contrast and stores the pattern. It is the nonlinear feedback that enables the network to store the pattern, and the on-center/off-surround connection pattern that causes the contrast enhancement.

21 Transfer Functions (see Fig. 15.19) — Linear: perfect storage of any pattern, but amplifies noise. Slower-than-linear: amplifies noise and reduces contrast. Faster-than-linear: winner-take-all, suppresses noise, quantizes total activity. Sigmoid: suppresses noise and contrast-enhances, without quantizing.
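To see these regimes, one can swap the transfer function in the Layer 2 sketch above. The shapes below are illustrative choices with the right growth rates, not the functions used in the book's figure:

```python
import numpy as np

linear  = lambda n: n                        # stores any pattern, amplifies noise
slower  = lambda n: np.tanh(n)               # growth rate decreases: less contrast
faster  = lambda n: n**2                     # growth rate increases: winner-take-all
sigmoid = lambda n: 10 * n**2 / (1 + n**2)   # faster then slower: contrast-enhance
```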

22 Learning Law — The rows of the adaptive weights W² will represent patterns that have been stored and that the network will be able to recognize: the long-term memory (LTM). Learning law #1: dw²_{i,j}(t)/dt = α { −w²_{i,j}(t) + n²_i(t) n¹_j(t) }, a decay term plus Hebbian-type learning. Learning law #2: dw²_{i,j}(t)/dt = α n²_i(t) { −w²_{i,j}(t) + n¹_j(t) }, which turns off learning (and forgetting) when n²_i(t) is not active.
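A minimal sketch of learning law #2 as a discrete Euler update; the values of α, dt, and the vectors are illustrative only:

```python
import numpy as np

def grossberg_step(W2, n2, n1, alpha=1.0, dt=0.01):
    """One Euler step of learning law #2:
    dw_ij/dt = alpha * n2_i * (n1_j - w_ij).
    Rows whose Layer 2 neuron is inactive (n2_i = 0) do not move."""
    return W2 + dt * alpha * n2[:, None] * (n1[None, :] - W2)

W2 = np.array([[0.9, 0.45],
               [0.45, 0.9]])      # hypothetical stored prototypes
n1 = np.array([0.2, 0.8])         # hypothetical Layer 1 output
n2 = np.array([0.0, 0.8])         # neuron 2 won the Layer 2 competition
for _ in range(500):
    W2 = grossberg_step(W2, n2, n1)
print(W2)  # row 1 is unchanged; row 2 has moved toward n1
```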

23 Response of the Adaptive Weights — [Figure: weight trajectories w²_{1,1}(t), w²_{1,2}(t), w²_{2,1}(t), w²_{2,2}(t).] Two different input patterns are alternately presented to the network for periods of 0.2 seconds at a time. The first row of the weight matrix is updated when n²_1(t) is active, and the second row is updated when n²_2(t) is active.

24 Relation to Kohonen Law — Grossberg learning law (continuous-time), written for row i of W²: d[w²_i(t)]/dt = α n²_i(t) { −w²_i(t) + n¹(t) }. Euler approximation for the derivative: d[w²_i(t)]/dt ≈ (w²_i(t + Δt) − w²_i(t)) / Δt. Discrete-time approximation to the Grossberg law: w²_i(t + Δt) = w²_i(t) + α Δt n²_i(t) { −w²_i(t) + n¹(t) }.

25 Relation to Kohonen Law (continued) — Rearranging terms: w²_i(t + Δt) = [1 − α′ n²_i(t)] w²_i(t) + α′ n²_i(t) n¹(t), where α′ = α Δt. Assume that a faster-than-linear transfer function (winner-take-all) is used in Layer 2, so that only the winning neuron has a nonzero output. The update then reduces to the Kohonen law applied to the winning row: w_i(q) = (1 − α) w_i(q − 1) + α p(q).
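A quick numerical check (with hypothetical values) that the rearranged Grossberg update and the Kohonen rule coincide when the Layer 2 activity is one-hot (winner-take-all):

```python
import numpy as np

alpha_eff = 0.5                          # alpha' = alpha * dt
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])               # hypothetical weight rows
p = np.array([0.2, 0.8])                 # normalized input (plays the role of n1)
n2 = np.array([0.0, 1.0])                # winner-take-all: neuron 2 wins

# Rearranged Grossberg update applied to every row:
grossberg = (1 - alpha_eff * n2)[:, None] * W + alpha_eff * n2[:, None] * p

# Kohonen rule applied only to the winning row:
kohonen = W.copy()
kohonen[1] = (1 - alpha_eff) * W[1] + alpha_eff * p

print(np.allclose(grossberg, kohonen))   # True
```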

26 Three Major Differences — The Grossberg network is a continuous-time network. Layer 1 of the Grossberg network automatically normalizes the input vectors. Layer 2 of the Grossberg network can perform a "soft" competition, rather than the winner-take-all competition of the Kohonen network. This soft competition allows more than one neuron in Layer 2 to learn, which causes the Grossberg network to operate as a feature map.

