Activations, attractors, and associators
Jaap Murre, Universiteit van Amsterdam and Universiteit Utrecht
murre@psy.uva.nl
Quiz: In what way does a neural-network neuron ('node') abstract from a biological neuron? Name at least 5 characteristics.
Overview
- Interactive activation model
- Hopfield networks
- Constraint satisfaction
- Attractors
- Traveling salesman problem
- Hebb rule and Hopfield networks
- Bidirectional associative networks
- Linear associative networks
Much of perception is dealing with ambiguity [Figure: the word 'LAB' written with an ambiguous letter]
Many interpretations are processed in parallel [Figure: the word 'CAB' written with an ambiguous letter]
The final interpretation must satisfy many constraints. In the recognition of letters and words:
i. Only one word can occur at a given position
ii. Only one letter can occur at a given position
iii. A letter-on-a-position activates a word
iv. A feature-on-a-position activates a letter
[Figure: interactive activation network with letter-at-position nodes L, C, A, P, B and word nodes LAP, CAP, CAB]
i. Only one word can occur at a given position
ii. Only one letter can occur at a given position
iii. A letter-on-a-position activates a word
iv. A feature-on-a-position activates a letter
Recognition of a letter is a process of constraint satisfaction.
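A minimal sketch of this settling process in Python. The unit set, weight values, update rule, and step size are illustrative assumptions, not the original interactive activation model; the sketch only shows how mutual inhibition (constraints i and ii) and letter-to-word excitation (constraint iii) let the network settle on one interpretation of an ambiguous input.

```python
import numpy as np

words = ["LAP", "CAP", "CAB"]
letters = [("L", 0), ("C", 0), ("A", 1), ("P", 2), ("B", 2)]  # letter-at-position units

units = words + [f"{ch}@{pos}" for ch, pos in letters]
idx = {u: k for k, u in enumerate(units)}
n = len(units)
W = np.zeros((n, n))

# Constraints i and ii: units competing for the same slot inhibit each other.
for a_, b_ in [("LAP", "CAP"), ("LAP", "CAB"), ("CAP", "CAB"),  # one word per position
               ("L@0", "C@0"), ("P@2", "B@2")]:                 # one letter per position
    W[idx[a_], idx[b_]] = W[idx[b_], idx[a_]] = -1.0

# Constraint iii: a letter-on-a-position excites the words containing it.
for w in words:
    for pos, ch in enumerate(w):
        u = f"{ch}@{pos}"
        if u in idx:
            W[idx[w], idx[u]] = W[idx[u], idx[w]] = +1.0

# Ambiguous input: position 2 could be P or B; L and A are clear (constraint iv).
a = np.zeros(n)
for u in ("L@0", "A@1", "P@2", "B@2"):
    a[idx[u]] = 0.5

# Repeatedly nudge activations toward values consistent with the constraints.
for _ in range(30):
    a = np.clip(a + 0.1 * (W @ a), 0.0, 1.0)

print({u: round(a[idx[u]], 2) for u in words})  # LAP receives the most support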
Hopfield (1982)
- Bipolar activations: -1 or 1
- Symmetric weights, no self weights: wᵢⱼ = wⱼᵢ
- Asynchronous update rule: select one neuron at random and update it
- Simple threshold rule for updating
Energy of a Hopfield network
Energy: E = -½ Σᵢⱼ wⱼᵢ aᵢ aⱼ
Collecting the terms that involve node j: Eⱼ = -½ Σᵢ (wⱼᵢ aᵢ + wᵢⱼ aᵢ) aⱼ = -Σᵢ wⱼᵢ aᵢ aⱼ (using wᵢⱼ = wⱼᵢ)
The net input to node j is Σᵢ wⱼᵢ aᵢ = netⱼ
Thus, we can write Eⱼ = -netⱼ aⱼ
Given a net input netⱼ, find aⱼ so that -netⱼ aⱼ is minimized:
- If netⱼ is positive, set aⱼ to 1
- If netⱼ is negative, set aⱼ to -1
- If netⱼ is zero, don't care (leave aⱼ as is)
This activation rule ensures that the energy never increases. Hence, the energy will eventually reach a minimum value.
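A minimal sketch of the asynchronous update rule and the energy function above. The random symmetric weight matrix is purely for illustration; storing actual patterns comes later, with the Hebb rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, a):
    # E = -1/2 * sum_ij w_ji * a_i * a_j  (self weights are zero)
    return -0.5 * a @ W @ a

def hopfield_settle(W, a, steps=1000):
    # Asynchronous updates: pick a random node and set it to the sign
    # of its net input; this never increases the energy.
    a = a.copy()
    n = len(a)
    for _ in range(steps):
        j = rng.integers(n)
        net_j = W[j] @ a
        if net_j > 0:
            a[j] = 1
        elif net_j < 0:
            a[j] = -1
        # net_j == 0: leave a_j as is
    return a

# Tiny demo with a random symmetric weight matrix and no self weights.
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

a = rng.choice([-1, 1], size=n)
print("energy before:", energy(W, a))
a = hopfield_settle(W, a)
print("energy after: ", energy(W, a))
```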
Attractor
- An attractor is a stationary network state (a configuration of activation values)
- In such a state, the energy cannot be decreased any further by flipping a single activation value
- It may still be possible to reach a deeper attractor by flipping many nodes at once
- Conclusion: the Hopfield rule does not guarantee that an absolute energy minimum will be reached
[Figure: energy landscape with a local minimum and a global minimum; both are attractors]
Example: the 8-queens problem
Place 8 queens on a chess board such that they are not able to take each other. This implies the following three constraints:
- 1 queen per column
- 1 queen per row
- 1 queen on any diagonal
This encoding of the constraints ensures that the attractors of the network correspond to valid solutions.
The constraints are satisfied by inhibitory connections [Figure: a board-square node inhibits all nodes in its column, row, and diagonals]
Problem: how do we ensure that exactly 8 nodes are 1?
- A term may be added to the activation rule to control for this
- Binary nodes with a bias may be used
- It is also possible to use continuous-valued nodes (e.g., between 0 and 1) with Hopfield networks
A sketch of this encoding follows below.
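A sketch of the encoding with binary nodes and a bias, following the options just listed. The inhibition strength and bias value are illustrative assumptions. Note that, as discussed under attractors, the network may settle in a local minimum with fewer than 8 queens placed.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 8
n = N * N                       # one node per board square
W = np.zeros((n, n))

# Inhibitory connections between squares sharing a row, column, or diagonal.
for i in range(n):
    for j in range(i + 1, n):
        r1, c1 = divmod(i, N)
        r2, c2 = divmod(j, N)
        if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
            W[i, j] = W[j, i] = -1.0

# Binary nodes with a positive bias: a node switches on only when no
# conflicting square is active (the bias is smaller than one inhibitory input).
a = rng.integers(0, 2, size=n).astype(float)
bias = 0.5
for _ in range(20000):
    j = rng.integers(n)
    a[j] = 1.0 if W[j] @ a + bias > 0 else 0.0

board = a.reshape(N, N).astype(int)
print(board)
print("queens placed:", board.sum())   # 8 at a global minimum, fewer at a local one
```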
Traveling Salesman Problem: find the shortest tour that visits each city exactly once. It can be encoded in a Hopfield network in much the same way as the 8-queens problem, with an additional term that penalizes tour length; a sketch follows below.
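A sketch of one possible encoding in the same style, loosely after Hopfield and Tank's formulation but with binary nodes and made-up penalty strengths A and B. It illustrates the idea rather than being a reliable TSP solver: like the 8-queens network, it can settle in poor local minima. Node (i, t) stands for "city i is visited at tour position t".

```python
import numpy as np

rng = np.random.default_rng(4)

C = 5                                    # number of cities
cities = rng.random((C, 2))
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

A, B = 2.0, 1.0                          # illustrative penalty strengths
n = C * C                                # node (i, t): city i at tour position t
W = np.zeros((n, n))
for i in range(C):
    for t in range(C):
        u = i * C + t
        for j in range(C):
            for s in range(C):
                v = j * C + s
                if u == v:
                    continue
                if i == j or t == s:               # one position per city,
                    W[u, v] -= A                   # one city per position
                if i != j and (s - t) % C in (1, C - 1):
                    W[u, v] -= B * D[i, j]         # penalize long transitions

a = rng.integers(0, 2, size=n).astype(float)
bias = A / 2
for _ in range(50000):
    u = rng.integers(n)
    a[u] = 1.0 if W[u] @ a + bias > 0 else 0.0

print(a.reshape(C, C).astype(int))       # rows: cities, columns: tour positions
```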
The energy minimization question can also be turned around. Given aᵢ and aⱼ, how should we set the weight wⱼᵢ = wᵢⱼ so that the energy is minimized?
E = -½ wⱼᵢ aᵢ aⱼ, so:
- when aᵢ aⱼ = 1, wⱼᵢ must be positive
- when aᵢ aⱼ = -1, wⱼᵢ must be negative
For example, wⱼᵢ = μ aᵢ aⱼ, where μ is a learning constant
Hebb and Hopfield
- When used with Hopfield-type activation rules, the Hebb learning rule places patterns at attractors
- If a network has n nodes, 0.15n random patterns can be reliably stored by such a system
- For complete retrieval it is typically necessary to present the network with over 90% of the original pattern
A sketch of storage and retrieval follows below.
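A sketch of Hebbian storage and retrieval under these assumptions. The 1/n scaling of the summed outer products is a common convention, not from the slides, and the cue keeps 95% of the pattern intact, in line with the "over 90%" figure above.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100                       # nodes
m = int(0.15 * n)             # ~0.15n patterns, the capacity cited above
patterns = rng.choice([-1, 1], size=(m, n))

# Hebb rule: accumulate outer products (symmetric, zero self weights).
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

# Cue: a stored pattern with 5 of 100 entries flipped (95% intact).
p = patterns[0]
cue = p.copy()
flip = rng.choice(n, size=5, replace=False)
cue[flip] *= -1

# Settle with asynchronous threshold updates.
a = cue.astype(float)
for _ in range(5000):
    j = rng.integers(n)
    net = W[j] @ a
    if net != 0:
        a[j] = np.sign(net)

# At the 0.15n capacity limit, retrieval is usually (not always) complete.
print("overlap with stored pattern:", (a == p).mean())
```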
Bidirectional Associative Memories (BAM; Kosko, 1988)
- Uses binary nodes (0 or 1)
- Symmetric weights
- An input layer and an output layer
- Layers are updated in turn, using a threshold activation rule
- Nodes within a layer are updated synchronously
BAM
- BAM is in fact a Hopfield network with two layers of nodes
- Within a layer, weights are 0: these neurons do not depend on each other (no mutual inputs)
- If they are updated synchronously, there is therefore no danger of increasing the network energy
- BAM is similar to the core of Grossberg's Adaptive Resonance Theory (Lecture 4)
A minimal sketch follows below.
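A minimal BAM sketch with binary (0/1) nodes. Storing via the outer product of bipolar (±1) versions of the patterns is a common BAM convention and an assumption here, as are the example patterns.

```python
import numpy as np

def threshold(net, old):
    # Threshold rule applied to a whole layer at once (synchronous update):
    # 1 if net input is positive, 0 if negative, unchanged if exactly 0.
    return np.where(net > 0, 1, np.where(net < 0, 0, old))

x = np.array([1, 0, 1, 0, 1, 0])        # input-layer pattern
y = np.array([1, 1, 0])                 # output-layer pattern

# Hebb-style storage on bipolar (2a - 1) versions of the patterns.
W = np.outer(2 * y - 1, 2 * x - 1)

# Recall from a corrupted input, updating the layers in turn.
x_cue = np.array([1, 0, 1, 0, 1, 1])    # last bit flipped
y_hat = threshold(W @ (2 * x_cue - 1), np.zeros(3, dtype=int))
x_hat = threshold(W.T @ (2 * y_hat - 1), x_cue)
print("recovered y:", y_hat)            # [1 1 0]
print("cleaned-up x:", x_hat)           # [1 0 1 0 1 0]
```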
Linear Associative Networks
- Invented independently by Kohonen (1972), Nakano (1972), and Anderson (1972)
- Two layers
- Linear activation rule: activation is equal to net input
- Can store patterns
- Their behavior is mathematically tractable using matrix algebra
Associating an input vector p with an output vector q
Storage: W = α q pᵀ, with α = (pᵀp)⁻¹
Recall: Wp = α q pᵀp = (pᵀp)⁻¹(pᵀp) q = q
The inner product pᵀp gives a scalar. For p = (3, 0, 1, 4, 0, 1)ᵀ:
pᵀp = 9 + 0 + 1 + 16 + 0 + 1 = 27, so α = (pᵀp)⁻¹ = 1/27
The outer product qpᵀ gives a matrix. For output vector q = (1, 2, 0, 2, 4, 1)ᵀ and input vector pᵀ = (3, 0, 1, 4, 0, 1):

qpᵀ =
  3  0  1  4  0  1
  6  0  2  8  0  2
  0  0  0  0  0  0
  6  0  2  8  0  2
 12  0  4 16  0  4
  3  0  1  4  0  1

This is the weight matrix W divided by the constant α (i.e., W = qpᵀ/27).
Final weight matrix: W = α qpᵀ = qpᵀ/27
Recall: Wp = q. The first two components (weight-matrix rows times the input vector):
0.11·3 + 0·0 + 0.04·1 + 0.15·4 + 0·0 + 0.04·1 = 1
0.22·3 + 0·0 + 0.07·1 + 0.30·4 + 0·0 + 0.07·1 = 2
The remaining components recover q = (1, 2, 0, 2, 4, 1)ᵀ in the same way.
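The same worked example in a few lines of NumPy:

```python
import numpy as np

p = np.array([3.0, 0, 1, 4, 0, 1])      # input vector from the slides
q = np.array([1.0, 2, 0, 2, 4, 1])      # output vector

alpha = 1.0 / (p @ p)                   # (p^T p)^-1 = 1/27
W = alpha * np.outer(q, p)              # storage: W = alpha * q p^T

print(np.round(W, 2))                   # first row: [0.11 0. 0.04 0.15 0. 0.04]
print(W @ p)                            # recall: Wp = [1. 2. 0. 2. 4. 1.] = q
```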
Storing n patterns
Storage: Wₖ = αₖ qₖ pₖᵀ, with αₖ = (pₖᵀpₖ)⁻¹
W = W₁ + W₂ + … + Wₖ + … + Wₙ
Recall: Wpₖ = αₖ qₖ pₖᵀpₖ + Error = qₖ + Error
Error = W₁pₖ + … + Wₕpₖ + … + Wₙpₖ (the term Wₖpₖ excluded), which is 0 only if pₕᵀpₖ = 0 for all h ≠ k
Conclusion
- LANs work well only if the input patterns are (nearly) orthogonal
- If an input pattern overlaps with others, recall will be contaminated with the output patterns of those overlapping patterns
- It is therefore important that input patterns are orthogonal (i.e., have little overlap)
LANs have limited representational power: for every three-layer LAN there exists an equivalent two-layer LAN.
Proof: suppose that q = Wp and r = Vq; then r = Vq = VWp = Xp, with X = VW.
[Figure: a three-layer network p → q → r with weight matrices W and V, and the equivalent two-layer network p → r with weight matrix X]
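A quick numerical check of this equivalence (the layer sizes and random weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.standard_normal((4, 6))   # p -> q
V = rng.standard_normal((3, 4))   # q -> r
p = rng.standard_normal(6)

r_three_layer = V @ (W @ p)       # r = Vq with q = Wp
X = V @ W                         # a single equivalent mapping
r_two_layer = X @ p

print(np.allclose(r_three_layer, r_two_layer))   # True
```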
Summing up
There is a wide variety of ways to store and retrieve patterns in neural networks based on the Hebb rule:
- Willshaw network (associator)
- BAM
- LAN
- Hopfield network
In Hopfield networks, stored patterns can be viewed as attractors.
Summing up
Finding an attractor is a process of constraint satisfaction. It can be used as:
- a recognition model
- a memory retrieval model
- a way of solving the traveling salesman problem and other difficult problems