Associative Neural Networks (Hopfield) Sule Yildirim 01/11/2004.


1 Associative Neural Networks (Hopfield) Sule Yildirim 01/11/2004

2 Recurrent Neural Networks (Sule Yildirim, IDI, 01/11/2004)
A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network. After a new input is applied, the network output is calculated and fed back to adjust the input. This process is repeated until the output becomes constant. In 1982, John Hopfield formulated the physical principle of storing information in a dynamically stable network.

3 Single-layer n-neuron Hopfield Network
[Diagram: a single layer of neurons 1, 2, …, i, …, n with inputs x1, x2, …, xi, …, xn and outputs y1, y2, …, yi, …, yn; each output is fed back to the inputs.]

4 Activation Function
The current state of the network is determined by the current outputs of all neurons, y1, y2, …, yn. Thus, for a single-layer n-neuron network, the state can be defined by the state vector as

Y = [y1, y2, …, yn]^T

Each neuron uses the sign activation function:

y = +1 if X > θ, y = -1 if X < θ (if X = θ, the neuron keeps its previous output)

where X is the neuron's weighted input and θ is its threshold.
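A minimal NumPy sketch of this sign activation (the tie-breaking rule of keeping the previous output when X equals the threshold is an assumption; conventions vary):

```python
import numpy as np

def sign_activation(x, y_prev, theta=0.0):
    """Hopfield sign activation: +1 above the threshold, -1 below,
    keep the previous output on a tie (a common convention)."""
    return np.where(x > theta, 1, np.where(x < theta, -1, y_prev))

state = sign_activation(np.array([4.0, -2.0, 0.0]), np.array([1, 1, -1]))
print(state)  # [ 1 -1 -1]
```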

5 Synaptic Weights in the Hopfield Network
In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

W = Σ_{m=1}^{M} Y_m Y_m^T - M I

where M is the number of states to be memorized by the network, Y_m is the n-dimensional binary vector representing the m-th memorized state, I is the n x n identity matrix, and the superscript T denotes matrix transposition. (Subtracting M I zeroes the main diagonal, so no neuron feeds back directly to itself.)
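The weight formula above can be sketched in NumPy; stacking the memories as rows turns the sum of outer products into a single matrix product (the function name is illustrative):

```python
import numpy as np

def hopfield_weights(memories):
    """W = sum_m Y_m Y_m^T - M*I, for M memories stacked as rows (M x n)."""
    Y = np.asarray(memories)
    M, n = Y.shape
    # Y.T @ Y equals the sum of the outer products Y_m Y_m^T
    return Y.T @ Y - M * np.eye(n, dtype=Y.dtype)

W = hopfield_weights([[1, 1, 1], [-1, -1, -1]])
print(W)
# [[0 2 2]
#  [2 0 2]
#  [2 2 0]]
```

Note that W is symmetric with a zero main diagonal, as required for stable storage.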

6 Geometric Representation of the Hopfield Network
[Diagram: a cube whose eight vertices are the states (1, 1, 1), (1, 1, -1), (-1, 1, -1), (-1, 1, 1), (-1, -1, 1), (1, -1, 1), (1, -1, -1) and (-1, -1, -1).]
A network with n neurons has 2^n possible states and is associated with an n-dimensional hypercube. When a new input vector is applied, the network moves from one state vertex to another until it becomes stable.

7 An Example of Memorization
Memorize the two states (1, 1, 1) and (-1, -1, -1), i.e.

Y1 = [1 1 1]^T, Y2 = [-1 -1 -1]^T

The transposed forms of these vectors are:

Y1^T = [1 1 1], Y2^T = [-1 -1 -1]

The 3 x 3 identity matrix is:

I = [1 0 0; 0 1 0; 0 0 1]

8 Example Cont'd…
The weight matrix is determined as follows:

W = Y1 Y1^T + Y2 Y2^T - 2I

So,

W = [0 2 2; 2 0 2; 2 2 0]

Next, the network is tested by the sequence of input vectors X1 and X2, which are equal to the output (or target) vectors Y1 and Y2, respectively.

9 Example Cont'd… The Network Is Tested
First activate the network by applying the input vector X. Then calculate the actual output vector Y, and finally compare the result with the initial input vector X:

Y_m = sign(W X_m - θ), m = 1, 2

where θ is the threshold vector. Assume all thresholds to be zero for this example. Thus Y1 = X1 and Y2 = X2, so both states, (1, 1, 1) and (-1, -1, -1), are said to be stable.

10 Example Cont'd… Other Possible States

Possible state | Iteration | Inputs (x1 x2 x3) | Outputs (y1 y2 y3) | Fundamental memory
 1  1  1       |    0      |  1  1  1          |  1  1  1           |  1  1  1
-1  1  1       |    0      | -1  1  1          |  1  1  1           |  1  1  1
 1 -1  1       |    0      |  1 -1  1          |  1  1  1           |  1  1  1
 1  1 -1       |    0      |  1  1 -1          |  1  1  1           |  1  1  1
-1 -1 -1       |    0      | -1 -1 -1          | -1 -1 -1           | -1 -1 -1
-1 -1  1       |    0      | -1 -1  1          | -1 -1 -1           | -1 -1 -1
-1  1 -1       |    0      | -1  1 -1          | -1 -1 -1           | -1 -1 -1
 1 -1 -1       |    0      |  1 -1 -1          | -1 -1 -1           | -1 -1 -1

11 Example Cont'd… Errors Relative to the Fundamental Memories
The fundamental memory (1, 1, 1) attracts the unstable states (-1, 1, 1), (1, -1, 1) and (1, 1, -1). The fundamental memory (-1, -1, -1) attracts the unstable states (-1, -1, 1), (-1, 1, -1) and (1, -1, -1). Each of these unstable states differs from its fundamental memory in a single bit, and the network corrects that error in one iteration.
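This attraction can be checked directly; a sketch using one synchronous update with zero thresholds (keeping the previous state on a tie is an assumed convention):

```python
import numpy as np

W = np.array([[0, 2, 2],
              [2, 0, 2],
              [2, 2, 0]])  # weight matrix from the example

def update(W, x):
    h = W @ x
    return np.where(h > 0, 1, np.where(h < 0, -1, x))

for probe in ([-1, 1, 1], [1, -1, 1], [1, 1, -1]):
    print(probe, '->', update(W, np.array(probe)).tolist())
# each one-bit-corrupted state is mapped back to [1, 1, 1]
```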

12 Hopfield Network Training Algorithm
Step 1: Storage. The n-neuron Hopfield network is required to store a set of M fundamental memories, Y1, Y2, …, YM. The synaptic weight from neuron i to neuron j is calculated as

w_ij = Σ_{m=1}^{M} y_m,i y_m,j for i ≠ j, and w_ij = 0 for i = j

where y_m,i and y_m,j are the ith and jth elements of the fundamental memory Y_m, respectively.

13 Hopfield Network Training Algorithm
In matrix form, the synaptic weights between neurons are represented as

W = Σ_{m=1}^{M} Y_m Y_m^T - M I

The Hopfield network can store a set of fundamental memories if the weight matrix is symmetrical, with zeros on its main diagonal; that is, w_ij = w_ji and w_ii = 0. Once the weights are calculated, they remain fixed.

14 Hopfield Network Training Algorithm
Step 2: Testing. The network must recall any fundamental memory Y_m when presented with it as an input:

y_m,i = sign( Σ_{j=1}^{n} w_ij x_m,j - θ_i )

where y_m,i is the ith element of the actual output vector Y_m, and x_m,j is the jth element of the input vector X_m. In matrix form,

Y_m = sign(W X_m - θ), m = 1, 2, …, M

15 Hopfield Network Training Algorithm
Step 3: Retrieval. (If all fundamental memories are recalled perfectly, proceed to this step.) Present an unknown n-dimensional vector (probe), X, to the network and retrieve a stable state. That is:
a) Initialize the retrieval algorithm of the Hopfield network by setting

x_j(0) = x_j, j = 1, 2, …, n

and calculate the initial state for each neuron:

16 Hopfield Network Training Algorithm
Step 3: Retrieval (cont'd)

y_j(0) = sign( Σ_{i=1}^{n} w_ij x_i(0) - θ_j ), j = 1, 2, …, n

where x_j(0) is the jth element of the probe vector X at iteration p = 0, and y_j(0) is the state of neuron j at iteration p = 0. In matrix form, the state vector at iteration p = 0 is presented as

Y(0) = sign(W X(0) - θ)

b) Update the elements of the state vector, Y(p), according to the following rule:

y_j(p + 1) = sign( Σ_{i=1}^{n} w_ij y_i(p) - θ_j )

17 Hopfield Network Training Algorithm
Step 3: Retrieval (cont'd). Neurons for updating are selected asynchronously, that is, randomly and one at a time. Repeat the iteration until the state vector becomes unchanged, in other words, until a stable state is reached. The condition for stability can be defined as:

y_j(p + 1) = sign( Σ_{i=1}^{n} w_ij y_i(p) - θ_j ) = y_j(p), j = 1, 2, …, n
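The asynchronous retrieval step can be sketched as follows (the function name and iteration cap are illustrative; keeping the current state when the weighted input equals the threshold is an assumed convention):

```python
import numpy as np

def retrieve(W, probe, theta=0.0, max_sweeps=100, seed=0):
    """Asynchronous Hopfield retrieval: update one randomly chosen
    neuron at a time until the state vector stops changing."""
    rng = np.random.default_rng(seed)
    y = np.array(probe)
    n = len(y)
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(n):    # asynchronous, random order
            h = W[j] @ y - theta
            new = 1 if h > 0 else (-1 if h < 0 else y[j])
            if new != y[j]:
                y[j] = new
                changed = True
        if not changed:                 # stable: Y(p+1) = Y(p)
            return y
    return y

W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]])  # from the earlier example
print(retrieve(W, [1, -1, 1]))  # converges to the memory [1 1 1]
```

Because the weight matrix is symmetric with a zero diagonal, each asynchronous flip can only lower the network's energy, which is why the loop terminates in a stable state.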

