Associative Neural Networks (Hopfield) Sule Yildirim 01/11/2004.

Recurrent Neural Networks
A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network. After applying a new input, the network output is calculated and fed back to adjust the input. This process is repeated until the output becomes constant. In 1982, John Hopfield formulated the physical principle of storing information in a dynamically stable network.

Single-layer n-neuron Hopfield Network
[Figure: a single layer of n neurons numbered 1, 2, ..., i, ..., n, with inputs x_1, x_2, ..., x_i, ..., x_n and outputs y_1, y_2, ..., y_i, ..., y_n fed back to the inputs.]

Activation Function
The current state of the network is determined by the current outputs of all neurons, y_1, y_2, ..., y_n. Thus, for a single-layer n-neuron network, the state can be defined by the state vector

    Y = [y_1, y_2, ..., y_n]^T

Each neuron computes its output with the sign activation function:

    y = +1            if X > 0
    y = -1            if X < 0
    y is unchanged    if X = 0

where X is the neuron's weighted input.
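As a minimal sketch of this activation rule (not from the original slides; the function name and NumPy usage are our own):

    import numpy as np

    def sign_activation(net, previous):
        # Bipolar sign activation: +1 for positive weighted input,
        # -1 for negative; a zero input keeps the previous output.
        return np.where(net > 0, 1, np.where(net < 0, -1, previous))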

Synaptic Weights in the Hopfield Network
In the Hopfield network, the synaptic weights between neurons are usually represented in matrix form as

    W = Σ_{m=1}^{M} Y_m Y_m^T − M I

where M is the number of states to be memorized by the network, Y_m is the n-dimensional binary vector for the mth state, I is the n×n identity matrix, and superscript T denotes matrix transposition.
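A small sketch of this storage rule (our own naming, assuming bipolar memory vectors and NumPy):

    import numpy as np

    def hopfield_weights(memories):
        # memories: list of n-dimensional bipolar (+1/-1) vectors Y_m
        M = len(memories)
        n = len(memories[0])
        W = np.zeros((n, n))
        for Y in memories:
            Y = np.asarray(Y, dtype=float).reshape(n, 1)
            W += Y @ Y.T          # outer product Y_m Y_m^T
        W -= M * np.eye(n)        # subtract M I to zero the diagonal
        return W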

Geometric Representation of the Hopfield Network
[Figure: a cube whose eight vertices are the possible states of a three-neuron network: (1,1,1), (1,1,-1), (-1,1,-1), (-1,1,1), (-1,-1,1), (1,-1,1), (1,-1,-1), (-1,-1,-1).]
A network with n neurons has 2^n possible states and is associated with an n-dimensional hypercube. When a new input vector is applied, the network moves from one state vertex to another until it becomes stable.
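For illustration (a snippet of our own, not from the slides), the 2^n states can be enumerated directly:

    from itertools import product

    # The 2**n possible states of an n-neuron network are the
    # vertices of an n-dimensional hypercube.
    n = 3
    states = list(product([-1, 1], repeat=n))
    print(len(states))  # 8 vertices for n = 3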

An Example of Memorization
Memorize the two states (1,1,1) and (-1,-1,-1), i.e.

    Y_1 = [1 1 1]^T,   Y_2 = [-1 -1 -1]^T

The transposed forms of these vectors:

    Y_1^T = [1 1 1],   Y_2^T = [-1 -1 -1]

The 3 × 3 identity matrix is:

    I = | 1 0 0 |
        | 0 1 0 |
        | 0 0 1 |

Example Cont'd
The weight matrix is determined as follows:

    W = Y_1 Y_1^T + Y_2 Y_2^T − 2I

So,

    W = | 0 2 2 |
        | 2 0 2 |
        | 2 2 0 |

Next, the network is tested by the sequence of input vectors X_1 and X_2, which are equal to the output (or target) vectors Y_1 and Y_2, respectively.
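This computation can be checked numerically; a small sketch of our own, using NumPy:

    import numpy as np

    Y1 = np.array([1, 1, 1])
    Y2 = np.array([-1, -1, -1])
    # W = Y1 Y1^T + Y2 Y2^T - 2I
    W = np.outer(Y1, Y1) + np.outer(Y2, Y2) - 2 * np.eye(3)
    print(W)
    # [[0. 2. 2.]
    #  [2. 0. 2.]
    #  [2. 2. 0.]]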

Example Cont'd: The Network Is Tested
First activate the network by applying the input vector X. Then calculate the actual output vector Y, and finally compare the result with the initial input vector X:

    Y_m = sign(W X_m − θ),   m = 1, 2

where θ is the threshold vector. All thresholds are assumed to be zero in this example. Thus Y_1 = X_1 and Y_2 = X_2, so both states, (1,1,1) and (-1,-1,-1), are said to be stable.
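A sketch of this test (our own code, assuming the example weight matrix above and zero thresholds):

    import numpy as np

    def recall(W, X, theta=None):
        # One synchronous pass: Y = sign(W X - theta);
        # a zero net input keeps the corresponding element of X.
        theta = np.zeros(len(X)) if theta is None else theta
        net = W @ X - theta
        return np.where(net > 0, 1, np.where(net < 0, -1, X))

    W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]])
    for X in ([1, 1, 1], [-1, -1, -1]):
        X = np.array(X)
        print(X, '->', recall(W, X))  # both memories recall themselves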

Example Cont'd: Other Possible States
[Table: the remaining possible states as inputs (x_1, x_2, x_3), the outputs (y_1, y_2, y_3) after each iteration, and the fundamental memory to which each state converges.]

Example Cont'd: Error Compared to Fundamental Memory
The fundamental memory (1,1,1) attracts the unstable states (-1,1,1), (1,-1,1) and (1,1,-1); the fundamental memory (-1,-1,-1) attracts the unstable states (-1,-1,1), (-1,1,-1) and (1,-1,-1). Each unstable state differs from the memory that attracts it in exactly one bit.
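These basins of attraction can be verified by applying one synchronous update to each of the eight states (a sketch under the same assumptions as above):

    from itertools import product
    import numpy as np

    W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]])
    for X in product([-1, 1], repeat=3):
        X = np.array(X)
        net = W @ X
        # A zero net input leaves the corresponding neuron unchanged.
        Y = np.where(net > 0, 1, np.where(net < 0, -1, X))
        print(tuple(X), '->', tuple(Y))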

Hopfield Network Training Algorithm
Step 1: Storage. The n-neuron Hopfield network is required to store a set of M fundamental memories, Y_1, Y_2, ..., Y_M. The synaptic weight from neuron i to neuron j is calculated as

    w_ij = Σ_{m=1}^{M} y_m,i y_m,j   for i ≠ j,     w_ij = 0   for i = j

where y_m,i and y_m,j are the ith and jth elements of the fundamental memory Y_m, respectively.
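Element by element, this storage rule could look like the following sketch (plain Python, our own naming):

    def storage_weights(memories):
        # w_ij = sum over memories of y_m,i * y_m,j for i != j; w_ii = 0.
        n = len(memories[0])
        W = [[0] * n for _ in range(n)]
        for Y in memories:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        W[i][j] += Y[i] * Y[j]
        return W

    # storage_weights([[1, 1, 1], [-1, -1, -1]]) reproduces the
    # example matrix above.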

Hopfield Network Training Algorithm
In matrix form, the synaptic weights between neurons are represented as

    W = Σ_{m=1}^{M} Y_m Y_m^T − M I

The Hopfield network can store a set of fundamental memories if the weight matrix is symmetrical, i.e. w_ij = w_ji, with zeros on its main diagonal. Once the weights are calculated, they remain fixed.

Hopfield Network Training Algorithm
Step 2: Testing. The network must recall any fundamental memory Y_m when presented with it as an input:

    y_m,i = sign( Σ_{j=1}^{n} w_ij x_m,j − θ_i )

where y_m,i is the ith element of the actual output vector Y_m, and x_m,j is the jth element of the input vector X_m. In matrix form,

    Y_m = sign(W X_m − θ),   m = 1, 2, ..., M

Hopfield Network Training Algorithm
Step 3: Retrieval (proceed to this step only if all fundamental memories are recalled perfectly). Present an unknown n-dimensional vector (a probe), X, to the network and retrieve a stable state. That is:
a) Initialize the retrieval algorithm of the Hopfield network by setting

    x_j(0) = x_j,   j = 1, 2, ..., n

and calculate the initial state for each neuron:

    y_j(0) = sign( Σ_{i=1}^{n} w_ij x_i(0) − θ_j ),   j = 1, 2, ..., n

Hopfield Network Training Algorithm
Step 3: Retrieval (cont'd)
where x_j(0) is the jth element of the probe vector X at iteration p = 0, and y_j(0) is the state of neuron j at iteration p = 0. In matrix form, the state vector at iteration p = 0 is presented as

    Y(0) = sign(W X(0) − θ)

b) Update the elements of the state vector, Y(p), according to the rule

    y_i(p+1) = sign( Σ_{j=1}^{n} w_ij y_j(p) − θ_i )
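One such update step might be sketched as follows (our own NumPy code; the update rule is the one above):

    import numpy as np

    def async_update(W, Y, theta=None, rng=None):
        # Update one randomly chosen neuron in place (Y is a NumPy array):
        # y_i(p+1) = sign( sum_j w_ij * y_j(p) - theta_i ).
        rng = np.random.default_rng() if rng is None else rng
        theta = np.zeros(len(Y)) if theta is None else theta
        i = rng.integers(len(Y))
        net = W[i] @ Y - theta[i]
        if net != 0:                 # zero net leaves the neuron unchanged
            Y[i] = 1 if net > 0 else -1
        return Y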

Hopfield Network Training Algorithm
Step 3: Retrieval (cont'd)
Neurons for updating are selected asynchronously, that is, randomly and one at a time. Repeat the iteration until the state vector becomes unchanged, or in other words, until a stable state is reached. The condition for stability can be defined as

    y_i(p+1) = sign( Σ_{j=1}^{n} w_ij y_j(p) − θ_i ) = y_i(p),   i = 1, 2, ..., n

or, in matrix form, Y(p+1) = sign(W Y(p) − θ) = Y(p).
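Putting the retrieval step together (a sketch under the same assumptions; the names and the stopping strategy of one full unchanged sweep are our own):

    import numpy as np

    def retrieve(W, probe, theta=None, max_sweeps=100, seed=0):
        # Asynchronous retrieval: update neurons one at a time in random
        # order until a full sweep leaves the state vector unchanged.
        rng = np.random.default_rng(seed)
        theta = np.zeros(len(probe)) if theta is None else theta
        Y = np.array(probe, dtype=float)
        for _ in range(max_sweeps):
            old = Y.copy()
            for i in rng.permutation(len(Y)):
                net = W[i] @ Y - theta[i]
                if net != 0:
                    Y[i] = 1.0 if net > 0 else -1.0
            if np.array_equal(Y, old):  # stable state reached
                break
        return Y

    W = np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]], dtype=float)
    print(retrieve(W, [1, -1, 1]))  # converges to the memory (1, 1, 1)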