Neural Networks Lecture 20: Interpolative Associative Memory (November 30, 2010)


Slide 1: Associative Networks
Associative networks are able to store a set of patterns and output the one that is associated with the current input. The patterns are stored in the interconnections between neurons, similar to how memory in our brain is assumed to work.
Hetero-association: mapping input vectors to output vectors in a different vector space (e.g., an English-to-German translator).
Auto-association: input and output vectors are in the same vector space (e.g., a spelling corrector).

Slide 2: Interpolative Associative Memory
For hetero-association, we can use a simple two-layer network of the following form:
[Diagram: input neurons I_1, I_2, ..., I_N, each connected to every output neuron O_1, O_2, ..., O_M]

Slide 3: Interpolative Associative Memory
Sometimes it is possible to obtain a training set with orthonormal (that is, normalized and pairwise orthogonal) input vectors. In that case, our two-layer network with linear neurons can solve its task perfectly and does not even require training. We call such a network an interpolative associative memory. You may ask: How does it work?

Slide 4: Interpolative Associative Memory
Well, if you look at the network's output function, you will find that it is just a matrix multiplication:
o = W x, that is, o_m = Σ_n w_mn x_n for m = 1, …, M.

Slide 5: Interpolative Associative Memory
With an orthonormal set of exemplar input vectors (and any associated output vectors), we can simply calculate a weight matrix that realizes the desired function and does not need any training procedure. For exemplars (x_1, y_1), (x_2, y_2), …, (x_P, y_P) we obtain the following weight matrix W:
W = Σ_p y_p x_p^T  (sum over p = 1, …, P)
Note that an N-dimensional vector space cannot have a set of more than N orthonormal vectors!
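As a minimal sketch (not part of the lecture), this construction could be written with NumPy as follows; the function name and the small test exemplars are illustrative assumptions:

```python
import numpy as np

def interpolative_memory(X, Y):
    """Weight matrix W = sum_p y_p x_p^T for exemplar pairs (x_p, y_p).

    X: shape (P, N), rows are orthonormal input vectors x_p.
    Y: shape (P, M), rows are the associated output vectors y_p.
    Returns W of shape (M, N) such that W @ x_p equals y_p for every exemplar.
    """
    return Y.T @ X  # equivalent to summing the outer products y_p x_p^T

# Quick check with two orthonormal inputs in R^3 (hypothetical values):
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Y = np.array([[2.0, -1.0],
              [0.5,  3.0]])
W = interpolative_memory(X, Y)
assert np.allclose(W @ X[0], Y[0]) and np.allclose(W @ X[1], Y[1])
```

Because the x_p are orthonormal, W x_q = Σ_p y_p (x_p^T x_q) = y_q, so every exemplar is reproduced exactly.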

Slide 6: Interpolative Associative Memory
Example: Assume that we want to build an interpolative memory with three input neurons and three output neurons. We have the following three exemplars (desired input-output pairs):
[The three exemplar vector pairs were shown on the slide but are not reproduced in this transcript.]

Slide 7: Interpolative Associative Memory
Then we obtain the corresponding weight matrix W (shown on the slide but not reproduced here). If you set the weights w_mn to these values, the network will realize the desired function.
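Since the original exemplar values are not in this transcript, here is a hypothetical three-neuron example in the same spirit, using the standard basis as the orthonormal input set (all concrete numbers below are my own illustrative assumptions, not the lecture's exemplars):

```python
import numpy as np

# Hypothetical exemplars: inputs are the standard basis e_1, e_2, e_3 in R^3.
X = np.eye(3)
Y = np.array([[ 1.0, -1.0,  0.0],   # desired output for e_1
              [ 0.0,  2.0,  1.0],   # desired output for e_2
              [-1.0,  0.0,  3.0]])  # desired output for e_3

W = Y.T @ X              # W = sum_p y_p x_p^T; here the columns of W are exactly the y_p
for x, y in zip(X, Y):
    assert np.allclose(W @ x, y)

# The interpolation is linear: a blend of stored inputs yields the same blend of outputs.
x_new = 0.5 * X[0] + 0.5 * X[1]
print(W @ x_new)         # equals 0.5 * Y[0] + 0.5 * Y[1]
```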

Slide 8: Interpolative Associative Memory
So if you want to implement a linear function R^N → R^M and can provide exemplars with orthonormal input vectors, then an interpolative associative memory is the best solution. It does not require any training procedure, realizes perfect matching of the exemplars, and performs plausible interpolation for new input vectors. Of course, this interpolation is linear.

Slide 9: The Hopfield Network
The Hopfield model is a single-layered recurrent network. Like the associative memory, it is usually initialized with appropriate weights instead of being trained. The network structure looks as follows:
[Diagram: neurons X_1, X_2, ..., X_N, each connected to every other neuron by recurrent connections]

Slide 10: The Hopfield Network
We will first look at the discrete Hopfield model, because its mathematical description is more straightforward. In the discrete model, the output of each neuron is either 1 or –1. In its simplest form, the output function is the sign function, which yields 1 for arguments ≥ 0 and –1 otherwise.

Slide 11: The Hopfield Network
For input-output pairs (x_1, y_1), (x_2, y_2), …, (x_P, y_P), we can initialize the weights in the same way as we did with the associative memory:
W = Σ_p y_p x_p^T
This is identical to the following formula:
w_ij = Σ_p y_p(i) x_p(j),
where x_p(j) is the j-th component of vector x_p, and y_p(i) is the i-th component of vector y_p.

Slide 12: The Hopfield Network
In the discrete version of the model, each component of an input or output vector can only assume the values 1 or –1. The output of a neuron i at time t is then computed according to the following formula:
x_i(t) = sgn( Σ_j w_ij x_j(t−1) )
This recursion can be performed over and over again. In some network variants, external input is added to the internal, recurrent one.
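A minimal sketch of this recursion (my own illustration, assuming synchronous updates and the sign convention sgn(0) = +1 from Slide 10):

```python
import numpy as np

def hopfield_step(W, x):
    """One synchronous update: x_i(t) = sgn( sum_j w_ij * x_j(t-1) )."""
    return np.where(W @ x >= 0, 1, -1)

def hopfield_run(W, x, max_iters=100):
    """Apply the update repeatedly until the state stops changing (or a cap is hit)."""
    for _ in range(max_iters):
        x_next = hopfield_step(W, x)
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x
```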

Slide 13: The Hopfield Network
Usually, the vectors x_p are not orthonormal, so it is not guaranteed that whenever we input some pattern x_p, the output will be y_p; instead, it will be a pattern similar to y_p. Since the Hopfield network is recurrent, its behavior depends on its previous state and is difficult to predict in the general case. However, what happens if we initialize the weights with a set of patterns so that each pattern is associated with itself, (x_1, x_1), (x_2, x_2), …, (x_P, x_P)?

Slide 14: The Hopfield Network
This initialization is performed according to the following equation:
w_ij = Σ_p x_p(i) x_p(j)  for i ≠ j
You see that the weight matrix is symmetric, i.e., w_ij = w_ji. We also demand that w_ii = 0, in which case the network shows an interesting behavior: it can be mathematically proven that under these conditions the network will reach a stable activation state within a finite number of iterations.
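As a sketch of this auto-associative initialization (again an illustration, not the lecture's code), the symmetric, zero-diagonal weight matrix could be built like this:

```python
import numpy as np

def hopfield_weights(patterns):
    """w_ij = sum_p x_p(i) * x_p(j) for i != j, and w_ii = 0.

    patterns: array of shape (P, N) with bipolar (+1/-1) entries.
    The result is symmetric (w_ij = w_ji) with a zero diagonal.
    """
    X = np.asarray(patterns, dtype=float)
    W = X.T @ X                 # sum of outer products x_p x_p^T
    np.fill_diagonal(W, 0.0)    # enforce w_ii = 0
    return W
```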

Slide 15: The Hopfield Network
And what does such a stable state look like? The network associates input patterns with themselves, which means that in each iteration the activation pattern is drawn towards one of those patterns. After converging, the network will most likely present one of the patterns that it was initialized with. Therefore, Hopfield networks can be used to restore incomplete or noisy input patterns.
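To make this concrete, here is a small hypothetical restoration experiment with random bipolar patterns (the 20×20 image data from the following slides is not part of this transcript, so all data below is made up). With only a few stored patterns and moderate noise, the state will usually settle back on the stored pattern, though this is not guaranteed in general:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three random bipolar patterns of length 100 (hypothetical data).
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))
W = patterns.astype(float).T @ patterns
np.fill_diagonal(W, 0.0)

# Corrupt the first stored pattern by flipping each component with probability 0.2.
noisy = patterns[0].copy()
noisy[rng.random(N) < 0.2] *= -1

# Iterate the network until the state no longer changes (capped at 10 steps).
state = noisy
for _ in range(10):
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):
        break
    state = new_state

print("recovered stored pattern:", np.array_equal(state, patterns[0]))
```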

Slide 16: The Hopfield Network
Example: Image reconstruction. A 20×20 discrete Hopfield network was trained with 20 input patterns, including the "face" pattern shown in the left figure and 19 random patterns such as the one on the right.

Slide 17: The Hopfield Network
After providing only one fourth of the "face" image as initial input, the network is able to perfectly reconstruct that image within only two iterations.

Slide 18: The Hopfield Network
Adding noise by flipping each pixel with a probability p = 0.3 does not impair the network's performance. After two steps, the image is perfectly reconstructed.

Slide 19: The Hopfield Network
However, for noise created with p = 0.4, the network is unable to restore the original image. Instead, it converges to one of the 19 random patterns.

Slide 20: The Hopfield Network
Problems with the Hopfield model:
- It cannot recognize patterns that are shifted in position.
- It can only store a very limited number of different patterns.
Nevertheless, the Hopfield model constitutes an interesting neural approach to identifying partially occluded objects and objects in noisy images. These are among the toughest problems in computer vision.