ECE 471/571 - Lecture 19 Hopfield Network.

Types of NN
- Recurrent (feedback during operation)
  - Hopfield
  - Kohonen
  - Associative memory
- Feedforward (no feedback during operation; only during determination of weights)
  - Perceptron
  - MLP

Memory in Humans
- The human brain can lay down and recall memories in both long-term and short-term fashions
- Associative or content-addressable
  - Memory is not isolated - all memories are, in some sense, strings of memories
  - We access a memory by its content - not by its location or the neural pathways
  - Compare to traditional computer memory
- Given incomplete, low-resolution, or partial information, the full memory can be reconstructed

A Simple Hopfield Network
[Figure: a single unit with inputs x1 ... xd, weights w1 ... wd, a bias input with weight -b, and output y; example network of 16x16 nodes]
- Recurrent NN
- No distinct layers
- Every node is connected to every other node
- The connections are bidirectional

Properties
- Able to store certain patterns in a similar fashion to the human brain
- Given partial information, the full pattern can be recovered
- Robustness
  - During an average lifetime many neurons will die, but we do not suffer a catastrophic loss of individual memories (by the time we die we may have lost 20 percent of our original neurons)
- Guarantee of convergence
  - We are guaranteed that the pattern will settle down after a long enough time to some fixed pattern
  - In the language of memory recall, if we start the network off with a pattern of firing which approximates one of the "stable firing patterns" (memories), it will "under its own steam" end up in the nearby well in the energy surface, thereby recalling the original perfect memory

Examples
(Images are from http://www2.psy.uq.edu.au/~brainwav/Manual/Hopfield.html, no longer available)

How Does It Work?
- A set of exemplar patterns is chosen and used to initialize the weights of the network.
- Once this is done, any pattern can be presented to the network, which will respond by displaying the exemplar pattern that is in some sense most similar to the input pattern.
- The output pattern can be read off from the network by reading the states of the units, in the order determined by the mapping of the components of the input vector to the units.

Four Components
- How to train the network?
- How to update a node?
- What sequence should be used when updating nodes?
- How to stop?

Network Initialization
Assumptions:
- The network has N units (nodes)
- The weight from node i to node j is w_ij
- Weights are symmetric: w_ij = w_ji
- Each node has a threshold / bias value associated with it, b_i
- We have M known patterns p^m = (p^m_1, ..., p^m_N), m = 1..M, each of which has N elements
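
The slide lists the assumptions but not the training rule itself. A common choice that satisfies the symmetry assumption w_ij = w_ji is the Hebbian outer-product rule: w_ij = sum over m of p^m_i * p^m_j for i != j, with w_ii = 0. A minimal sketch in Python/NumPy, assuming bipolar (+1/-1) pattern encoding and zero biases (both assumptions, not stated on the slide):

    import numpy as np

    def train_hopfield(patterns):
        # Hebbian outer-product rule: W = sum_m p^m (p^m)^T, zero diagonal.
        # patterns: (M, N) array of bipolar (+1/-1) exemplars (assumed encoding).
        patterns = np.asarray(patterns, dtype=float)
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)      # reinforce weights between co-active nodes
        np.fill_diagonal(w, 0.0)     # no self-connections (w_ii = 0)
        return w                     # symmetric: w_ij = w_ji by construction

For example, train_hopfield([[1, -1, 1, -1], [1, 1, -1, -1]]) returns a symmetric 4x4 weight matrix storing M = 2 patterns over N = 4 nodes.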

Classification
- Suppose we have an input pattern (p_1, ..., p_N) to be classified
- Suppose the state of the i-th node at time t is m_i(t)
- Then m_i(0) = p_i
- Testing: each node's next state is computed from the weighted sum of the other nodes' states, passed through S, the sigmoid function (the update rule is sketched below)
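
The update equation itself appears to have been an image on the original slide and did not survive the transcription. The standard asynchronous update is m_i(t+1) = S( sum_j w_ij * m_j(t) ); the slide names a sigmoid S, while for bipolar patterns the hard-limiting sign function is the usual discrete stand-in. A sketch under those assumptions (random update order, sign nonlinearity, zero biases are all assumptions):

    import numpy as np

    def recall(w, pattern, max_sweeps=100, seed=0):
        # Asynchronous recall: start from m(0) = input pattern, then repeatedly
        # update one node at a time with m_i <- sign(sum_j w_ij m_j)
        # until a full sweep changes nothing (a fixed point).
        rng = np.random.default_rng(seed)
        m = np.asarray(pattern, dtype=float).copy()   # m(0) = p
        for _ in range(max_sweeps):
            changed = False
            for i in rng.permutation(m.size):         # random node order (assumption)
                new = 1.0 if w[i] @ m >= 0 else -1.0  # hard-limit stand-in for S
                if new != m[i]:
                    m[i], changed = new, True
            if not changed:
                return m                              # settled on a fixed pattern
        return m

Combined with the training sketch above, recall(train_hopfield(P), noisy_p) should settle on the stored exemplar nearest to noisy_p, provided the corruption is small and the number of patterns is small relative to N.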

Why Converge? - Energy Descent
- Billiard table model: the surface of the billiard table -> the energy surface
- Energy of the network (the energy expression is reconstructed below)
- The choice of the network weights ensures that minima of the energy function occur at (or near) points representing exemplar patterns
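
The energy expression was also an image that was lost in transcription; the standard Hopfield energy is E(m) = -1/2 * sum_i sum_j w_ij m_i m_j + sum_i b_i m_i. Each asynchronous update can only lower E or leave it unchanged, and E is bounded below, which is why the state must settle. A sketch matching the earlier sketches (zero biases assumed):

    import numpy as np

    def energy(w, m):
        # E = -1/2 m^T W m  (bias terms omitted: b_i = 0 assumed)
        m = np.asarray(m, dtype=float)
        return -0.5 * m @ w @ m

Flipping node i changes E by -(delta m_i) * sum_j w_ij m_j, which the sign update always makes non-positive, so the state only ever rolls downhill on the energy surface.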

Energy Descent
[Figure: energy surface with wells at the stored patterns]

Reference
- John Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the USA, 79(8):2554-2558, April 1982
- Tutorial on Hopfield networks: http://www.cs.ucla.edu/~rosen/161/notes/hopfield.html