Supervised Hebbian Learning

Presentation transcript:

CHAPTER 7 Supervised Hebbian Learning. Ming-Feng Yeh

Objectives. The Hebb rule, proposed by Donald Hebb in 1949, was one of the first neural network learning laws and suggests a possible mechanism for synaptic modification in the brain. This chapter uses linear algebra concepts to explain why Hebbian learning works and shows how the Hebb rule can be used to train neural networks for pattern recognition.

Hebb’s Postulate. Hebbian learning originates with Hebb’s postulate (The Organization of Behavior, 1949): “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Linear Associator W SR p n a a = Wp R1 S1 R S  The linear associator is an example of a type of neural network called an associator memory.  The task of an associator is to learn Q pairs of prototype input/output vectors: {p1,t1}, {p2,t2},…, {pQ,tQ}.  If p = pq, then a = tq. q = 1,2,…,Q. If p = pq + , then a = tq + . Ming-Feng Yeh

Hebb Learning Rule. If two neurons on either side of a synapse are activated simultaneously, the strength of the synapse will increase. The connection (synapse) between input pj and output ai is the weight wij. In the unsupervised learning rule, the weight is updated using the actual input and output: not only do we increase the weight when pj and ai are both positive, but we also increase it when they are both negative. In the supervised learning rule, the actual output is replaced by the target output. Both forms are written out below.
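
The update equations themselves appeared as images on the original slide; the standard forms they refer to, with learning rate α, input p_j(q), actual output a_i(q), and target t_i(q), are:

    Unsupervised Hebb rule:  w_{ij}^{new} = w_{ij}^{old} + \alpha \, a_i(q) \, p_j(q)
    Supervised Hebb rule:    w_{ij}^{new} = w_{ij}^{old} + t_i(q) \, p_j(q)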

Supervised Hebb Rule. Assume that the weight matrix is initialized to zero and that each of the Q input/output pairs is applied once to the supervised Hebb rule (batch operation); the resulting weight matrix is written out below.
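
The batch result in matrix form (the slide's own equation is missing from the transcript; this is the standard expression, with the prototypes and targets collected as the columns of P and T):

    W = t_1 p_1^T + t_2 p_2^T + \cdots + t_Q p_Q^T = \sum_{q=1}^{Q} t_q p_q^T = T P^T,
    \quad T = [\, t_1 \; t_2 \; \cdots \; t_Q \,], \quad P = [\, p_1 \; p_2 \; \cdots \; p_Q \,]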

Performance Analysis. Assume that the pq vectors are orthonormal (orthogonal and of unit length). If pq is input to the network, the network output can then be computed as shown below. If the input prototype vectors are orthonormal, the Hebb rule will produce the correct output for each input.
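
The computation referred to above, written out (this uses only the batch weight matrix from the previous slide and the orthonormality assumption):

    a = W p_k = \left( \sum_{q=1}^{Q} t_q p_q^T \right) p_k = \sum_{q=1}^{Q} t_q \, (p_q^T p_k) = t_k,
    \quad \text{since } p_q^T p_k = 1 \text{ for } q = k \text{ and } 0 \text{ otherwise}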

Performance Analysis. Assume that each pq vector is of unit length, but that the vectors are not orthogonal. The output then contains an error term, as shown below, and the magnitude of the error will depend on the amount of correlation between the prototype input patterns.
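
Written out under the same assumptions, except that the cross terms p_q^T p_k no longer vanish:

    a = W p_k = t_k + \sum_{q \neq k} t_q \, (p_q^T p_k),
    \quad \text{where the second term is the error}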

Orthonormal Case. With orthonormal prototypes, the network output matches each target exactly. Success!!

Not Orthogonal Case. The outputs are close, but do not quite match the target outputs.

Solved Problem P7.2. i. The prototype vectors are orthogonal, but not orthonormal. ii.

Solutions of Problem P7.2. iii. The test patterns lie at Hamming distances of 1 and 2 from the prototype patterns.

Pseudoinverse Rule. Goal: choose the weight matrix W to minimize the performance index F(W). When the input vectors are not orthogonal and we use the Hebb rule, F(W) will not be zero, and it is not clear whether F(W) will be minimized. If the P matrix has an inverse, the solution is given below.
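
The performance index and the exact solution referred to above, in standard form (T and P are the target and prototype matrices defined earlier):

    F(W) = \sum_{q=1}^{Q} \| t_q - W p_q \|^2,
    \qquad \text{if } P \text{ is square and invertible: } W = T P^{-1}, \text{ giving } F(W) = 0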

Pseudoinverse Rule. The P matrix has an inverse only if it is square. Normally the pq vectors (the columns of P) will be independent, but R (the dimension of pq, i.e. the number of rows) will be larger than Q (the number of pq vectors, i.e. the number of columns), so P has no inverse. In that case, the weight matrix W that minimizes the performance index is given by the pseudoinverse rule, W = T P^+, where P^+ is the Moore-Penrose pseudoinverse.

Moore-Penrose Pseudoinverse. The pseudoinverse of a real matrix P is the unique matrix P^+ that satisfies the four conditions given below. When R (the number of rows of P) is greater than Q (the number of columns of P) and the columns of P are independent, the pseudoinverse can be computed as shown below. Note that we do NOT need to normalize the input vectors when using the pseudoinverse rule.
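
The defining conditions and the computational formula referred to above (these are the standard Moore-Penrose conditions; the slide's own equations were lost in transcription):

    P P^{+} P = P, \quad P^{+} P P^{+} = P^{+}, \quad (P^{+} P)^T = P^{+} P, \quad (P P^{+})^T = P P^{+}

    \text{When the columns of } P \text{ are independent:} \quad P^{+} = (P^T P)^{-1} P^T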

Example of Pseudoinverse Rule.
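
The slide's numerical example did not survive transcription. As a substitute, here is a minimal NumPy sketch of the pseudoinverse rule with two hypothetical prototype/target pairs (the numbers are invented for illustration, not taken from the slide):

    import numpy as np

    # Hypothetical prototype inputs (columns of P) and targets (columns of T).
    P = np.array([[ 1.0,  1.0],
                  [-1.0,  1.0],
                  [-1.0, -1.0]])      # R = 3 rows, Q = 2 prototype vectors
    T = np.array([[-1.0,  1.0]])      # one target value per prototype

    # Pseudoinverse rule: W = T P+ (np.linalg.pinv computes the Moore-Penrose pseudoinverse).
    W = T @ np.linalg.pinv(P)

    # Because the columns of P are independent, W p_q reproduces t_q exactly.
    print(W @ P)                      # approximately [[-1.  1.]]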

Autoassociative Memory. The linear associator using the Hebb rule is a type of associative memory (tq ≠ pq). In an autoassociative memory the desired output vector is equal to the input vector (tq = pq). An autoassociative memory can be used to store a set of patterns and then to recall them, even when corrupted patterns are provided as input. [Network diagram: input p (30×1), weight matrix W (30×30), output a (30×1).]
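
A minimal NumPy sketch of the same idea, using a single made-up 30-element bipolar pattern (the digit patterns from the slides are not reproduced; a symmetric hard limit is assumed for recall):

    import numpy as np

    # One hypothetical 30-pixel bipolar pattern (values in {-1, +1}), alternating for illustration.
    p = np.array([1.0, -1.0] * 15)

    # Autoassociative Hebb rule with t_q = p_q: W = sum_q p_q p_q^T (here a single pattern).
    W = np.outer(p, p)

    # Occlude half of the pattern: pretend the last 15 pixels are unknown (all set to -1).
    p_occluded = p.copy()
    p_occluded[15:] = -1.0

    # Recall with a symmetric hard-limiting nonlinearity.
    a = np.where(W @ p_occluded >= 0, 1.0, -1.0)

    print(np.array_equal(a, p))       # True: the stored pattern is recovered from the occluded input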

Corrupted & Noisy Versions. [Figures: recovery of 50% occluded patterns, recovery of 67% occluded patterns, and recovery of noisy patterns.]

Variations of Hebbian Learning. Many later learning rules have some relationship to the Hebb rule. The weight matrices produced by the Hebb rule can have very large elements if there are many prototype patterns in the training set. Filtered learning modifies the basic Hebb rule by adding a decay term, so that the learning rule behaves like a smoothing filter, remembering the most recent inputs more clearly; both rules are written out below.
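
The two update equations referred to above, in their standard incremental form (α is a learning rate and γ a decay rate with 0 < γ < 1; the slide's own equations were images and are reconstructed here):

    Basic Hebb rule:    W^{new} = W^{old} + \alpha \, t_q p_q^T
    Filtered learning:  W^{new} = (1 - \gamma) \, W^{old} + \alpha \, t_q p_q^T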

Variations of Hebbian Learning. The delta rule replaces the desired output in the basic Hebb rule with the difference between the desired output and the actual output; it adjusts the weights so as to minimize the mean square error, and it can update the weights after each new input pattern is presented. The unsupervised Hebb rule uses the actual output in place of the target. Both rules are written out below.
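
Standard forms of the two rules named above, with a_q the actual network output (reconstructed; the slide's equations were images):

    Delta rule:              W^{new} = W^{old} + \alpha \, (t_q - a_q) \, p_q^T
    Unsupervised Hebb rule:  W^{new} = W^{old} + \alpha \, a_q \, p_q^T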

Solved Problem P7.6. [Figures: a single-neuron perceptron network with bias, and a plot of the two prototype vectors p1 and p2 with the boundary Wp = 0.] i. Why is a bias required to solve this problem? The decision boundary for the perceptron network is Wp + b = 0. If there is no bias, the boundary becomes Wp = 0, a line that must pass through the origin, and no decision boundary that passes through the origin can separate these two vectors.

Solved Problem P7.6. ii. Use the pseudoinverse rule to design a network with bias to solve this problem. Treat the bias as another weight, with an input of 1; the augmented formulation is sketched below. [Figure: the resulting decision boundary Wp + b = 0 separating p1 and p2.]
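
One way to write the augmentation described above (generic form; the specific numbers from the slide are not reproduced):

    p'_q = \begin{bmatrix} p_q \\ 1 \end{bmatrix}, \qquad W' = [\, W \;\; b \,],
    \qquad W' = T \, (P')^{+}, \quad P' = [\, p'_1 \; p'_2 \,]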

Solved Problem P7.7. Up to now, we have represented patterns as vectors by using “1” and “–1” to represent dark and light pixels, respectively. What if we were to use “1” and “0” instead? How should the Hebb rule be changed? The bipolar {–1, 1} and binary {0, 1} forms of the rule are compared below, where 1 denotes a vector of ones.
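
A reconstruction of the comparison, assuming the binary prototypes p'_q are related to the bipolar ones by p_q = 2 p'_q - 1 (this relationship is implied by the mention of a vector of ones, but the slide's exact equations were lost):

    Bipolar \{-1, 1\}:  W = \sum_{q=1}^{Q} t_q \, p_q^T
    Binary \{0, 1\}:    W = \sum_{q=1}^{Q} t_q \, (2 p'_q - \mathbf{1})^T = \sum_{q=1}^{Q} \left( 2 \, t_q \, {p'_q}^T - t_q \mathbf{1}^T \right)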

Binary Associative Network. [Network diagram: input p (R×1), weight matrix W (S×R), bias b (S×1), n = Wp + b, output a = hardlim(Wp + b) (S×1).]