13 1 Associative Learning

13 2 Simple Associative Network
A single neuron with a hard-limit transfer function and a single input: a = hardlim(w p + b), with w = 1 and b = -0.5.
The stimulus p is 1 if the stimulus is present and 0 if it is absent; the response a is likewise 1 (response) or 0 (no response).
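A minimal Python sketch of this network; the values w = 1 and b = -0.5 are taken from the slides, and the helper name hardlim simply mirrors the transfer-function name used throughout.

```python
def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1.0 if n >= 0 else 0.0

w, b = 1.0, -0.5                    # weight and bias of the simple associator
for p in (0.0, 1.0):                # stimulus absent, stimulus present
    print(p, hardlim(w * p + b))    # prints 0.0 -> 0.0 and 1.0 -> 1.0
```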

13 3 Banana Associator
Unconditioned stimulus (sight: shape of the banana): p0, with fixed weight w0 = 1.
Conditioned stimulus (smell of the banana): p, with adaptive weight w.
a = hardlim(w0 p0 + w p - 0.5)

13 4 Unsupervised Hebb Rule
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q)
Vector form: W(q) = W(q-1) + α a(q) p(q)^T
Training sequence: p(1), p(2), ..., p(Q)
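A minimal NumPy sketch of this update (the function name hebb_update is illustrative, not from the slides):

```python
import numpy as np

def hebb_update(W, p, a, alpha=1.0):
    """Unsupervised Hebb rule: W(q) = W(q-1) + alpha * a(q) * p(q)^T."""
    return W + alpha * np.outer(a, p)

# One step on a 1x1 "network": the weight grows whenever stimulus and response coincide.
W = np.zeros((1, 1))
W = hebb_update(W, p=np.array([1.0]), a=np.array([1.0]))
print(W)    # [[1.]]
```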

13 5 Banana Recognition Example
Initial weights: w0 = 1, w(0) = 0. Learning rate: α = 1.
Training sequence: {p0(1) = 0, p(1) = 1}, {p0(2) = 1, p(2) = 1}, {p0(3) = 0, p(3) = 1}, ...
First iteration (sight fails):
a(1) = hardlim(w0 p0(1) + w(0) p(1) - 0.5) = hardlim(1·0 + 0·1 - 0.5) = 0 (no response)
w(1) = w(0) + a(1) p(1) = 0 + 0·1 = 0

13 6 Example
Second iteration (sight works):
a(2) = hardlim(w0 p0(2) + w(1) p(2) - 0.5) = hardlim(1·1 + 0·1 - 0.5) = 1 (banana)
w(2) = w(1) + a(2) p(2) = 0 + 1·1 = 1
Third iteration (sight fails):
a(3) = hardlim(w0 p0(3) + w(2) p(3) - 0.5) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana)
w(3) = w(2) + a(3) p(3) = 1 + 1·1 = 2
Banana will now be detected if either sensor works.
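The three iterations above can be replayed with a short Python sketch (values copied from the example; variable names are illustrative):

```python
def hardlim(n):
    return 1.0 if n >= 0 else 0.0

w0, w, alpha = 1.0, 0.0, 1.0                      # fixed sight weight, adaptive smell weight, learning rate
sequence = [(0.0, 1.0), (1.0, 1.0), (0.0, 1.0)]   # (p0 = sight, p = smell) for three iterations

for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p - 0.5)
    w = w + alpha * a * p                         # unsupervised Hebb rule
    print(f"q={q}: a={a}, w={w}")
# Output: a = 0 (no response), then a = 1 (banana) twice; w grows 0 -> 1 -> 2.
```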

13 7 Problems with Hebb Rule
Weights can become arbitrarily large.
There is no mechanism for weights to decrease.

13 8 Hebb Rule with Decay
W(q) = W(q-1) + α a(q) p(q)^T - γ W(q-1) = (1 - γ) W(q-1) + α a(q) p(q)^T
This keeps the weight matrix from growing without bound, which can be demonstrated by setting both a_i and p_j to 1:
w_ij^max = (1 - γ) w_ij^max + α
w_ij^max = α / γ
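A tiny numeric check of that bound (illustrative only): with the stimulus and response held at 1 on every step and γ = 0.1, the scalar weight settles at α/γ = 10.

```python
alpha, gamma = 1.0, 0.1
w = 0.0
for _ in range(200):
    a, p = 1.0, 1.0                       # response and stimulus both present
    w = w + alpha * a * p - gamma * w     # Hebb rule with decay
print(w)                                  # ~10.0, i.e. alpha / gamma
```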

13 9 Example: Banana Associator
γ = 0.1, α = 1
First iteration (sight fails):
a(1) = hardlim(w0 p0(1) + w(0) p(1) - 0.5) = hardlim(1·0 + 0·1 - 0.5) = 0 (no response)
w(1) = w(0) + a(1) p(1) - 0.1 w(0) = 0
Second iteration (sight works):
a(2) = hardlim(w0 p0(2) + w(1) p(2) - 0.5) = hardlim(1·1 + 0·1 - 0.5) = 1 (banana)
w(2) = w(1) + a(2) p(2) - 0.1 w(1) = 0 + 1 - 0 = 1

13 10 Example
Third iteration (sight fails):
a(3) = hardlim(w0 p0(3) + w(2) p(3) - 0.5) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana)
w(3) = w(2) + a(3) p(3) - 0.1 w(2) = 1 + 1 - 0.1 = 1.9
[Plot: weight w(q) versus iteration for the Hebb rule and the Hebb rule with decay -- the plain Hebb weight grows without bound, while the decaying weight saturates at α/γ = 10.]

13 11 Problem of Hebb with Decay
Associations will decay away if stimuli are not occasionally presented.
If a_i = 0, then w_ij(q) = (1 - γ) w_ij(q-1).
If γ = 0.1, this becomes w_ij(q) = 0.9 w_ij(q-1).
Therefore the weight decays by 10% at each iteration where there is no stimulus.

13 12 Instar (Recognition Network)
A single neuron with a vector input: a = hardlim(w_1^T p + b), where w_1 is the single row of the weight matrix W.

13 13 Instar Operation
The instar will be active when w_1^T p ≥ -b, or
w_1^T p = ||w_1|| ||p|| cos θ ≥ -b.
For normalized vectors, the largest inner product occurs when the angle between the weight vector and the input vector is zero -- the input vector is equal to the weight vector.
The rows of a weight matrix represent patterns to be recognized.

13 14 Vector Recognition
If we set b = -||w_1|| ||p||, the instar will only be active when θ = 0.
If we set b > -||w_1|| ||p||, the instar will be active for a range of angles around w_1.
As b is increased, more patterns (over a wider range of θ) will activate the instar.
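A small NumPy illustration of this thresholding; the prototype w_1 and the two bias values are arbitrary choices for the demo.

```python
import numpy as np

def instar_active(w, p, b):
    """The instar fires when w^T p + b >= 0, i.e. when w^T p >= -b."""
    return np.dot(w, p) + b >= 0

w1 = np.array([1.0, 0.0])                          # stored unit-length prototype
for deg in (0, 30, 60, 90):
    theta = np.radians(deg)
    p = np.array([np.cos(theta), np.sin(theta)])   # unit input at angle theta from w1
    print(deg,
          instar_active(w1, p, b=-1.0),            # b = -||w1|| ||p||: only theta = 0 fires
          instar_active(w1, p, b=-0.5))            # larger b: a range of angles fires
```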

13 15 Instar Rule
Hebb with decay: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ w_ij(q-1)
Modify so that learning and forgetting will only occur when the neuron is active - Instar rule:
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ a_i(q) w_ij(q-1)
or, with the decay rate equal to the learning rate (γ = α):
w_ij(q) = w_ij(q-1) + α a_i(q) (p_j(q) - w_ij(q-1))
Vector form: w_i(q) = w_i(q-1) + α a_i(q) (p(q) - w_i(q-1)), where w_i is the ith row of W.
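A minimal NumPy sketch of the instar update (the function name instar_update is illustrative):

```python
import numpy as np

def instar_update(w_i, p, a_i, alpha=1.0):
    """Instar rule: w_i(q) = w_i(q-1) + alpha * a_i(q) * (p(q) - w_i(q-1)).

    When the neuron is active (a_i = 1) the row moves toward p;
    when it is inactive (a_i = 0) the row is left unchanged.
    """
    return w_i + alpha * a_i * (p - w_i)

w = np.zeros(3)
w = instar_update(w, p=np.array([1.0, -1.0, -1.0]), a_i=1.0)
print(w)    # [ 1. -1. -1.]  -- with alpha = 1 the row jumps all the way to the input
```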

13 16 Graphical Representation
For the case where the instar is active (a_i = 1):
w_i(q) = w_i(q-1) + α (p(q) - w_i(q-1)) = (1 - α) w_i(q-1) + α p(q)
i.e., the weight vector moves a fraction α of the way from its old value toward p(q).
For the case where the instar is inactive (a_i = 0):
w_i(q) = w_i(q-1)
[Figure: w_i(q) lies on the line between w_i(q-1) and p(q).]

13 17 Example: Orange Recognizer
a = hardlim(w0 p0 + W p - 2)
The unconditioned stimulus p0 indicates whether the orange is seen (weight w0 = 3); the conditioned stimulus p = [shape; texture; weight] is the measurement vector, which for an orange is p = [1; -1; -1].
Initial weights: W(0) = w_1^T(0) = [0 0 0].

13 18 Training
First iteration, sight fails (α = 1):
a(1) = hardlim(w0 p0(1) + W p(1) - 2) = hardlim(3·0 + [0 0 0][1; -1; -1] - 2) = 0 (no response)
w_1(1) = w_1(0) + a(1)(p(1) - w_1(0)) = [0; 0; 0]

13 19 Further Training
Second iteration, sight works (orange):
a(2) = hardlim(w0 p0(2) + W p(2) - 2) = hardlim(3·1 + [0 0 0][1; -1; -1] - 2) = 1 (orange)
w_1(2) = w_1(1) + a(2)(p(2) - w_1(1)) = [0; 0; 0] + 1([1; -1; -1] - [0; 0; 0]) = [1; -1; -1]
Third iteration, sight fails:
a(3) = hardlim(w0 p0(3) + W p(3) - 2) = hardlim(3·0 + [1 -1 -1][1; -1; -1] - 2) = 1 (orange)
Orange will now be detected if either set of sensors works.
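The orange example can be replayed in a few lines of Python (values taken from the example above; variable names are illustrative):

```python
import numpy as np

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

w0, b, alpha = 3.0, -2.0, 1.0
w1 = np.zeros(3)                               # adaptive instar row
p_orange = np.array([1.0, -1.0, -1.0])         # shape, texture, weight
sight = [0.0, 1.0, 0.0]                        # p0 over the three iterations

for q, p0 in enumerate(sight, start=1):
    a = hardlim(w0 * p0 + w1 @ p_orange + b)
    w1 = w1 + alpha * a * (p_orange - w1)      # instar rule
    print(f"q={q}: a={a}, w1={w1}")
# Output: a = 0, then 1 (via sight), then 1 (via the measurements alone).
```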

13 20 Kohonen Rule
w_i(q) = w_i(q-1) + α (p(q) - w_i(q-1)), for i ∈ X(q)
Learning occurs when the neuron's index i is a member of the set X(q). We will see in Chapter 14 that this can be used to train all neurons in a given neighborhood.
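A minimal sketch of the Kohonen update; the neighborhood set X(q) is supplied by the caller (for example, the winning neuron and its neighbors, as Chapter 14 will do), and the function name is illustrative.

```python
import numpy as np

def kohonen_update(W, p, X_q, alpha=0.5):
    """Kohonen rule: rows whose index is in X(q) move toward p; all others stay put."""
    W = W.copy()
    for i in X_q:
        W[i] = W[i] + alpha * (p - W[i])
    return W

W = np.zeros((3, 2))                                        # three neurons, 2-D inputs
W = kohonen_update(W, p=np.array([1.0, 1.0]), X_q=[0, 1])   # train neurons 0 and 1 only
print(W)
```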

13 21 Outstar (Recall Network)
A single scalar input p drives a layer of neurons with saturating linear transfer functions: a = satlins(W p).

13 22 Outstar Operation
Suppose we want the outstar to recall a certain pattern a* whenever the input p = 1 is presented to the network. Let
W = a*
Then, when p = 1,
a = satlins(W p) = satlins(a* · 1) = a*
and the pattern is correctly recalled (assuming the elements of a* are no greater than 1 in magnitude). The columns of a weight matrix represent patterns to be recalled.
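A short NumPy demonstration of this recall; the stored pattern a* is an arbitrary example with elements in [-1, 1].

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function."""
    return np.clip(n, -1.0, 1.0)

a_star = np.array([1.0, -1.0, -1.0])    # pattern to be recalled
W = a_star.reshape(-1, 1)               # store it as the (only) column of W
a = satlins(W @ np.array([1.0]))        # present p = 1
print(a)                                # [ 1. -1. -1.]  -- a* is recalled
```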

13 23 Outstar Rule
For the instar rule we made the weight decay term of the Hebb rule proportional to the output of the network:
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ a_i(q) w_ij(q-1)
For the outstar rule we make the weight decay term proportional to the input of the network:
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ p_j(q) w_ij(q-1)
If we make the decay rate γ equal to the learning rate α:
w_ij(q) = w_ij(q-1) + α (a_i(q) - w_ij(q-1)) p_j(q)
Vector form: w_j(q) = w_j(q-1) + α (a(q) - w_j(q-1)) p_j(q), where w_j is the jth column of W.
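A minimal NumPy sketch of the outstar update (function name illustrative): each column of W moves toward the output a whenever its input element p_j is active.

```python
import numpy as np

def outstar_update(W, p, a, alpha=1.0):
    """Outstar rule: w_j(q) = w_j(q-1) + alpha * (a(q) - w_j(q-1)) * p_j(q)."""
    W = W.copy()
    for j, p_j in enumerate(p):
        W[:, j] = W[:, j] + alpha * (a - W[:, j]) * p_j
    return W

W = np.zeros((3, 1))
W = outstar_update(W, p=np.array([1.0]), a=np.array([1.0, -1.0, -1.0]))
print(W)    # the column becomes the pattern to be recalled
```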

13 24 Example: Pineapple Recall
a = satlins(W0 p0 + W p), with W0 = I.
The unconditioned stimulus p0 is the measured pineapple properties (shape, texture, weight), available only when the measurement system works; the conditioned stimulus p indicates whether the pineapple is seen. After training, seeing the pineapple alone should recall its measurements.

13 25 Definitions

13 26 Iteration 1 (α = 1)

13 27 Convergence