DEPARTMENT: COMPUTER SCIENCE & ENGINEERING          SEMESTER: VII
COURSE: NEURAL NETWORKS
SATENDRA, Assistant Professor, SITM, Rewari
Manav Rachna College of Engineering
TOPIC: Fundamental Concepts of Artificial Neural Networks
Content
- Feedforward & Feedback Networks
- Learning Rules: Supervised & Unsupervised
- Hebbian Learning Rule, Perceptron Learning Rule, Delta Learning Rule, Widrow-Hoff Learning Rule
- Correlation Learning Rule
- Winner-Take-All Learning Rule
Neural Network Architectures
Neural Network Learning Process
Neural Network Learning Rules
The learning signal r is in general a function of w_i, x, and sometimes of the teacher's signal d_i:

    r = r(w_i, x, d_i)

The incremental weight vector Δw_i at step t becomes

    Δw_i(t) = c r(w_i(t), x(t), d_i(t)) x(t),

so that w_i(t+1) = w_i(t) + Δw_i(t), where c is a learning constant having a positive value.
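The following NumPy sketch illustrates this general update scheme (a minimal illustration; the names general_update and learning_signal are hypothetical, and c = 0.1 is an arbitrary choice):

import numpy as np

# General rule: delta_w = c * r(w, x, d) * x.
# 'learning_signal' stands in for whichever rule-specific r is chosen.
def general_update(w, x, d, learning_signal, c=0.1):
    r = learning_signal(w, x, d)   # scalar learning signal r(w, x, d)
    return w + c * r * x           # incremental weight adjustment

# Example: r = d - sgn(w.x) recovers the perceptron rule covered later.
w = np.zeros(3)
x = np.array([1.0, -0.5, 2.0])
w = general_update(w, x, 1.0, lambda w, x, d: d - np.sign(w @ x))

Each rule below is obtained by choosing a particular learning signal r in this scheme.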
1) Hebbian Learning Rule
The Hebbian learning rule represents purely feedforward, unsupervised learning. This rule requires the weights to be initialized at small random values around w_i = 0 prior to learning. Here, the learning signal is the neuron's output:

    r = f(w_i^T x) = o_i

The incremental weight vector becomes

    Δw_i = c f(w_i^T x) x = c o_i x

and for a single weight w_ij,

    Δw_ij = c o_i x_j
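A minimal sketch of one Hebbian step, assuming a bipolar sign activation (the slides do not fix a particular f; hebbian_update and c = 0.1 are illustrative choices):

import numpy as np

def hebbian_update(w, x, c=0.1):
    o = np.sign(w @ x)    # neuron output o = f(w.x), sign activation assumed
    return w + c * o * x  # delta_w = c * o * x

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.01, size=3)   # small random values around 0, per the slide
x = np.array([1.0, -2.0, 1.5])
w = hebbian_update(w, x)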
2) Perceptron Learning Rule
For the perceptron learning rule, the learning signal is the difference between the desired and actual neuron's response. This is supervised learning. The rule is applicable only for binary neuron responses; the relationships below express it for the bipolar binary case, with o_i = sgn(w_i^T x). The learning signal r becomes

    r = d_i - o_i = d_i - sgn(w_i^T x)

and the weight adjustment becomes

    Δw_i = η [d_i - sgn(w_i^T x)] x,

where η is a positive constant. Note that the weights change only when the neuron misclassifies x, since r = 0 whenever o_i = d_i.
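A sketch of one perceptron step for the bipolar binary case (perceptron_update is an illustrative name; eta = 0.1 is arbitrary):

import numpy as np

def perceptron_update(w, x, d, eta=0.1):
    o = np.sign(w @ x)            # bipolar response (np.sign returns 0 at net = 0)
    return w + eta * (d - o) * x  # delta_w = eta * (d - o) * x; zero when o == d

w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])
w = perceptron_update(w, x, d=1.0)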
3) Delta Training Rule

This rule is valid only for continuous activation functions and applies in the supervised training mode. The learning signal of this rule is called delta and is defined as

    r = [d_i - f(w_i^T x)] f'(w_i^T x)

It is obtained by calculating the gradient vector, with respect to w_i, of the squared error defined as

    E = (1/2) (d_i - o_i)^2 = (1/2) [d_i - f(w_i^T x)]^2

which gives the error gradient vector

    ∇E = -[d_i - f(w_i^T x)] f'(w_i^T x) x

Since minimization of the error requires the weight changes to be in the negative gradient direction,

    Δw_i = -η ∇E = η (d_i - o_i) f'(net_i) x,

where η is a positive constant.
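A sketch of one delta-rule step using tanh as the continuous activation (an assumption for illustration; any differentiable f works, and delta_update is a hypothetical name):

import numpy as np

def delta_update(w, x, d, eta=0.1):
    net = w @ x
    o = np.tanh(net)          # o = f(net), continuous activation (tanh assumed)
    f_prime = 1.0 - o ** 2    # f'(net) for tanh
    r = (d - o) * f_prime     # the delta learning signal
    return w + eta * r * x    # step in the negative gradient direction of E

w = np.zeros(3)
x = np.array([0.5, -1.0, 2.0])
w = delta_update(w, x, d=1.0)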
4) Widrow-Hoff Learning Rule
The Widrow-Hoff learning rule is applicable to the supervised training of neural networks. It is independent of the activation function of the neurons used, since it minimizes the squared error between the desired output value d_i and the neuron's activation value net_i = w_i^T x. The learning signal is defined as

    r = d_i - w_i^T x

The weight vector increment under this learning rule is

    Δw_i = c (d_i - w_i^T x) x

and for the single weight the adjustment is

    Δw_ij = c (d_i - w_i^T x) x_j

This rule is also called the LMS (least mean square) learning rule. Weights may be initialized at any values in this method. It can be viewed as a special case of the delta rule with the identity activation f(net) = net.
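A sketch of one LMS step (lms_update is an illustrative name; c = 0.05 is arbitrary):

import numpy as np

def lms_update(w, x, d, c=0.05):
    error = d - w @ x         # error on the activation value net = w.x itself
    return w + c * error * x  # delta_w = c * (d - w.x) * x

w = np.array([0.2, -0.4, 0.1])   # weights may start at any values for this rule
x = np.array([1.0, 2.0, -1.0])
w = lms_update(w, x, d=0.5)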
5) Correlation Learning Rule
This rule is typically applied to recording data in memory networks with binary response neurons. It is a supervised learning rule and also requires the weight initialization w = 0. As we know, the general learning rule is

    Δw_i = c r x

Substituting r = d_i into the general rule gives the correlation rule. The adjustments for the weight vector and for a single weight are, respectively,

    Δw_i = c d_i x      and      Δw_ij = c d_i x_j
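A sketch of one correlation step (correlation_update is an illustrative name):

import numpy as np

def correlation_update(w, x, d, c=0.1):
    return w + c * d * x    # r = d, so delta_w = c * d * x

w = np.zeros(3)                  # this rule requires w initialized at 0
x = np.array([1.0, -1.0, 1.0])
w = correlation_update(w, x, d=1.0)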
6) Winner-Take-All Learning Rule
This rule is used for unsupervised network training. The learning is based on the premise that one of the neurons in the layer, say the m-th, has the maximum response to the input x. This neuron is declared the winner. As a result of the winning event, the weight vector w_m receives the incremental adjustment

    Δw_m = α (x - w_m)

where α > 0 is a small learning constant. The winner is selected by the following criterion of maximum activation among all p neurons participating in the competition:

    w_m^T x = max over i = 1, ..., p of (w_i^T x)
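A sketch of one winner-take-all step over p competing neurons (wta_update is an illustrative name; alpha = 0.05 is arbitrary):

import numpy as np

def wta_update(W, x, alpha=0.05):
    m = np.argmax(W @ x)        # winner m: maximum activation w_i . x
    W[m] += alpha * (x - W[m])  # only the winner moves: delta_w_m = alpha * (x - w_m)
    return W

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))     # p = 4 neurons competing over 3 inputs
x = np.array([1.0, 0.0, -1.0])
W = wta_update(W, x)

The update moves the winner's weight vector toward the current input, which over many presentations makes each neuron specialize on a cluster of similar inputs.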
Thank you