Hebb and Perceptron
HEBB NETWORK Donald Hebb stated in 1949 that, in the brain, learning is performed by changes in the synaptic gap. Hebb explained it as follows: "When an axon of cell A is near enough to excite cell B, and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
HEBB LEARNING The weights between neurons whose activities are positively correlated are increased: w_i(new) = w_i(old) + x_i * y. The Hebb rule can be used for pattern association, pattern categorization, pattern classification, and a range of other areas. The Hebb rule is better suited to bipolar data than to binary data. If binary data is used, the weight update formula above cannot distinguish two conditions, namely: (1) a training pair in which an input unit is "on" and the target value is "off", and (2) a training pair in which both the input unit and the target value are "off". This limits the application of the Hebb rule to binary data.
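To see the ambiguity concretely (a worked example using the update Δw_i = x_i * y): with binary data, the pair x_i = 1, y = 0 gives Δw_i = 1 × 0 = 0, and the pair x_i = 0, y = 0 gives Δw_i = 0 × 0 = 0, so the two conditions produce identical (zero) updates. With bipolar data the same pairs become x_i = 1, y = −1 and x_i = −1, y = −1, giving Δw_i = −1 and Δw_i = +1 respectively, so the rule can tell them apart.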
Flowchart
Start → Initialize weights.
For each training pair s:t:
  Activate input units: x_i = s_i
  Activate output unit: y = t
  Update weights: w_i(new) = w_i(old) + x_i * y
  Update bias: b(new) = b(old) + y
When all pairs are processed → Stop
ALGORITHM:-
Step 0:- Initialize the weights. In this network they may simply be set to zero, i.e., w_i = 0 for i = 1 to n, where n is the total number of input neurons.
Step 1:- Perform Steps 2-4 for each input training vector and target vector pair s:t.
Step 2:- Set the activations of the input units. The activation function of the input layer is generally the identity function: x_i = s_i for i = 1 to n.
Step 3:- Set the activation of the output unit: y = t.
Step 4:- Adjust the weights and the bias:
  w_i(new) = w_i(old) + x_i * y
  b(new) = b(old) + y
These steps complete the algorithm. In Step 4, the weight update can also be written in vector form as Δw = x * y, so that w(new) = w(old) + Δw.
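Below is a minimal sketch of this training procedure in Python, assuming bipolar inputs and targets as the text recommends; the bipolar AND function used as training data is an illustrative choice, not part of the slides.

```python
# A minimal sketch of the Hebb training algorithm above, assuming bipolar
# inputs and targets. The AND data set is an illustrative assumption.

def hebb_train(samples):
    """samples: list of (input_vector, target) pairs with bipolar values."""
    n = len(samples[0][0])
    w = [0.0] * n          # Step 0: initialize weights to zero
    b = 0.0                # ... and the bias
    for s, t in samples:   # Step 1: loop over each training pair s:t
        x = list(s)        # Step 2: input activations x_i = s_i (identity)
        y = t              # Step 3: output activation y = t
        for i in range(n): # Step 4: w_i(new) = w_i(old) + x_i * y
            w[i] += x[i] * y
        b += y             #         b(new) = b(old) + y
    return w, b

# Bipolar AND: output is +1 only when both inputs are +1.
and_pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = hebb_train(and_pairs)
print(w, b)  # -> [2.0, 2.0] -2.0
```

With these weights the net input b + Σ x_i * w_i is positive only for the input (1, 1), so a single pass of the Hebb rule over bipolar data already realizes the AND function.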
PERCEPTRON
Flowchart
Start → Initialize weights and bias; set the learning rate α (0 to 1).
For each training pair s:t:
  Activate input units: x_i = s_i
  Compute the net input y_in
  Apply the activation function to obtain y = f(y_in)
  If y ≠ t: w_i(new) = w_i(old) + α * t * x_i and b(new) = b(old) + α * t
  Else: w_i(new) = w_i(old) and b(new) = b(old)
If any weight changed, repeat for another pass; otherwise → Stop
ALGORITHM:-
Step 0:- Initialize the weights and the bias. Also initialize the learning rate α (0 < α ≤ 1).
Step 1:- Perform Steps 2-6 while the stopping condition is false.
Step 2:- Perform Steps 3-5 for each training pair indicated by s:t.
Step 3:- Apply the identity activation function to the input layer: x_i = s_i.
Step 4:- Calculate the output of the network. First obtain the net input y_in = b + Σ_i x_i * w_i, then apply the activation function to obtain y = f(y_in).
Step 5:- Adjust the weights and the bias by comparing the actual (computed) output with the desired (target) output.
  If y ≠ t, then:
    w_i(new) = w_i(old) + α * t * x_i
    b(new) = b(old) + α * t
  Else:
    w_i(new) = w_i(old)
    b(new) = b(old)
Step 6:- Train the network until there is no weight change; this is the stopping condition for the network. If the condition is not met, start again from Step 2.
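Below is a minimal sketch of this algorithm in Python. The three-valued step activation, the learning-rate value, and the bipolar AND data set are illustrative assumptions; the error-driven update follows Step 5.

```python
# A minimal sketch of the perceptron training algorithm above.

def step(y_in, theta=0.0):
    """Threshold activation: +1 above theta, -1 below -theta, else 0."""
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def perceptron_train(samples, a=1.0, max_epochs=100):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0                 # Step 0: initialize weights, bias
    for _ in range(max_epochs):           # Step 1: repeat until no change
        changed = False
        for s, t in samples:              # Step 2: each training pair s:t
            x = list(s)                   # Step 3: x_i = s_i
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # Step 4
            y = step(y_in)
            if y != t:                    # Step 5: update only on error
                for i in range(n):
                    w[i] += a * t * x[i]  # w_i(new) = w_i(old) + a*t*x_i
                b += a * t                # b(new) = b(old) + a*t
                changed = True
        if not changed:                   # Step 6: stop when weights settle
            break
    return w, b

# Bipolar AND: the loop converges to a separating line in a few epochs.
and_pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
print(perceptron_train(and_pairs))  # -> ([1.0, 1.0], -1.0)
```

Note the contrast with the Hebb rule: the perceptron updates only when the computed output disagrees with the target, which is exactly why the outer loop can detect convergence by the absence of weight changes.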