Slide 1: Introduction to Training and Learning in Neural Networks
• CS/PY 399 Lab Presentation #4
• February 1, 2001
• Mount Union College
Slide 2: More Realistic Models
• So far, our perceptron activation function is quite simplistic:
  f(x₁, x₂) = 1, if Σ xₖ·wₖ > θ, or
            = 0, if Σ xₖ·wₖ < θ
• To more closely mimic actual neuronal function, our model needs to become more complex
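A minimal sketch of this two-input step activation in Python (the function name is my own, and the slides leave the boundary case Σ xₖ·wₖ = θ unspecified; treating it as 0 here is an assumption):

```python
def perceptron_output(x1, x2, w1, w2, theta):
    """0/1 step activation from Slide 2: fire iff the weighted sum exceeds theta."""
    s = x1 * w1 + x2 * w2          # weighted sum of the two inputs
    return 1 if s > theta else 0   # s == theta is treated as 0 here (assumption)

# Example: inputs (1, 0) with weights (0.5, -0.4) and theta = 0.0 fire, since 0.5 > 0
print(perceptron_output(1, 0, 0.5, -0.4, 0.0))  # -> 1
```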
Slide 3: Problem #1: Need More than 2 Input Connections
• Addressed last time: the activation function becomes f(x₁, x₂, x₃, ..., xₙ)
• Vector and summation notation help with writing and describing the calculation being performed
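The n-input version is the same computation expressed over vectors; a sketch (helper names are my own):

```python
def weighted_sum(xs, ws):
    """Sigma x_k * w_k over any number of input connections."""
    return sum(x * w for x, w in zip(xs, ws))

def perceptron_n(xs, ws, theta):
    """n-input 0/1 perceptron using the vector form of the activation."""
    return 1 if weighted_sum(xs, ws) > theta else 0
```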
Slide 4: Problem #2: Output Too Simplistic
• Perceptron output changes only when an input, a weight, or θ changes
• Real neurons don't emit a steady signal (a constant 1 output) that keeps flowing until the input stimulus changes
• An action potential is generated quickly when the threshold is reached, and then the charge dissipates rapidly
Slide 5: Problem #2: Output Too Simplistic (continued)
• When a stimulus is present for a long time, the neuron fires again and again at a rapid rate
• When little or no stimulus is present, few if any signals are sent
• Over a fixed amount of time, neuronal activity is better described as a firing frequency (a lot of firing or a little) than as a 1-or-0 value
Slide 6: Problem #2: Output Too Simplistic (continued)
• To model this, we allow our artificial neurons to produce a graded activity level (some real number) as output
• This doesn't affect the validity of the model: we could construct an equivalent network of 0/1 perceptrons
• Advantage of this approach: the same results with a smaller network
Slide 7: Output Graph for 0/1 Perceptron
[Figure: output (0 to 1) plotted against Σ xₖ·wₖ — a step function that jumps from 0 to 1 at the threshold θ]
Slide 8: LIMIT Function: More Realism
• Define a function with absolute minimum and maximum output values (say 0 and 1)
• Establish two thresholds: θ_lower and θ_upper
• f(x₁, x₂, ..., xₙ) = 1, if Σ xₖ·wₖ > θ_upper,
                     = 0, if Σ xₖ·wₖ < θ_lower, or
                     = some linear function between 0 and 1, otherwise
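A sketch of this piecewise-linear activation; the slide implies but does not spell out the ramp, so the linear interpolation from 0 at θ_lower to 1 at θ_upper is an assumption:

```python
def limit_activation(s, theta_lower, theta_upper):
    """LIMIT function from Slide 8: clamp at 0 and 1, linear ramp in between."""
    if s > theta_upper:
        return 1.0
    if s < theta_lower:
        return 0.0
    # assumed: linear interpolation between the two thresholds
    return (s - theta_lower) / (theta_upper - theta_lower)
```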
Slide 9: Output Graph for LIMIT Function
[Figure: output (0 to 1) plotted against Σ xₖ·wₖ — 0 below θ_lower, a linear ramp between θ_lower and θ_upper, and 1 above θ_upper]
Slide 10: Sigmoid Functions: Most Realistic
• Actual neuronal activity patterns (observed by experiment) give rise to non-linear behavior between the maximum and minimum
• Example: the logistic function
  – f(x₁, x₂, ..., xₙ) = 1 / (1 + e^(−Σ xₖ·wₖ)), where e ≈ 2.71828...
• Example: the arctangent function
  – f(x₁, x₂, ..., xₙ) = arctan(Σ xₖ·wₖ) / (π/2)
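Sketches of both sigmoids in Python (function names are mine; math.exp and math.atan are the standard-library calls):

```python
import math

def logistic(s):
    """Logistic sigmoid from Slide 10: squashes any real s into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def arctan_sigmoid(s):
    """Arctangent activation as written on Slide 10; note its range is (-1, 1)."""
    return math.atan(s) / (math.pi / 2)
```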
Slide 11: Output Graph for Sigmoid Function
[Figure: output (0 to 1) plotted against Σ xₖ·wₖ — an S-shaped curve centered at Σ xₖ·wₖ = 0, approaching 0 and 1 asymptotically]
Slide 12: TLearn Activation Function
• The software simulator we will use in this course is called TLearn
• Each artificial neuron (node) in our networks will use the logistic function as its activation function
• This gives realistic network performance over a wide range of possible inputs
Slide 13: TLearn Activation Function (continued)
• Table, p. 9 (Plunkett & Elman):

    Input    Activation
    -2.00    0.119
    -1.50    0.182
    -1.00    0.269
    -0.50    0.378
     0.00    0.500
     0.50    0.622
     1.00    0.731
     1.50    0.818
     2.00    0.881
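These values are just the logistic function evaluated at each input; a quick check (the output formatting is my own choice):

```python
import math

for s in [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]:
    activation = 1.0 / (1.0 + math.exp(-s))
    print(f"{s:6.2f}  {activation:.3f}")   # reproduces the table, e.g. -2.00 -> 0.119
```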
Slide 14: TLearn Activation Function (continued)
• Output will almost never be exactly 0 or exactly 1
• Reason: the logistic function approaches, but never quite reaches, these maximum and minimum values for any input from −∞ to +∞
• The limited precision of computer arithmetic will let outputs round to exactly 0 or 1 sometimes
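For instance, in standard double-precision floating point, e⁻⁴⁰ is smaller than the rounding error of 1.0, so the logistic output rounds to exactly 1.0; a small demonstration:

```python
import math

logistic = lambda s: 1.0 / (1.0 + math.exp(-s))
print(logistic(10.0))         # ~0.99995 -- close to 1, but not exact
print(logistic(40.0) == 1.0)  # True: 1 + e^-40 rounds to 1.0 in double precision
```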
Slide 15: Automatic Training in Networks
• We've seen that manually adjusting weights to obtain desired outputs is difficult
• What do biological systems do?
  – If output is unacceptable (wrong), some adjustment is made in the system
• How do we know it is wrong? Feedback
  – Pain, bad taste, discordant sound, observing that desired results were not obtained, etc.
Slide 16: Learning via Feedback
• Weights (connection strengths) are modified so that the next time the same input is encountered, better results may be obtained
• How much adjustment should be made?
  – Different approaches yield different results
  – Goal: an automatic (simple) rule that is applied during the weight-adjustment phase
Slide 17: Rosenblatt's Training Algorithm
• Developed for perceptrons (1958)
  – Simple, and illustrative of other training rules
• Consider a single perceptron with 0/1 output
• We will work with a training set
  – A set of inputs for which we know the correct outputs
• Weights will be adjusted based on the correctness of the obtained output
Slide 18: Rosenblatt's Training Algorithm (continued)
• For each input pattern in the training set, do the following (a code sketch follows this slide):
• Obtain the output from the perceptron
• If the output is correct (strengthen):
  – if the output is 1, set w = w + x
  – if the output is 0, set w = w − x
• If the output is incorrect (weaken):
  – if the output is 1, set w = w − x
  – if the output is 0, set w = w + x
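A minimal Python sketch of the rule exactly as stated on this slide (note that, unlike some presentations of the perceptron rule, weights are updated even on correct answers; the strict > θ comparison, with Σ = θ treated as output 0, is the assumption carried over from Slide 2):

```python
def output(w, x, theta=0.0):
    """0/1 perceptron output: fire iff the weighted sum exceeds theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0

def rosenblatt_update(w, x, target, theta=0.0):
    """One application of the Slide 18 rule; returns the new weight vector."""
    out = output(w, x, theta)
    correct = (out == target)
    if (correct and out == 1) or (not correct and out == 0):
        return [wi + xi for wi, xi in zip(w, x)]   # strengthen / ADD
    else:
        return [wi - xi for wi, xi in zip(w, x)]   # weaken / SUBTRACT
```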
Slide 19: Example of Rosenblatt's Training Algorithm
• Training data:

    x1  x2  out
     0   1   1
     1   1   1
     1   0   0

• Pick random values as starting weights and θ: w1 = 0.5, w2 = −0.4, θ = 0.0
Slide 20: Example of Rosenblatt's Training Algorithm (continued)
• Step 1: run the first training case (x1 = 0, x2 = 1, out = 1) through the perceptron
• (0, 1) should give the answer 1 (from the table), but the perceptron produces 0
• Do we strengthen or weaken?
• Do we add or subtract?
  – Based on the answer produced by the perceptron!
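To see why the perceptron produces 0 here: the weighted sum is 0·0.5 + 1·(−0.4) = −0.4, which is not greater than θ = 0, so the output is 0 rather than the desired 1.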
Slide 21: Example of Rosenblatt's Training Algorithm (continued)
• The obtained answer is wrong, and it is 0: we must ADD the input vector to the weight vector
• New weight vector: (0.5, 0.6)
  – w1 = 0.5 + 0 = 0.5
  – w2 = −0.4 + 1 = 0.6
• Adjust the weights in the perceptron now, and try the next entry in the training data set
Slide 22: Example of Rosenblatt's Training Algorithm (continued)
• Step 2: run the second training case (x1 = 1, x2 = 1, out = 1) through the perceptron
• (1, 1) should give the answer 1 (from the table), and it does!
• Do we strengthen or weaken?
• Do we add or subtract?
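Checking with the updated weights: 1·0.5 + 1·0.6 = 1.1 > θ = 0, so the perceptron outputs 1, matching the desired answer.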
Slide 23: Example of Rosenblatt's Training Algorithm (continued)
• The obtained answer is correct, and it is 1: we must ADD the input vector to the weight vector
• New weight vector: (1.5, 1.6)
  – w1 = 0.5 + 1 = 1.5
  – w2 = 0.6 + 1 = 1.6
• Adjust the weights, then on to training case #3
Slide 24: Example of Rosenblatt's Training Algorithm (continued)
• Step 3: run the last training case (x1 = 1, x2 = 0, out = 0) through the perceptron
• (1, 0) should give the answer 0 (from the table); does it?
• Do we strengthen or weaken?
• Do we add or subtract?
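With the current weights, the sum is 1·1.5 + 0·1.6 = 1.5 > θ = 0, so the perceptron outputs 1, which is wrong; since the output is 1, the rule says weaken by subtracting.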
Slide 25: Example of Rosenblatt's Training Algorithm (continued)
• Determine what to do, and calculate a new weight vector
• We should have SUBTRACTED
• New weight vector: (0.5, 1.6)
  – w1 = 1.5 − 1 = 0.5
  – w2 = 1.6 − 0 = 1.6
• Adjust the weights, then try all three training cases again
Slide 26: Ending Training
• This training process continues until (a full training-loop sketch follows below):
  – the perceptron gives correct answers for all training cases, or
  – a maximum number of training passes has been carried out
    (some training sets may be impossible for a perceptron to compute, e.g., the XOR function)
• In actual practice, we train until the error is less than an acceptable level
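Putting the pieces together: a sketch of the full training loop over the Slide 19 data, reusing the output and rosenblatt_update helpers sketched after Slide 18. It stops either on an error-free pass or after a fixed number of passes, as this slide describes; max_passes and the other names are my own, and whether this particular rule and data ever reach an error-free pass is not guaranteed, which is exactly why the cap exists:

```python
def train(data, w, theta=0.0, max_passes=100):
    """Repeat Rosenblatt passes until an error-free pass or the pass limit."""
    for _ in range(max_passes):
        all_correct = True
        for x, target in data:
            if output(w, x, theta) != target:
                all_correct = False
            w = rosenblatt_update(w, x, target, theta)  # updates even when correct
        if all_correct:
            break
    return w

# Training set and starting weights from Slide 19
data = [([0, 1], 1), ([1, 1], 1), ([1, 0], 0)]
w = train(data, [0.5, -0.4])
```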