
1 Experimenting with TLearn
CS/PY 231 Lab Presentation # 6 February 28, 2005 Mount Union College

2 Training Options
The TLearn user can select the type of training that should take place. These choices affect the network's connection weights and the training time.

3 Training Options Dialog Box

4 Random Seeding of Weights vs. Selective Seeding
Random Seeding: weights are randomly selected at the start of network training, similar to an untrained brain (tabula rasa).
Problem: during testing/debugging, random starting weights make runs unrepeatable.
Solution: select a specific starting seed value, so that the "random" initial weight of every connection is reproducible.
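For illustration, a minimal Python sketch (not TLearn's own code; the ±0.5 weight range and the function name are assumptions) of why a fixed seed makes debugging repeatable:

    import random

    def initial_weights(n_connections, seed=None):
        # a fixed seed makes the "random" starting weights repeatable
        rng = random.Random(seed)
        return [round(rng.uniform(-0.5, 0.5), 3) for _ in range(n_connections)]

    print(initial_weights(4, seed=42))   # identical list on every run
    print(initial_weights(4))            # different list on every run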

5 Sequential vs. Random Training
Sequential: patterns are chosen from the training set in order from the .DATA and .TEACH files (the method we used with Rosenblatt's Algorithm).
Random: patterns are chosen from the training set at random, with replacement; this may give better performance in some situations (what are they?)
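A sketch of the two orderings in Python (the pattern names are placeholders for lines of the .DATA/.TEACH files, not TLearn internals):

    import random

    patterns = ["p0", "p1", "p2", "p3"]            # stand-ins for training lines

    def sequential_order(patterns, sweeps):
        # cycle through the patterns in file order, wrapping around
        return [patterns[i % len(patterns)] for i in range(sweeps)]

    def random_order(patterns, sweeps, seed=1):
        # draw with replacement: a pattern may repeat before others appear
        rng = random.Random(seed)
        return [rng.choice(patterns) for _ in range(sweeps)]

    print(sequential_order(patterns, 6))   # ['p0', 'p1', 'p2', 'p3', 'p0', 'p1']
    print(random_order(patterns, 6))       # e.g. ['p1', 'p3', 'p3', 'p0', ...]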

6 Learning Rate (η)
A value (usually between 0.05 and 0.5) that determines how much of the error between the obtained output and the desired output is used to adjust the weights during backpropagation. The adjustment depends only on the current training pattern.

7 Evolution of Weights by Training
Weights evolve one step per training event, w0 → w1 → w2 → … → wm (m = # of training events), with change Δwk taking wk to wk+1:
wk+1 = wk + Δwk, where Δwk = η · δp · oj
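A minimal Python sketch of this update rule; the error term δp and the sending unit's output oj are produced by backpropagation, so here they are simply assumed inputs:

    def update_weight(w, eta, delta_p, o_j):
        # one training event: w(k+1) = w(k) + eta * delta_p * o_j
        return w + eta * delta_p * o_j

    w = 0.10
    for delta_p, o_j in [(0.4, 1.0), (-0.2, 0.8)]:   # two training events
        w = update_weight(w, eta=0.25, delta_p=delta_p, o_j=o_j)
        print(w)                                     # weight after each event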

8 Weight Adjustments: Learning
Note that only the current Δw affects the weights in a training event.
Sometimes it is useful to include the Δw from previous patterns, to avoid "forgetting" them during training; otherwise, weights that solved a previous pattern may be changed too much.
To do this, add a new parameter to the Δw formula.

9 Momentum (μ)
μ represents the proportion of the weight change from the previous training pattern that affects the current weight change. We now have a new Δw formula:
Δwk = η · δp · oj + μ · Δwk-1
If μ is zero, we have training as before; this is the default state for TLearn.
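The same sketch extended with momentum (again with assumed δp and oj values, not TLearn's code):

    def momentum_step(w, prev_dw, eta, delta_p, o_j, mu):
        # dw(k) = eta * delta_p * o_j + mu * dw(k-1)
        dw = eta * delta_p * o_j + mu * prev_dw
        return w + dw, dw

    w, prev_dw = 0.10, 0.0
    for delta_p, o_j in [(0.4, 1.0), (-0.2, 0.8)]:
        w, prev_dw = momentum_step(w, prev_dw, eta=0.25,
                                   delta_p=delta_p, o_j=o_j, mu=0.9)
        print(w)
    # with mu = 0.0 this reduces to the plain update from the previous slide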

10 Selected Nodes in the SPECIAL section of the .CF file:
Meaning: the output of the selected nodes may be displayed when the "Probe Selected Nodes" option of the "Network" menu is activated.
Example: selected = 2-3 means that node 1's activations won't be displayed.

11 Sweeps and Epochs
A training sweep is the authors' term for learning using one pattern in the training set.
A training epoch means one pass over all patterns in the training set.
So if there are 7 lines of training data in the .DATA and .TEACH files, one epoch involves 7 sweeps.
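The bookkeeping in miniature (assuming 7 training patterns):

    patterns = list(range(7))       # 7 lines of training data
    epochs = 3
    sweeps = 0
    for _ in range(epochs):
        for p in patterns:          # one sweep = one pattern presented
            sweeps += 1
    print(sweeps)                   # 21 = 3 epochs x 7 sweeps per epoch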

12 Pattern vs. Batch Update
Pattern Update: adjust the weights after each sweep.
Batch Update: collect the errors for a group of sweeps in a batch, and use the average error for the group to adjust the weights.
If we choose batch size = epoch size, the weights are adjusted once per epoch. What are the effects on training?
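A sketch contrasting the two schemes over one epoch, with made-up per-sweep (δp, oj) values and batch size = epoch size:

    sweeps = [(0.4, 1.0), (-0.2, 0.8), (0.3, 0.5)]   # assumed per-sweep error info
    eta = 0.25

    # Pattern update: the weight changes after every sweep.
    w = 0.10
    for delta_p, o_j in sweeps:
        w += eta * delta_p * o_j
    print("pattern update:", w)

    # Batch update with batch size = epoch size: average the per-sweep
    # changes and adjust the weight once for the whole epoch.
    w = 0.10
    avg_dw = sum(eta * d * o for d, o in sweeps) / len(sweeps)
    print("batch update:", w + avg_dw)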


