Example: perceptron learning of the AND function

Training samples (bipolar encoding; in_0 = 1 is the bias input):

      in_0  in_1  in_2    d
  p0     1    -1    -1   -1
  p1     1    -1     1   -1
  p2     1     1    -1   -1
  p3     1     1     1    1

Initial weights W(0) = (1, 1, -1), learning rate = 1.

Present p0: net = W(0)·p0 = (1, 1, -1)·(1, -1, -1) = 1
  p0 misclassified, learning occurs
  x = d·p0 = (-1, 1, 1)
  W(1) = W(0) + x = (0, 2, 0)
  New net = W(1)·p0 = -2 is closer to target (d = -1)
Present p1: net = W(1)·p1 = (0, 2, 0)·(1, -1, 1) = -2, correct; no learning occurs
Present p2: net = W(1)·p2 = (0, 2, 0)·(1, 1, -1) = 2, misclassified
  x = d·p2 = (-1)(1, 1, -1) = (-1, -1, 1)
  W(2) = (0, 2, 0) + (-1, -1, 1) = (-1, 1, 1)
Present p3: net = W(2)·p3 = (-1, 1, 1)·(1, 1, 1) = 1, correct; no learning occurs
Present p0, p1, p2, p3 again: all correctly classified with W(2). Learning stops with W(2).
[Figure: decision boundaries in the (in_1, in_2) plane for W(0) = (1, 1, -1), W(1) = (0, 2, 0), and W(2) = (-1, 1, 1); x/o mark the two classes.]
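The run above can be reproduced with a minimal NumPy sketch of the perceptron rule (sample order, initial weights, and learning rate taken from the example; the stopping test is a full error-free pass):

```python
import numpy as np

# Bipolar AND training set; in_0 = 1 is the bias input.
P = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
d = np.array([-1, -1, -1, 1], dtype=float)

w = np.array([1.0, 1.0, -1.0])   # W(0)
eta = 1.0                        # learning rate

# Perceptron rule: on a misclassification, add eta * d * p to the weights.
changed = True
while changed:
    changed = False
    for p, target in zip(P, d):
        if np.sign(w @ p) != target:   # misclassified sample
            w += eta * target * p
            changed = True

print(w)   # -> [-1.  1.  1.], i.e. W(2) from the example
```

Note that the weights change only on misclassified samples, so learning stops after one clean pass through all four patterns.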
Example: learning the AND function by the delta rule

Training samples: the same bipolar AND set as above. Initial weights W(0) = (1, 1, -1), learning rate = 0.3.

Present p0: net = W(0)·p0 = (1, 1, -1)·(1, -1, -1) = 1
  ∆W = 0.3(d − net)·p0 = (-0.6, 0.6, 0.6)
  W(1) = W(0) + ∆W = (0.4, 1.6, -0.4)
  New net = W(1)·p0 = -0.8 is closer to target (d = -1) than before
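The single delta-rule step above, as a short NumPy check (values taken directly from the example):

```python
import numpy as np

w = np.array([1.0, 1.0, -1.0])    # W(0)
p0 = np.array([1.0, -1.0, -1.0])  # first sample, target d = -1
d0, eta = -1.0, 0.3

net = w @ p0                 # W(0)·p0 = 1
dw = eta * (d0 - net) * p0   # ∆W = 0.3(-2)(1, -1, -1) = (-0.6, 0.6, 0.6)
w = w + dw                   # W(1) = (0.4, 1.6, -0.4)
net_new = w @ p0             # new net = -0.8, closer to d = -1

print(np.round(w, 4), round(float(net_new), 4))
```

Unlike the perceptron rule, the update is proportional to the continuous error d − net, so the weights move even when the sample is already on the correct side of the boundary.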
At step k the sample p_(k mod 4) is presented: net = W(k)·p, and the error d − net drives the update W(k+1) = W(k) + 0.3(d − net)p.

   k     w0         w1         w2        net        d − net
   0    1          1         -1          1         -2
   1    0.4        1.6       -0.4       -1.6        0.6
   2    0.58       1.42      -0.22       2.22      -3.22
   3   -0.386      0.454      0.746      0.814      0.186
   4   -0.3302     0.5098     0.8018    -1.6418     0.6418
   5   -0.13766    0.31726    0.60926    0.15434   -1.15434
   6   -0.48396    0.663562   0.262958  -0.08336   -0.91664
   7   -0.75895    0.388569   0.537951   0.167565   0.832435
   8   -0.50922    0.6383     0.787681  -1.9352     0.935205
   9   -0.22866    0.357738   0.507119  -0.07928   -0.92072
  10   -0.50488    0.633954   0.230904  -0.10183   -0.89817
  11   -0.77433    0.364502   0.500355   0.090528   0.909472
  12   -0.50149    0.637344   0.773197  -1.91203    0.912029
  13   -0.22788    0.363735   0.499588  -0.09203   -0.90797
  14   -0.50027    0.636127   0.227196  -0.09134   -0.90866
  15   -0.77287    0.363529   0.499794   0.090454   0.909546
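The whole training trace can be regenerated with a small NumPy loop (same cyclic presentation order p0, p1, p2, p3 and learning rate 0.3 as in the example):

```python
import numpy as np

# Bipolar AND training set; in_0 = 1 is the bias input.
P = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
d = np.array([-1, -1, -1, 1], dtype=float)

w = np.array([1.0, 1.0, -1.0])   # W(0)
eta = 0.3

history = []
for k in range(16):
    p, target = P[k % 4], d[k % 4]          # present p_(k mod 4)
    net = w @ p
    history.append((k, w.copy(), net, target - net))
    print(f"{k:2d}  w = {np.round(w, 6)}  net = {net:.6f}  d - net = {target - net:.6f}")
    w = w + eta * (target - net) * p        # delta-rule update
```

Running this prints the same rows as the table; by step 15 the weights have settled near W(15) ≈ (-0.77, 0.36, 0.50) and the errors oscillate around ±0.91 rather than vanishing, since the continuous targets ±1 cannot be hit exactly by a linear unit on this data.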
[Figure: decision boundaries for W(0) = (1, 1, -1), W(1) = (0.4, 1.6, -0.4), and W(15) = (-0.77, 0.36, 0.50); x/o mark the two classes.]