
W is the weight matrix, dimension 1xR; p is the input vector, dimension Rx1; b is the bias; a = f(Wp + b). w1 and w2 must be found from the given initial values of w1 and w2.



Presentation transcript:

1

2 W is the weight matrix, dimension 1xR; p is the input vector, dimension Rx1; b is the bias; a = f(Wp + b). w1 and w2 must be found from the given initial values of w1 and w2.
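As a cross-check of the formula a = f(Wp + b), here is a small NumPy sketch with a hard-limit transfer function; the numbers are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Illustrative values (assumptions, not from the slides), with R = 2.
W = np.array([[1.0, 2.0]])    # weight matrix, 1xR
p = np.array([[2.0], [1.0]])  # input column vector, Rx1
b = np.array([[-3.0]])        # bias

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return (n >= 0).astype(float)

n = W @ p + b   # net input: 1*2 + 2*1 - 3 = 1
a = hardlim(n)  # neuron output
print(n.item(), a.item())  # 1.0 1.0
```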

3 Matlab Code: plot

4 Matrix W, vector p: a = f(Wp + b)

5 A one-layer network with R input elements and S neurons follows, with weight matrix W of dimension SxR.

6 The S-neuron, R-input one-layer network can also be drawn in abbreviated notation.

7

8 The network shown below has R1 inputs, S1 neurons in the first layer, S2 neurons in the second layer, etc. It is common for different layers to have different numbers of neurons. A constant input 1 is fed to the bias for each neuron.
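A minimal NumPy sketch of the layered forward pass described above; the layer sizes, random values, and log-sigmoid transfer functions are assumptions chosen for illustration:

```python
import numpy as np

# Illustrative sizes (assumptions): R1 = 3 inputs, S1 = 4 and S2 = 2 neurons.
rng = np.random.default_rng(0)
p  = rng.standard_normal((3, 1))  # input vector, R1 x 1
W1 = rng.standard_normal((4, 3))  # layer-1 weights, S1 x R1
b1 = rng.standard_normal((4, 1))  # layer-1 biases (fed by the constant input 1)
W2 = rng.standard_normal((2, 4))  # layer-2 weights, S2 x S1
b2 = rng.standard_normal((2, 1))  # layer-2 biases

def logsig(n):
    """Log-sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-n))

a1 = logsig(W1 @ p + b1)   # first-layer output, S1 x 1
a2 = logsig(W2 @ a1 + b2)  # second-layer output, S2 x 1
print(a2.shape)  # (2, 1)
```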

9

10

11 Create a perceptron with net = newp(PR,S,TF,LF)
PR = Rx2 matrix of min and max values for the R input elements
S = number of neurons
TF = transfer function, default = 'hardlim', other option = 'hardlims'
LF = learning function, default = 'learnp', other option = 'learnpn'
learnp: Δw = (t - a)p' = ep'
learnpn: normalized learnp
hardlim = hard-limit transfer function; hardlims = symmetric hard-limit transfer function
Update rule: W_new = W_old + ΔW, b_new = b_old + e, where e = t - a

12 Matlab code:
P = [0 0 1 1; 0 1 0 1]; T = [0 0 0 1];
net = newp([0 1; 0 1],1);
weight_init = net.IW{1,1}
bias_init = net.b{1}
net.trainParam.epochs = 20;
net = train(net,P,T);
weight_final = net.IW{1,1}
bias_final = net.b{1}
simulation = sim(net,P)
Result: weight_init = [0 0], bias_init = 0; weight_final = [2 1], bias_final = -3

13 Matlab code:
P = [0 0 1 1; 0 1 0 1]; T = [0 1 1 1];
net = newp([0 1; 0 1],1);
weight_init = net.IW{1,1}
bias_init = net.b{1}
net.trainParam.epochs = 20;
net = train(net,P,T);
weight_final = net.IW{1,1}
bias_final = net.b{1}
simulation = sim(net,P)
Result: weight_init = [0 0], bias_init = 0; weight_final = [1 1], bias_final = -1
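The perceptron rule from slide 11 can be checked outside MATLAB. This NumPy sketch applies W_new = W_old + e*p', b_new = b_old + e, cycling through the samples; assuming training visits the samples in the same order as the slides' code, it reproduces the final weights of both examples:

```python
import numpy as np

def train_perceptron(P, T, max_epochs=20):
    """Perceptron rule: for each sample, e = t - hardlim(w@p + b),
    then w += e*p and b += e. P holds inputs as columns (R x Q)."""
    w = np.zeros(P.shape[0])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for p, t in zip(P.T, T):
            a = 1.0 if w @ p + b >= 0 else 0.0  # hardlim output
            e = t - a
            if e != 0.0:
                w += e * p   # perceptron weight update
                b += e       # perceptron bias update
                errors += 1
        if errors == 0:      # stop once every sample is classified correctly
            break
    return w, b

P = np.array([[0, 0, 1, 1], [0, 1, 0, 1]], dtype=float)
w_and, b_and = train_perceptron(P, np.array([0.0, 0, 0, 1]))  # AND targets
w_or,  b_or  = train_perceptron(P, np.array([0.0, 1, 1, 1]))  # OR targets
print(w_and, b_and)  # [2. 1.] -3.0  (matches slide 12)
print(w_or,  b_or)   # [1. 1.] -1.0  (matches slide 13)
```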

14 Create a linear filter with net = newlin(PR,S,ID,LR)
PR = Rx2 matrix of min and max values for the R input elements
S = number of neurons
ID = input delay vector
LR = learning rate
The only transfer function available for a linear filter is the linear one (purelin).

15 Matlab Code: To set up this feedforward network, use the following commands:
P = [1 2 2 3; 2 1 3 1]; T = [-100 50 50 100];
net = newlin(P,T);
net.IW{1,1} = [1 2];
net.b{1} = 0;
A = sim(net,P)
Result: compare A with the target T = [-100 50 50 100].
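For this static linear network, sim reduces to a single matrix expression: with purelin, A = W*P + b over all input columns at once. A NumPy sketch of the same computation shows the untrained output is far from the targets:

```python
import numpy as np

# Same inputs and hand-set weights as the MATLAB example above.
P = np.array([[1, 2, 2, 3], [2, 1, 3, 1]], dtype=float)
W = np.array([[1, 2]], dtype=float)  # net.IW{1,1}
b = 0.0                              # net.b{1}

A = W @ P + b  # purelin: output is just the net input
print(A)  # [[5. 4. 8. 5.]] -- far from T = [-100 50 50 100]; the net is untrained
```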

16 Matlab Code:
P = {1 2 3 4}; T = {10, 3, 7};
net = newlin(P,T,[0 1]);
net.biasConnect = 0;
net.IW{1,1} = [1 2];
A = sim(net,P)
Result: A

17 Matlab code:
P = {[1 4] [2 3] [3 2] [4 1]}; T = {10, 3, 7};
net = newlin(P,T,[0 1]);
net.biasConnect = 0;
net.IW{1,1} = [1 2]
A = sim(net,P);
Result: A = {[1 4] [4 11] [7 8] [10 5]}. In each pair, the first element is the output produced by the first input sequence and the second element is the output produced by the second input sequence.
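The output pairs above can be reproduced by hand: with delays [0 1], no bias, and IW = [1 2], the network computes a(t) = 1*p(t) + 2*p(t-1). A NumPy-free Python sketch (assuming an initial delay state of 0, as in the slide):

```python
def linear_filter(p, w=(1, 2), p_init=0):
    """Tapped-delay linear filter: a(t) = w[0]*p(t) + w[1]*p(t-1)."""
    out, prev = [], p_init
    for x in p:
        out.append(w[0] * x + w[1] * prev)
        prev = x
    return out

seq1 = [1, 2, 3, 4]       # first concurrent sequence
seq2 = [4, 3, 2, 1]       # second concurrent sequence
a1 = linear_filter(seq1)  # [1, 4, 7, 10]
a2 = linear_filter(seq2)  # [4, 11, 8, 5]

# Interleaved per time step this is {[1 4] [4 11] [7 8] [10 5]}, as on the slide.
print(list(zip(a1, a2)))  # [(1, 4), (4, 11), (7, 8), (10, 5)]
```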

18 Example:
PA = {[1 4] [2 3] [3 2] [4 1]};
PB = {[1 0 1 0] [0 1 1 1] [0 0 1 1] [1 1 0 1] [0 1 0 0]};

19 Incremental Training with Static Networks
Matlab code:
P = {[1;2] [2;1] [2;3] [3;1]}; T = {4 5 7 7};
net = newlin(P,T,0,0);
net.IW{1,1} = [0 0]; net.b{1} = 0;
% train the network incrementally
[net,a,e,pf] = adapt(net,P,T);
% with learning rate 0 the weights are not updated; set lr = 0.1 and adapt again
net.inputWeights{1,1}.learnParam.lr = 0.1;
net.biases{1,1}.learnParam.lr = 0.1;
[net,a,e,pf] = adapt(net,P,T);
Result: with learning rate 0.1, the weights and bias are updated after each sample.
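The incremental pass can be sketched in NumPy with the LMS rule (after each sample: e = t - a, W += lr*e*p', b += lr*e). The slide does not print the numbers, so the values below are only what this sketch computes under that assumption:

```python
import numpy as np

# Same data as the MATLAB example: four samples, lr = 0.1, zero start.
P = np.array([[1, 2], [2, 1], [2, 3], [3, 1]], dtype=float)  # one sample per row
T = np.array([4, 5, 7, 7], dtype=float)
w = np.zeros(2)
b = 0.0
lr = 0.1

outputs = []
for p, t in zip(P, T):
    a = w @ p + b       # purelin output for this sample
    e = t - a           # error
    w += lr * e * p     # incremental weight update
    b += lr * e         # incremental bias update
    outputs.append(a)

print(np.round(outputs, 2))             # outputs seen during the pass
print(np.round(w, 2), round(b, 2))      # final w ≈ [1.56 1.52], b ≈ 0.92
```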

20 Matlab Code:
Pi = {1}; P = {2 3 4}; T = {3 5 7};
% Create a linear network with one delay at the input,
% initialize the weights to zero, and set the learning rate to 0.1.
net = newlin(P,T,[0 1],0.1);
net.IW{1,1} = [0 0];
net.biasConnect = 0;
% now sequentially train the network using adapt
[net,a,e,pf] = adapt(net,P,T,Pi);
Result:

21 Batch training, in which the weights and biases are updated only after all the inputs and targets have been presented, can be applied to both static and dynamic networks.
Matlab code:
P = [1 2 2 3; 2 1 3 1]; T = [4 5 7 7];
net = newlin(P,T,0,0.1);
net.IW{1,1} = [0 0]; net.b{1} = 0;
[net,a,e,pf] = adapt(net,P,T);
Result (learning rate 0.1): w11 = 4.9, w12 = 4.1, b = 2.3
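The batch result can be verified by hand: with zero initial weights every output is 0, so e = T, and the single summed update is ΔW = lr·Σ e·p' and Δb = lr·Σ e. A NumPy sketch of that one pass reproduces the slide's numbers:

```python
import numpy as np

# Same data as the MATLAB example: inputs as columns, lr = 0.1, zero start.
P = np.array([[1, 2, 2, 3], [2, 1, 3, 1]], dtype=float)
T = np.array([4, 5, 7, 7], dtype=float)
W = np.zeros((1, 2))
b = 0.0
lr = 0.1

a = (W @ P).ravel() + b  # all outputs with the initial (zero) weights
e = T - a                # errors for the whole batch
W = W + lr * (e @ P.T)   # one summed weight update: 0.1 * [49 41]
b = b + lr * e.sum()     # one summed bias update:   0.1 * 23
print(W, round(b, 2))    # [[4.9 4.1]] 2.3 -- matching the slide
```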

22 Matlab code:
P = [1 2 2 3; 2 1 3 1]; T = [4 5 7 7];
net = newlin(P,T,0,0.1);
net.IW{1,1} = [0 0]; net.b{1} = 0;
net.inputWeights{1,1}.learnParam.lr = 0.1;
net.biases{1}.learnParam.lr = 0.1;
net.trainParam.epochs = 1;
net = train(net,P,T);
Result:

23 Matlab code:
Pi = {1}; P = {2 3 4}; T = {3 5 6};
net = newlin(P,T,[0 1],0.02);
net.IW{1,1} = [0 0];
net.biasConnect = 0;
net.trainParam.epochs = 1;
net = train(net,P,T,Pi);
Result:

24


