Neuron Model and Network Architectures
Single-Input Neuron
General neuron: a = f(wp + b). The scalar weight w and bias b are both adjustable parameters of the neuron.
Example: Suppose w = 3, p = 2, and b = -1.5. Then a = f(3(2) - 1.5) = f(4.5).
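As a minimal sketch of this computation in Python (the function name and the choice of a linear transfer function are my additions, not part of the original demos):

    def single_input_neuron(p, w, b, f):
        """Compute a = f(w*p + b) for a single-input neuron."""
        return f(w * p + b)

    # Example from above, assuming a linear (purelin) transfer function:
    a = single_input_neuron(p=2.0, w=3.0, b=-1.5, f=lambda n: n)
    print(a)  # 4.5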
Transfer Functions
Hard limit transfer function: a = hardlim(n). Single-neuron hardlim network: a = hardlim(wp + b).
Linear transfer function: a = purelin(n). Single-neuron purelin network: a = purelin(wp + b).
Log-sigmoid transfer function: a = logsig(n) = 1 / (1 + e^(-n)). Single-neuron logsig network: a = logsig(wp + b).
Demo: nnd2n1
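A sketch of the three transfer functions in NumPy (these Python definitions are my own; the originals are MATLAB functions in the Neural Network Design software):

    import numpy as np

    def hardlim(n):
        """Hard limit: 0 if n < 0, else 1."""
        return np.where(n < 0, 0.0, 1.0)

    def purelin(n):
        """Linear: output equals net input."""
        return n

    def logsig(n):
        """Log-sigmoid: squashes n into (0, 1)."""
        return 1.0 / (1.0 + np.exp(-n))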
Multiple-Input Neuron
n = w_{1,1} p_1 + w_{1,2} p_2 + ... + w_{1,R} p_R + b
We will show this in matrix form as n = Wp + b. Then a can be written as a = f(Wp + b).
Abbreviated Notation
Demo: nnd2n2
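A minimal sketch of the multiple-input neuron as a dot product plus a bias (the function name and the illustrative values are my assumptions):

    import numpy as np

    def multi_input_neuron(p, w, b, f):
        """Compute a = f(Wp + b) for one neuron: w and p hold R elements."""
        n = np.dot(w, p) + b  # n = w_{1,1}p_1 + ... + w_{1,R}p_R + b
        return f(n)

    # R = 3 inputs, arbitrary illustrative values, linear transfer function:
    a = multi_input_neuron(p=np.array([1.0, -2.0, 0.5]),
                           w=np.array([0.2, 0.4, -0.1]),
                           b=0.3, f=lambda n: n)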
Layer of Neurons
The number of inputs to a layer can be different from the number of neurons. Neurons in a layer may have different transfer functions. For a layer, a = f(Wp + b).
Abbreviated Notation

W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & \vdots & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}, \quad p = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_R \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_S \end{bmatrix}, \quad a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_S \end{bmatrix}

Here W is S x R, p is R x 1, and b and a are S x 1, where R is the number of inputs and S the number of neurons.
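With those shapes, a whole layer is one matrix-vector product; a minimal NumPy sketch (the function name and values are illustrative assumptions):

    import numpy as np

    def layer(p, W, b, f):
        """Layer of S neurons: W is SxR, p has R elements, b and a have S."""
        return f(W @ p + b)

    # S = 2 neurons, R = 3 inputs, illustrative values, logsig transfer function:
    W = np.array([[0.1, 0.2, 0.3],
                  [0.4, 0.5, 0.6]])
    a = layer(p=np.array([1.0, 2.0, 3.0]), W=W,
              b=np.array([0.0, -1.0]), f=lambda n: 1.0 / (1.0 + np.exp(-n)))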
Multilayer Network
Abbreviated Notation
Hidden layers: a^1 = f^1(W^1 p + b^1) and a^2 = f^2(W^2 a^1 + b^2). Output layer: a^3 = f^3(W^3 a^2 + b^3).
Composing the layers: a^3 = f^3(W^3 f^2(W^2 f^1(W^1 p + b^1) + b^2) + b^3)
In this notation the superscript gives the layer number, and the subscripts of a weight give the neuron number and the input number.
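A forward-pass sketch of this three-layer composition (layer sizes, the random weights, and the transfer-function choices are illustrative assumptions):

    import numpy as np

    def logsig(n):
        return 1.0 / (1.0 + np.exp(-n))

    def forward(p, weights, biases, transfers):
        """Compute a^k = f^k(W^k a^{k-1} + b^k) for each layer, with a^0 = p."""
        a = p
        for W, b, f in zip(weights, biases, transfers):
            a = f(W @ a + b)
        return a

    # Three layers: R = 2 inputs -> 3 -> 3 -> 1 output, random illustrative weights.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((3, 2)),
               rng.standard_normal((3, 3)),
               rng.standard_normal((1, 3))]
    biases = [np.zeros(3), np.zeros(3), np.zeros(1)]
    a3 = forward(np.array([1.0, -1.0]), weights, biases,
                 [logsig, logsig, lambda n: n])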
Delays and Integrators
Delay block: a(t) = u(t - 1), initialized by a(0). Integrator block: a(t) = ∫_0^t u(τ) dτ + a(0).
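A discrete-time sketch of the two building blocks (Python stand-ins of my own; the integrator is approximated by a running sum with step dt):

    class Delay:
        """Unit delay: output at time t is the input from time t-1."""
        def __init__(self, a0=0.0):
            self.prev = a0
        def step(self, u):
            out, self.prev = self.prev, u
            return out

    class Integrator:
        """Running-sum approximation of a(t) = integral of u plus a(0)."""
        def __init__(self, a0=0.0, dt=1.0):
            self.a, self.dt = a0, dt
        def step(self, u):
            self.a += u * self.dt
            return self.a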
Recurrent Network
The layer output is fed back through a delay, so a(t + 1) = f(W a(t) + b) with initial condition a(0) = p.
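A minimal iteration sketch, assuming the symmetric saturating linear transfer function satlins (clipping to [-1, 1]); the weights and step count are illustrative:

    import numpy as np

    def satlins(n):
        """Symmetric saturating linear: clip n to [-1, 1]."""
        return np.clip(n, -1.0, 1.0)

    def recurrent(p, W, b, steps):
        """Iterate a(t+1) = satlins(W a(t) + b), starting from a(0) = p."""
        a = p
        for _ in range(steps):
            a = satlins(W @ a + b)
        return a

    # Illustrative 2-neuron recurrent layer:
    W = np.array([[0.5, -0.2],
                  [0.1,  0.4]])
    a = recurrent(p=np.array([1.0, -1.0]), W=W, b=np.zeros(2), steps=5)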