2 1 Neuron Model and Network Architectures
2 2 Biological Inspiration
2 3 Neuron Model: a_1 through a_n are the components of the input vector; w_1 through w_n are the synaptic weights of the neuron; b is the bias; f is the transfer function, usually nonlinear. For example, hardlim(n) outputs 1 when n >= 0 and 0 otherwise. t is the neuron output.
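As a minimal sketch of this model (the function and variable names below are illustrative, not from the deck), a single neuron forms the weighted sum of its inputs, adds the bias, and passes the result through the transfer function:

```python
def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def neuron_output(inputs, weights, bias, f=hardlim):
    """t = f(sum_i w_i * a_i + b)"""
    n = sum(w * a for w, a in zip(weights, inputs)) + bias
    return f(n)

# Illustrative two-input case: n = 2*1.0 + (-1.0)*0.5 - 0.5 = 1.0
print(neuron_output(inputs=[1.0, 0.5], weights=[2.0, -1.0], bias=-0.5))  # hardlim(1.0) = 1
```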
2 4 Notation: scalars are small italic letters (a, b, c); vectors are small bold non-italic letters (a, b, c); matrices are capital BOLD non-italic letters (A, B, C). Input: p, p, P. Weight: w, w, W. Bias: b, b. Output: a, a, a(t).
2 5 Single-Input Neuron. Example 1: w = 3, p = 2, and b = -1.5, then a = f(3(2) - 1.5) = f(4.5).
2 6 Transfer Functions. Example 2: w = 3, p = 2, and b = -1.5, then a = hardlim(3(2) - 1.5) = hardlim(4.5) = 1. Hard limit: a = 0 for n < 0, a = 1 for n >= 0.
2 7 Transfer Functions. Example 3: w = 3, p = 2, and b = -1.5, then a = purelin(3(2) - 1.5) = purelin(4.5) = 4.5.
2 8 Transfer Functions. Example 4: w = 3, p = 2, and b = -1.5, then a = logsig(3(2) - 1.5) = logsig(4.5) = 1/(1 + e^(-4.5)) ≈ 0.989.
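A short sketch reproducing Examples 2 through 4: the same net input n = wp + b = 3(2) - 1.5 = 4.5 is passed through the three transfer functions. The function names follow the slides; the implementations are assumptions based on their standard definitions.

```python
import math

def hardlim(n):   # a = 0 for n < 0, a = 1 for n >= 0
    return 1 if n >= 0 else 0

def purelin(n):   # a = n
    return n

def logsig(n):    # a = 1 / (1 + e^(-n))
    return 1.0 / (1.0 + math.exp(-n))

w, p, b = 3.0, 2.0, -1.5
n = w * p + b                 # 4.5
print(hardlim(n))             # 1
print(purelin(n))             # 4.5
print(round(logsig(n), 3))    # 0.989
```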
2 9 Transfer Functions
2 10 Transfer Functions (continued): output ranges 0 <= a <= 1 (log-sigmoid) and -1 <= a <= 1 (hyperbolic tangent sigmoid).
2 11 Multiple-Input Neuron: Neuron with R Inputs; Abbreviated Notation.
2 12 Example P2.3: Given a two-input neuron with the following parameters: b = 1.2, W = [3 2], and p = [-5 6]^T, calculate the neuron output for the following transfer functions: (i) a symmetrical hard limit transfer function; (ii) a saturating linear transfer function; (iii) a hyperbolic tangent sigmoid (tansig) transfer function. With n = Wp + b = 3(-5) + 2(6) + 1.2 = -1.8: (i) a = hardlims(-1.8) = -1; (ii) a = satlin(-1.8) = 0; (iii) a = tansig(-1.8) ≈ -0.947.
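A sketch of Example P2.3 in code. The function names follow the slides (hardlims, satlin, tansig); the implementations are their standard definitions and are assumptions on my part.

```python
import math

def hardlims(n):                   # symmetrical hard limit: -1 or +1
    return 1 if n >= 0 else -1

def satlin(n):                     # saturating linear: clip to [0, 1]
    return min(max(n, 0.0), 1.0)

def tansig(n):                     # hyperbolic tangent sigmoid
    return math.tanh(n)

W = [3.0, 2.0]
p = [-5.0, 6.0]
b = 1.2
n = sum(w_i * p_i for w_i, p_i in zip(W, p)) + b   # 3*(-5) + 2*6 + 1.2 = -1.8

print(hardlims(n))            # -1
print(satlin(n))              # 0.0
print(round(tansig(n), 3))    # -0.947
```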
2 13 Layer of S Neurons: R inputs and S outputs (in general R ≠ S).
2 14 Abbreviated Notation:
$$\mathbf{W} = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1R} \\ w_{21} & w_{22} & \cdots & w_{2R} \\ \vdots & \vdots & & \vdots \\ w_{S1} & w_{S2} & \cdots & w_{SR} \end{bmatrix},\quad
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_S \end{bmatrix},\quad
\mathbf{p} = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_R \end{bmatrix},\quad
\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_S \end{bmatrix}$$
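A quick sketch of these dimensions, assuming NumPy as the vehicle: W is S x R, p is R x 1, b and a are S x 1, and the layer output is a = f(Wp + b). The sizes and random values below are illustrative only.

```python
import numpy as np

R, S = 3, 2                        # illustrative: 3 inputs, 2 neurons
rng = np.random.default_rng(0)

W = rng.standard_normal((S, R))    # weight matrix; w_ij connects input j to neuron i
b = rng.standard_normal((S, 1))    # bias vector
p = rng.standard_normal((R, 1))    # input vector

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

a = logsig(W @ p + b)              # layer output
print(a.shape)                     # (2, 1)
```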
2 15 Multiple Layers of Neurons: Three-Layer Network.
2 16 Abbreviated Notation: Hidden Layers and Output Layer.
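Chaining the layers in this abbreviated notation gives a3 = f3(W3 f2(W2 f1(W1 p + b1) + b2) + b3). A sketch assuming NumPy, with illustrative layer sizes and transfer-function choices that are not specified on the slide:

```python
import numpy as np

R, S1, S2, S3 = 4, 5, 3, 2                     # illustrative sizes only
rng = np.random.default_rng(1)

W1, b1 = rng.standard_normal((S1, R)),  rng.standard_normal((S1, 1))
W2, b2 = rng.standard_normal((S2, S1)), rng.standard_normal((S2, 1))
W3, b3 = rng.standard_normal((S3, S2)), rng.standard_normal((S3, 1))

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))    # hidden-layer transfer function
purelin = lambda n: n                          # output-layer transfer function

p = rng.standard_normal((R, 1))
a1 = logsig(W1 @ p + b1)                       # first hidden layer
a2 = logsig(W2 @ a1 + b2)                      # second hidden layer
a3 = purelin(W3 @ a2 + b3)                     # output layer
print(a3.shape)                                # (2, 1)
```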
2 17 Delays and Integrators: the delay block output is the input delayed by one time step, a(t) = u(t - 1), initialized with a(0); for example, a(1) = u(0). The integrator output is $a(t) = \int_0^t u(\tau)\,d\tau + a(0)$.
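A sketch of the delay block's behavior; the helper name run_delay and the sample values are illustrative, not from the deck.

```python
def run_delay(u, a0):
    """Return a(0), a(1), ..., a(T) for inputs u(0), ..., u(T-1), where a(t) = u(t-1)."""
    a = [a0]                       # output starts from the initial condition a(0)
    for t in range(len(u)):
        a.append(u[t])             # a(t+1) = u(t)
    return a

u = [5.0, -2.0, 7.0]
print(run_delay(u, a0=0.0))        # [0.0, 5.0, -2.0, 7.0]
```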
2 18 Recurrent Network
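One way such a recurrent network can be written as a sketch, under assumptions not spelled out on this slide: the input supplies the initial condition a(0) = p, a symmetric saturating linear transfer function is used, and the sizes and values are illustrative.

```python
import numpy as np

satlins = lambda n: np.clip(n, -1.0, 1.0)    # symmetric saturating linear (assumed)

S = 3
rng = np.random.default_rng(2)
W = 0.5 * rng.standard_normal((S, S))
b = rng.standard_normal((S, 1))
p = rng.standard_normal((S, 1))

a = p                                        # a(0) = p (assumed initial condition)
for t in range(5):                           # iterate a few time steps
    a = satlins(W @ a + b)                   # a(t+1) = f(W a(t) + b)
print(a.ravel())
```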