Presentation on theme: "Artificial Neural Network". Presentation transcript:

1 Artificial Neural Network
Computational Systems Biology, Young-Mook Kang

2 Neuron

3 Linear Learning
W(t+1) is a better decision vector than W(t) because W(t+1) misclassifies only two objects, compared with five for W(t). The perfect decision vector W should separate all objects of category C1 (full circles) from those of category C2 (empty circles).
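The slide's figures are not preserved; as one standard realization of this kind of linear learning, here is a minimal perceptron-style update in Python (the learning rate, toy coordinates, and labels are assumptions for illustration, not values from the slide):

```python
import numpy as np

def perceptron_step(W, x, target, lr=0.1):
    """One linear-learning step: nudge the decision vector W toward
    correctly classifying object x (target is +1 for C1, -1 for C2)."""
    prediction = 1 if W @ x >= 0 else -1
    if prediction != target:          # misclassified: move W toward/away from x
        W = W + lr * target * x
    return W

# Toy data: two objects from C1 (full circles), two from C2 (empty circles),
# each with a trailing 1 so the last weight acts as the bias term.
objects = np.array([[2.0, 1.0, 1.0], [1.5, 2.0, 1.0],
                    [-1.0, -0.5, 1.0], [-2.0, -1.0, 1.0]])
targets = [1, 1, -1, -1]

W = np.zeros(3)                       # W(0)
for epoch in range(10):               # W(t) -> W(t+1) -> ...
    for x, t in zip(objects, targets):
        W = perceptron_step(W, x, t)
print("learned decision vector:", W)
```

Each pass over the objects produces the next decision vector W(t+1) from W(t), and the loop stops changing W once every object is classified correctly.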

4 Bias
The family of straight-line functions y = ax cannot separate the class of points A, B, and C from the class represented by point D. Introducing the bias, δ, into the decision function (y = ax + δ) makes D separable from A, B, and C.
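For a concrete instance (coordinates chosen here for illustration, not taken from the slide): put A = (-1, 2), B = (1, 2), C = (0, 3) in one class and D = (0, 1) in the other. Every line y = ax passes through the origin, so at x = 0 it predicts y = 0 and C and D always fall on the same side of it; adding a bias shifts the line off the origin:

```latex
y = a x + \delta, \qquad a = 0,\ \delta = 1.5 \;\Rightarrow\; y = 1.5
```

which places A, B, and C above the line and D below it, separating the two classes.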

5 Bias (the bias shown as an extra input fixed at 1)

6 Transfer Functions
Output = TF(Net). Three common choices: the hard-limiter, threshold logic, and the sigmoidal function.
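The formulas appear on the following slides as images; here is a minimal Python sketch using the usual textbook definitions (the ramp form of threshold logic and the 0/1 vs. -1/+1 hard-limiter outputs are standard conventions, assumed rather than read off the slides):

```python
import numpy as np

def hard_limiter_binary(net):
    """Binary hard-limiter: 1 if net >= 0, else 0."""
    return np.where(net >= 0, 1.0, 0.0)

def hard_limiter_bipolar(net):
    """Bipolar hard-limiter: +1 if net >= 0, else -1."""
    return np.where(net >= 0, 1.0, -1.0)

def threshold_logic(net):
    """Threshold logic: linear ramp clipped to [0, 1]."""
    return np.clip(net, 0.0, 1.0)

def sigmoidal(net):
    """Sigmoidal function: smooth squashing into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

net = np.linspace(-3, 3, 7)
for tf in (hard_limiter_binary, hard_limiter_bipolar, threshold_logic, sigmoidal):
    print(tf.__name__, tf(net).round(2))      # Output = TF(Net)
```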

7 Hard-limiter
A binary hard-limiter (outputs 0 or 1) and a bipolar hard-limiter (outputs -1 or +1).

8 Threshold logic

9 Sigmoidal Function
The most widely used transfer function in neural network applications is the so-called sigmoidal function, sf.

10 Architecture
By layer:
- One-layer networks: Hopfield network, Kohonen network (self-organizing map), ...
- Multi-layer networks: counter-propagation, back-propagation of errors, ...
By learning method:
- Unsupervised (competitive) learning: Kohonen network (self-organizing map), ...
- Supervised learning: back-propagation of errors, ...

11 Architecture
One-layer networks vs. multi-layer networks (figure).

12 Architecture
Unsupervised learning vs. supervised competitive learning (figure).

13 Ex. 1) Application of QSAR
Learning method: back-propagation of errors.
Compounds: training set of 28 compounds, prediction set of 55 compounds.
Property: AIT (auto-ignition temperature).
Descriptors used: ...

14 Ex. 1) Application of QSAR
Back-propagation of errors (figure).

15 Learning Algorithm of BP
If ... (condition shown as an image), then the final weight change is: ... (equation shown as an image).
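Since the condition and the update equation survive only as images, here is a hedged sketch of back-propagation for one hidden layer, assuming sigmoid units, squared-error loss, and plain gradient descent (standard choices that the slide may or may not match); the 21-neuron, 400-iteration settings of the next slide are reused for the toy run:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_epoch(X, y, W1, W2, lr=0.5):
    """One epoch of back-propagation for a 1-hidden-layer network.
    X: (n, d) inputs, y: (n, 1) targets; W1: (d, h), W2: (h, 1).
    Biases are omitted for brevity."""
    # forward pass
    H = sigmoid(X @ W1)                              # hidden activations
    out = sigmoid(H @ W2)                            # network output
    # backward pass: delta rule for squared-error loss
    delta_out = (out - y) * out * (1 - out)          # output-layer error signal
    delta_hid = (delta_out @ W2.T) * H * (1 - H)     # propagated to hidden layer
    # final weight change: dW = -lr * dE/dW
    W2 -= lr * H.T @ delta_out
    W1 -= lr * X.T @ delta_hid
    return W1, W2, float(((out - y) ** 2).mean())

# Toy run on XOR with 21 hidden neurons and 400 training iterations.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, W2 = rng.normal(size=(2, 21)), rng.normal(size=(21, 1))
for epoch in range(400):
    W1, W2, mse = bp_epoch(X, y, W1, W2)
print("final MSE:", round(mse, 4))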

16 Selected Architecture
400 training iterations and a hidden layer with 21 neurons.

17 Result
Training set: RMS = 5.37, AEP = 3.93, R^2 = 0.9692 (R = 0.9845)
Prediction set: RMS = 29.15, AEP = 24.53, R^2 = ... (R = ...)
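For reference, the reported statistics can be computed as below; reading AEP as the average absolute error of prediction is an assumption on my part (note that sqrt(0.9692) = 0.9845, consistent with the slide's R value):

```python
import numpy as np

def qsar_stats(y_true, y_pred):
    """RMS, AEP (average absolute error of prediction), R^2, and R."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_pred - y_true
    rms = float(np.sqrt((resid ** 2).mean()))
    aep = float(np.abs(resid).mean())
    ss_res = float((resid ** 2).sum())
    ss_tot = float(((y_true - y_true.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return {"RMS": rms, "AEP": aep, "R2": r2, "R": float(np.sqrt(max(r2, 0.0)))}

# Toy AIT-like values in degrees C (made up, not from the slides).
print(qsar_stats([400, 450, 500], [404, 446, 507]))
```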

18 Ex. 2) Application of QSAR
Compounds: S = aqueous solubility of organic compounds; unit: log S.
Training set: 888 compounds (range: ... to 1.58).
Validation set: 514 compounds (... to 1.56).
Test set: 368 compounds (... to 1.22).

19 Genetic Functional Approximation
GA settings: population size 1,000; generations 100,000; equation length 11; original descriptors 125; term creation types 5 (e.g. square, sqrt, log2, exp, ..., giving 625 derived terms); total descriptors 750.
Selected descriptors (times selected): 2D-VSA hydrophobic (1000), ASlogP value (...), 2D-VSA polar (506), Delta Chi 0 (503), atomic polarizabilities (499), I_adj_equ (489), 2D-VSA negatively charged groups (478), no. positively charged groups (384), formal charge (123), no. negatively charged groups (43), no. H-bond acceptors (27), Delta Chi 4 path/cluster (25), Delta Chi 3 cluster (21), graph vertex complexity (17), valence bound charge index (5), Chi 4 path/cluster (...), Wiener index (15), 2-MTI prime (13), Kier shape 1 (...), no. rotatable bonds (10).
Generated equations: Equation 1 through Equation 6, each with its R^2 (values not preserved).
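Genetic functional approximation evolves regression equations over descriptor subsets; a much-reduced Python sketch of the selection loop follows (synthetic data, a small population, and mutation-only evolution are simplifications for illustration, not the settings listed above):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_desc, eq_len = 200, 50, 11     # toy sizes; equation length as above

# Synthetic descriptor matrix and property vector standing in for the log S data.
X = rng.normal(size=(n_samples, n_desc))
y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + 0.1 * rng.normal(size=n_samples)

def fitness(cols):
    """R^2 of a least-squares equation built from the chosen descriptors."""
    A = np.column_stack([X[:, list(cols)], np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def mutate(cols):
    """Swap one descriptor for a random unused one."""
    cols = list(cols)
    cols[rng.integers(eq_len)] = rng.choice(
        [d for d in range(n_desc) if d not in cols])
    return tuple(cols)

# Evolve a small population of descriptor subsets (crossover omitted).
pop = [tuple(rng.choice(n_desc, eq_len, replace=False)) for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:15] + [mutate(p) for p in pop[:15]]   # keep elite, mutate copies
best = max(pop, key=fitness)
print("best equation R^2:", round(fitness(best), 3), "descriptors:", best)
```

Counting how often each descriptor survives into high-fitness equations across runs yields a "times selected" ranking like the table above.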

20 Scaling Rule
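The slide body is an image, so the actual rule is not recoverable here. Networks with sigmoid outputs typically need the target property scaled into the unit interval, so one plausible reading is a min-max rule like the following sketch (the interval [0.1, 0.9] and the data values are my assumptions):

```python
import numpy as np

def scale(v, lo=0.1, hi=0.9):
    """Min-max scale v into [lo, hi]; return the parameters to invert later."""
    vmin, vmax = float(v.min()), float(v.max())
    return lo + (hi - lo) * (v - vmin) / (vmax - vmin), (vmin, vmax)

def unscale(s, params, lo=0.1, hi=0.9):
    """Invert scale(), mapping network outputs back to property units."""
    vmin, vmax = params
    return vmin + (vmax - vmin) * (s - lo) / (hi - lo)

logS = np.array([-8.0, -3.5, 0.0, 1.5])      # made-up log S values
scaled, params = scale(logS)
assert np.allclose(unscale(scaled, params), logS)
```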

21 Optimization of ANN Structure
Neurons in the hidden layer = 30.
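The hidden-layer size (30 here, 21 in Ex. 1) is commonly chosen by scanning candidate sizes against the validation set; a sketch using scikit-learn's MLPRegressor, which is my choice of tool rather than anything named in the slides:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))                       # toy descriptors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)     # toy property
X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

best_size, best_rmse = None, np.inf
for h in (5, 10, 21, 30, 50):                        # candidate hidden sizes
    net = MLPRegressor(hidden_layer_sizes=(h,), activation="logistic",
                       max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    rmse = float(np.sqrt(((net.predict(X_val) - y_val) ** 2).mean()))
    if rmse < best_rmse:
        best_size, best_rmse = h, rmse
print("selected hidden-layer size:", best_size, "val RMSE:", round(best_rmse, 3))
```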

22 Training set (eqn 14): RMSE = ..., AME = ..., R^2 = ...

23 Prediction set (eqn 14): RMSE = ..., AME = ..., R^2 = ...

24 Test set (eqn 14): RMSE = ..., AME = ..., R^2 = ...

25 Comparison of log S with commercial prediction programs
Name | Reference / Method | Calculated compds. | R^2 | Mean absolute error | Range^a (%)
ASlogS | PreADME / atom-additive method (2D structure) | 1403 | 0.9100 | 0.480 | 90.02
logS prediction | Accelrys Cerius2 4.6 C2-ADME / genetic partial least-squares regression (using AlogP98 and FPSA; 2D structure) | 1387 | 0.8039 | 0.688 | 75.77
QikProp | QikProp 1.6 / Monte Carlo simulation and multiple linear regression (3D structure) | 1402 | 0.7894 | 0.725 | 75.96

