1
NeSy-2006, ECAI-06 Workshop, 29 August 2006, Riva del Garda, Italy
Jim Prentzas & Ioannis Hatzilygeroudis
Construction of Neurules from Training Examples: A Thorough Investigation
University of Patras, Dept of Computer Engin. & Informatics & TEI of Lamia, Dept of Informatics & Computer Technology, GREECE
2
Outline
Neurules: An overview
Neurules: Syntax and semantics
Production process - Splitting
Neurules and Generalization
Conclusions
4
Neurules: An Overview (1)
Neurules integrate symbolic (propositional) rules and the Adaline neural unit, giving pre-eminence to the symbolic framework
Neurules were initially designed as an efficiency improvement to propositional rules and were produced from them
To facilitate knowledge acquisition, a method for producing neurules from empirical data was specified
5
Neurules: An Overview (2)
Preserve the naturalness and modularity of production rules to some (large?) degree
Reduce the size of the produced knowledge base
Increase inference efficiency
Allow for efficient and natural explanations
6
Outline
Neurules: An overview
Neurules: Syntax and semantics
Production process - Splitting
Neurules and Generalization
Conclusions
7
Neurules: Syntax and Semantics (1)
(sf_0) if C_1 (sf_1), C_2 (sf_2), …, C_n (sf_n) then D
C_i: conditions ('fever is high')
D: conclusion ('disease is inflammation')
sf_0: bias factor; sf_i: significance factors
8
Neurules: Syntax and Semantics (2)
[Diagram: a neurule as an Adaline unit — inputs C_1, C_2, …, C_n weighted by (sf_1), (sf_2), …, (sf_n), bias (sf_0), and threshold activation f(x) producing D]
C_i ∈ {1, -1, 0}, corresponding to {true, false, unknown}
D ∈ {1, -1}, corresponding to {success, failure}
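Semantically, a neurule computes the Adaline weighted sum of its condition values and thresholds it. The following minimal Python sketch (hypothetical function and variable names, not taken from the paper) illustrates such an evaluation, assuming a simple sign-like activation:

```python
def evaluate_neurule(bias, sig_factors, condition_values):
    """Evaluate a neurule as an Adaline unit.

    bias             -- the bias factor sf_0
    sig_factors      -- significance factors sf_1 .. sf_n
    condition_values -- condition truth values: 1 (true), -1 (false), 0 (unknown)

    Returns 1 (conclusion succeeds) or -1 (conclusion fails).
    """
    weighted_sum = bias + sum(sf * c for sf, c in zip(sig_factors, condition_values))
    return 1 if weighted_sum > 0 else -1

# Hypothetical example: three conditions, the last one unknown (0).
print(evaluate_neurule(-13.5, [12.4, 11.6, 8.8], [1, 1, 0]))  # -> 1 (success)
```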
9
Outline
Neurules: An overview
Neurules: Syntax and semantics
Production process - Splitting
Neurules and Generalization
Conclusions
10
Initial Neurules Construction
1. Make one initial neurule for each possible conclusion, either intermediate or final (i.e. for each value of the intermediate and output attributes, according to dependency information).
2. The conditions of each initial neurule include all attributes that affect its conclusion, according to dependency information, and all their values.
3. Set the bias and significance factors to some initial values (e.g. 0).
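As an illustration only (hypothetical data structures and names, not the paper's implementation), an initial neurule could be assembled from dependency information along these lines:

```python
def initial_neurule(conclusion, depends_on, attribute_values, initial_factor=0.0):
    """Build one initial neurule for a given conclusion.

    depends_on       -- attributes that affect this conclusion (dependency information)
    attribute_values -- mapping from attribute to its possible values
    All factors (bias and significance factors) start from the same initial value.
    """
    conditions = [f"{attr} is {value}"
                  for attr in depends_on
                  for value in attribute_values[attr]]
    return {"conclusion": conclusion,
            "bias": initial_factor,
            "conditions": conditions,
            "sig_factors": [initial_factor] * len(conditions)}

# Hypothetical example with two contributing attributes:
rule = initial_neurule("disease is inflammation",
                       ["venous-conc", "blood-conc"],
                       {"venous-conc": ["slight", "moderate", "normal", "high"],
                        "blood-conc": ["slight", "moderate", "normal", "high"]})
```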
11
Neurules Production Process
1. Use dependency information to construct the initial neurules.
2. For each initial neurule, create its training set from the available empirical data.
3. Train each initial neurule with its training set.
3.1 If the training succeeds, produce the resulting neurule.
3.2 If not, split the training set into two subsets of close examples and apply steps 3.1 and 3.2 recursively to each subset.
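A minimal sketch of this recursive loop (hypothetical helper names: `train` stands for whatever Adaline training procedure is used, and `closeness_split` is one of the splitting strategies sketched further below):

```python
def produce_neurules(initial_neurule, training_set):
    """Recursively train a copy of an initial neurule, splitting the training set on failure.

    Returns the list of neurules produced for this initial neurule.
    """
    trained, success = train(initial_neurule, training_set)  # assumed Adaline training helper
    if success:
        return [trained]
    subset_a, subset_b = closeness_split(training_set)       # split into subsets of close examples
    return (produce_neurules(initial_neurule, subset_a) +
            produce_neurules(initial_neurule, subset_b))
```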
12
Splitting Strategies (1) DEFINITIONS
training example: [v_1 v_2 … v_n d]
success example: d = 1; failure example: d = -1
closeness: the number of common v_i between two success examples
least closeness pair (LCP): a pair of success examples with the least closeness in a training set
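These definitions translate directly into code; a small sketch (hypothetical names), assuming each example is represented as a list [v_1, …, v_n, d]:

```python
from itertools import combinations

def closeness(ex1, ex2):
    """Number of common condition values v_i between two success examples."""
    return sum(1 for a, b in zip(ex1[:-1], ex2[:-1]) if a == b)

def least_closeness_pairs(success_examples):
    """Return every pair of success examples having the least closeness in the set."""
    pairs = list(combinations(success_examples, 2))
    least = min(closeness(a, b) for a, b in pairs)
    return [(a, b) for a, b in pairs if closeness(a, b) == least]
```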
13
Splitting Strategies (2) REQUIREMENTS
1. Each subset contains all failure examples, to avoid misactivations.
2. Each subset contains at least one success example, to ensure activation of the corresponding neurule.
3. The two subsets should not contain common success examples, to avoid activation of more than one neurule for the same data.
14
Splitting Strategies (1) STRATEGY: CLOSENESS-SPLIT
1. Find the LCPs of the training set S and choose one. Its elements are called pivots.
2. Create two subsets of S, each containing one of the pivots and the success examples of S that are closer to its pivot.
3. Insert in both subsets all the failure examples of S.
4. Train two copies of the initial neurule, one with each subset.
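A sketch of steps 1-3 (hypothetical names, reusing `closeness` and `least_closeness_pairs` from the earlier sketch; the tie-break towards the first pivot is an assumption of this sketch, not something the slides specify):

```python
import random

def split_around(training_set, pivots):
    """Split a training set around a chosen pivot pair, respecting requirements 1-3."""
    pivot_a, pivot_b = pivots
    successes = [ex for ex in training_set if ex[-1] == 1]
    failures = [ex for ex in training_set if ex[-1] == -1]
    subset_a, subset_b = [pivot_a], [pivot_b]
    for ex in successes:
        if ex is pivot_a or ex is pivot_b:
            continue
        # step 2: each remaining success example joins the pivot it is closer to
        if closeness(ex, pivot_a) >= closeness(ex, pivot_b):
            subset_a.append(ex)
        else:
            subset_b.append(ex)
    # step 3: all failure examples are inserted in both subsets
    return subset_a + failures, subset_b + failures

def closeness_split(training_set):
    """CLOSENESS-SPLIT with a randomly chosen LCP (the RC heuristic described later)."""
    successes = [ex for ex in training_set if ex[-1] == 1]
    return split_around(training_set, random.choice(least_closeness_pairs(successes)))
```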
15
Splitting Strategies (2) AN EXAMPLE: Training Set (training examples over conditions C1-C13 and conclusion D; success examples P1-P5)
16
Splitting Strategies (3) AN EXAMPLE: Splitting Tree (P1-P5: success examples, F: set of failure examples)
17
Splitting Strategies (4) AN EXAMPLE: A produced neurule
(-13.5) if venous-conc is slight (12.4),
venous-conc is moderate (8.2),
venous-conc is normal (8.0),
venous-conc is high (1.2),
blood-conc is moderate (11.6),
blood-conc is slight (8.3),
blood-conc is normal (4.4),
blood-conc is high (1.6),
arterial-conc is moderate (8.8),
arterial-conc is slight (-5.7),
cap-conc is moderate (8.4),
cap-conc is slight (4.5),
scan-conc is normal (8.4)
then disease is inflammation
18
Splitting Strategies (5) STRATEGY: ALTERN-SPLIT1
1. If all success examples or only failure examples are misclassified, use the closeness-based split.
2. If some of the success examples and none of the failure examples are misclassified, split the training set into two subsets: one containing the correctly classified success examples and one containing the misclassified success examples. Add all failure examples to both subsets.
3. If some (not all) of the success examples and some or all of the failure examples are misclassified, split the training set into two subsets: one containing the correctly classified success examples and the other the misclassified success examples. Add all failure examples to both subsets.
19
Splitting Strategies (6) STRATEGY: ALTERN-SPLIT2
1. If all success examples or only failure examples are misclassified, use the closeness-based split.
2. If some of the success examples and none of the failure examples are misclassified, split the training set into two subsets: one containing the correctly classified success examples and one containing the misclassified success examples. Add all failure examples to both subsets.
3. If some of the success examples and some of the failure examples are misclassified, use the closeness-based split.
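For illustration only, ALTERN-SPLIT2 could be sketched as follows (hypothetical helpers: `train` and `classify` stand for the Adaline training and recall procedures, and `closeness_split` is the sketch given earlier; this is not the paper's implementation):

```python
def altern_split2(initial_neurule, training_set):
    """ALTERN-SPLIT2: when only success examples are misclassified (but not all of them),
    split on correctly vs. incorrectly classified success examples; otherwise fall back
    to the closeness-based split (cases 1 and 3)."""
    trained, _ = train(initial_neurule, training_set)            # assumed training helper
    successes = [ex for ex in training_set if ex[-1] == 1]
    failures = [ex for ex in training_set if ex[-1] == -1]
    miss_s = [ex for ex in successes if classify(trained, ex) != 1]
    miss_f = [ex for ex in failures if classify(trained, ex) != -1]
    if miss_s and not miss_f and len(miss_s) < len(successes):   # case 2
        ok_s = [ex for ex in successes if ex not in miss_s]
        return ok_s + failures, miss_s + failures
    return closeness_split(training_set)                         # cases 1 and 3
```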
20
Splitting Strategies (7) LCP SELECTION HEURISTICS
Random Choice (RC) – Pick an LCP at random
Best Distribution (BD) – Choose the LCP that distributes the elements of the other LCPs into different subsets
Mean Closeness (MC) – Choose the LCP that creates subsets with the greatest mean closeness
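RC is trivial (see `closeness_split` above); as one further illustration, the MC heuristic could be sketched like this (hypothetical names, reusing `closeness`, `least_closeness_pairs` and `split_around` from the earlier sketches; scoring a candidate split by the sum of the two subsets' mean closeness is an assumption of this sketch):

```python
from itertools import combinations
from statistics import mean

def subset_mean_closeness(subset):
    """Mean pairwise closeness of the success examples in a subset."""
    successes = [ex for ex in subset if ex[-1] == 1]
    pairs = list(combinations(successes, 2))
    return mean(closeness(a, b) for a, b in pairs) if pairs else 0.0

def mc_split(training_set):
    """CLOSENESS-SPLIT with the Mean Closeness (MC) LCP selection heuristic."""
    successes = [ex for ex in training_set if ex[-1] == 1]
    candidates = [split_around(training_set, lcp)
                  for lcp in least_closeness_pairs(successes)]
    return max(candidates,
               key=lambda split: subset_mean_closeness(split[0]) + subset_mean_closeness(split[1]))
```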
21
Experimental Results (1) Comparing LCP Selection Heuristics

Dataset              | CLOSENESS-SPLIT  | ALTERN-SPLIT1    | ALTERN-SPLIT2
                     | RC    MC    BD   | RC    MC    BD   | RC    MC    BD
Monks1_train (124)   | 17    17    13   | 22    24    24   | 19    16    13
Monks2_train (169)   | 46    47    38   | 34    32    33   | 43    49    39
Monks3_train (122)   | 14    11    12   | 15    15    15   | 14    11    13
Tic-Tac-Toe (958)    | 26    26    24   | 44    41    40   | 43    41    38
Car (1728)           | 151   163   153  | 189   171   169  | 152   161   154
Nursery (12960)      | 830   839   823  | 1330  1382  1378 | 837   842   821
23
Experimental Results (2) Comparing Splitting Strategies

Dataset              | CLOSENESS-SPLIT  | ALTERN-SPLIT1    | ALTERN-SPLIT2
                     | RC    MC    BD   | RC    MC    BD   | RC    MC    BD
Monks1_train (124)   | 17    17    13   | 22    24    24   | 19    16    13
Monks2_train (169)   | 46    47    38   | 34    32    33   | 43    49    39
Monks3_train (122)   | 14    11    12   | 15    15    15   | 14    11    13
Tic-Tac-Toe (958)    | 26    26    24   | 44    41    40   | 43    41    38
Car (1728)           | 151   163   153  | 189   171   169  | 152   161   154
Nursery (12960)      | 830   839   823  | 1330  1382  1378 | 837   842   821
24
Experimental Results (3)
LCP Selection Heuristics
– BD performs better in most cases
– MC, although the most computationally expensive, is rather the worst
– RC, although the simplest, does quite well
Splitting Strategies
– CLOSENESS-SPLIT does better than the others
– ALTERN-SPLIT2 does better than ALTERN-SPLIT1
– The 'closeness' heuristic proves to be a good choice
25
Outline
Neurules: An overview
Neurules: Syntax and semantics
Production process - Splitting
Neurules and Generalization
Conclusions
26
Neurules and Generalization
Generalization is an important characteristic of NN-based systems
Neurules had never been tested for their generalization capabilities, due to the way they were used
We present here an investigation of their generalization capabilities, in comparison with the Adaline unit and the BPNN
We use the same data sets used for the comparison of the strategies
27
Experimental Results (1) Impact of LCP Selection Heuristics on Generalization

Dataset      | RC      | MC      | BD
Monks1       | 100%    | 100%    | 100%
Monks2       | 96.30%  | 96.99%  | 97.92%
Monks3       | 92.36%  | 93.52%  | 96.06%
Tic-Tac-Toe  | 98.85%  | 97.50%  | 98.12%
Car          | 94.44%  | 94.56%  | 94.50%
Nursery      | 99.63%  | 99.53%  | 99.52%
28
Experimental Results (2) Neurules Generalization vs Adaline and BPNN

Dataset      | Adaline Unit | Neurules | BPNN
Monks1       | 67.82%       | 100%     | 100%
Monks2       | 43.75%       | 97.92%   | 100%
Monks3       | 92.13%       | 96.06%   | 97.22%
Tic-Tac-Toe  | 61.90%       | 98.85%   | 98.23%
Car          | 78.93%       | 94.56%   | 95.72%
Nursery      | 82.26%       | 99.63%   | 99.63%
29
Experimental Results (3)
LCP Selection Heuristics Impact
– None of RC, MC, BD has a clearly better impact, but RC and BD seem to do better than MC
Neurules
– Do considerably better than the Adaline unit itself
– Do slightly worse than, but very close to, the BPNN
– Creating the BPNN is more time-consuming than creating a neurule base
30
Outline
Neurules: An overview
Neurules: Syntax and semantics
Production process - Splitting
Neurules and Generalization
Conclusions
31
Conclusions
The 'closeness' heuristic used in the neurule production process proves to be quite effective
The random choice selection heuristic does adequately well
Neurules generalize quite well
32
Future Plans
Compare 'closeness' with other known machine learning heuristics (e.g. distance-based heuristics)
Use neurules for rule extraction