Constructing a Large Node Chow-Liu Tree Based on Frequent Itemsets
Kaizhu Huang, Irwin King, Michael R. Lyu
Multimedia Information Processing Laboratory, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
{kzhuang, king, lyu}@cse.cuhk.edu.hk
ICONIP 2002, November 19, 2002, Orchid Country Club, Singapore
Outline
- Background: Probabilistic Classifiers, Chow-Liu Tree
- Motivation
- Large Node Chow-Liu Tree
- Experimental Results
- Conclusion
A Typical Classification Problem: given a set of symptoms, one wants to determine whether these symptoms indicate a particular disease.
Background: Probabilistic Classifiers
The classification function is defined as
c*(a_1, ..., a_n) = argmax_c P(C = c | A_1 = a_1, ..., A_n = a_n)
                  = argmax_c P(A_1 = a_1, ..., A_n = a_n | C = c) P(C = c) / P(A_1 = a_1, ..., A_n = a_n),
where the denominator P(A_1 = a_1, ..., A_n = a_n) is a constant for a given instance of A_1, A_2, ..., A_n. The joint probability P(A_1, ..., A_n | C) is not easily estimated from the dataset, so an assumption about the distribution has to be made, i.e., about the dependence or independence relationships among the variables.
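To make the decision rule concrete, here is a minimal Python sketch (not from the paper; `log_prior` and `log_likelihood` are hypothetical stand-ins for quantities estimated from the dataset):

```python
import math

def classify(instance, classes, log_prior, log_likelihood):
    """Return argmax_c P(C = c) * P(A_1, ..., A_n | C = c).

    The evidence P(A_1, ..., A_n) is a constant for a given
    instance, so it can be dropped from the arg-max.
    """
    best_class, best_score = None, -math.inf
    for c in classes:
        # Work in log space to avoid numerical underflow.
        score = log_prior[c] + log_likelihood(instance, c)
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```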
Background: Chow-Liu Tree (CLT), a tree dependence structure. Assumption: a dependence tree exists among the variables, given the class variable C, so the joint distribution factorizes as P(A_1, ..., A_n | C) = prod_i P(A_i | A_pa(i), C), where A_pa(i) is the parent of A_i in the tree.
Background: Chow-Liu Tree
Advantages:
- Comparable with some of the state-of-the-art classifiers.
- The tree structure gives it resistance to the over-fitting problem and a decomposition characteristic.
Disadvantages:
- It cannot model non-tree dependence relationships among attributes or variables.
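For reference, a minimal sketch of the classical Chow-Liu construction (Chow and Liu, 1968): estimate pairwise mutual information, then take a maximum-weight spanning tree. This is the textbook algorithm, not the paper's code; in the classification setting the quantities would be estimated conditionally on the class variable C.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(data, i, j):
    """Estimate I(A_i; A_j) from discrete instances given as tuples."""
    n = len(data)
    pij = Counter((row[i], row[j]) for row in data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    mi = 0.0
    for (a, b), c in pij.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), using raw counts.
        mi += (c / n) * math.log(c * n / (pi[a] * pj[b]))
    return mi

def chow_liu_tree(data, n_vars):
    """Maximum-weight spanning tree over mutual-information edge
    weights (Kruskal's algorithm with a union-find)."""
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i, j in combinations(range(n_vars), 2)),
                   reverse=True)
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # adding (i, j) keeps it a tree
            parent[ri] = rj
            tree.append((i, j))
    return tree
```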
Motivation
1. Fig. (b) can represent the same independence relationship as Fig. (a): given B and E, the variables A, C, and D are conditionally independent.
2. Fig. (b) is still a tree structure, and so inherits the advantages of a tree.
3. By combining several nodes, a large node tree structure can represent a non-tree structure. This motivates our Large Node Chow-Liu Tree approach.
Overview of Large Node Chow-Liu Tree (LNCLT)
Step 1. Draft the Chow-Liu tree: build the CL-tree of the dataset according to the CLT algorithm.
Step 2. Refine the Chow-Liu tree: refine it into a large node Chow-Liu tree based on combination rules, so that it represents the same independence relationship as the underlying structure.
Combination Rules
- Bounded cardinality: the cardinality of each large node should not be greater than a bound k.
- Frequent itemsets: each large node should be a frequent itemset.
- Father-son or sibling relationship: the nodes in a large node should stand in a father-son or sibling relationship.
Combination Rules (1): Bounded Cardinality
The cardinality of each large node (the number of nodes in the large node) should not be greater than a bound k. For example, if we set k to the number of attributes or variables, the LNCLT degenerates into a single large node, which loses all the merits of a tree.
Combination Rules (2): Frequent Itemsets
A frequent itemset is a set of attributes that frequently occur together. Food store example: in a food store, a customer who buys {bread} is very likely to also buy {butter}; thus {bread, butter} is called a frequent itemset. Frequent itemsets are candidate large nodes, since the attributes in a frequent itemset act just like one attribute: they frequently occur with each other at the same time.
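As a small illustration (not from the paper), the support of an itemset over a set of transactions, with the bread-and-butter example:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

transactions = [{"bread", "butter"}, {"bread", "butter", "milk"},
                {"bread"}, {"milk"}]
# {bread, butter} occurs in 2 of 4 transactions, so its support is 0.5.
print(support({"bread", "butter"}, transactions))
```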
Combination Rules (3): Father-son or Sibling Relationship
- Combining father-son or sibling nodes increases the data fitness of the tree structure on the dataset (proved in the paper).
- Combining father-son or sibling nodes keeps the graphical structure a tree.
- Combining nodes that are neither in a father-son nor in a sibling relationship may result in a non-tree structure. A sketch of these three checks follows below.
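Putting the three rules together, a minimal sketch; the interface (a `parent` map over the drafted tree, a pairwise sibling test for nodes inside a candidate) is an assumption, since the slides state the rules but not a data structure:

```python
def satisfies_rules(itemset, k, frequent_sets, parent):
    """Check the three combination rules for a candidate large node.

    parent maps each node to its parent in the drafted tree
    (None for the root). Names and the pairwise reading of
    rule 3 are illustrative, not taken from the paper.
    """
    # Rule 1: bounded cardinality.
    if len(itemset) > k:
        return False
    # Rule 2: the candidate must be a frequent itemset.
    if frozenset(itemset) not in frequent_sets:
        return False
    # Rule 3: every pair must be father-son or siblings.
    for a in itemset:
        for b in itemset:
            if a == b:
                continue
            father_son = parent.get(a) == b or parent.get(b) == a
            siblings = (parent.get(a) is not None
                        and parent.get(a) == parent.get(b))
            if not (father_son or siblings):
                return False
    return True
```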
Constructing the Large Node Chow-Liu Tree
1. Generate the frequent itemsets: call Apriori [AS94] to generate the frequent itemsets of size no greater than k, and record all the frequent itemsets together with their frequencies in a list L.
2. Draft the Chow-Liu tree: build the CL-tree of the dataset according to the CLT algorithm.
3. Combine nodes based on the combination rules: iteratively combine the frequent itemset with maximum frequency that satisfies the combination conditions (father-son or sibling relationship), until L is empty; a sketch of this step follows below.
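A sketch of step 3 under the same assumptions, reusing the hypothetical `satisfies_rules` from the previous sketch. For simplicity it checks the rules against the original drafted tree; the greedy order and the overlap filtering follow the worked example that comes next:

```python
def combine_nodes(freq_list, k, frequent_sets, parent):
    """Step 3: greedily merge frequent itemsets into large nodes.

    freq_list: (itemset, frequency) pairs from Apriori, consumed
    in order of decreasing frequency. Returns the chosen large nodes.
    """
    freq_list = sorted(freq_list, key=lambda p: p[1], reverse=True)
    large_nodes = []
    used = set()
    while freq_list:
        itemset, _ = freq_list.pop(0)
        if used & set(itemset):     # overlaps an earlier merge: filter out
            continue
        if satisfies_rules(itemset, k, frequent_sets, parent):
            large_nodes.append(frozenset(itemset))
            used |= set(itemset)
    return large_nodes
```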
Example: Constructing the LNCLT
Assume k is 2. After step 1 we get the frequent itemsets {A, B}, {A, C}, {B, C}, {B, E}, {B, D}, {D, E}, with f({B, C}) > f({A, B}) > f({B, E}) > f({B, D}) > f({D, E}), where f(*) denotes the frequency of a frequent itemset. (b) is the CLT drafted in step 2.
1. {A, C} does not satisfy the combination condition, so filter out {A, C}.
2. f({B, C}) is the largest and {B, C} satisfies the combination condition, so combine B and C, giving (c).
3. Filter out the frequent itemsets that overlap with {B, C}; only {D, E} is left.
4. {D, E} is a frequent itemset and satisfies the combination condition, so combine D and E, giving (d).
Experimental Setup
- Dataset: MNIST handwritten digit database (28x28 gray-level bitmaps); training set size: 60,000; test set size: 10,000.
- Environment: platform Windows 2000; development tool Visual C++ 6.0.
Experiments: Data Fitness Comparison
Experiments: Recognition Rate
Future Work
Evaluate our algorithm extensively on other benchmark datasets. Examine other combination rules.
Conclusion
A novel Large Node Chow-Liu Tree is constructed based on frequent itemsets. The LNCLT can partially overcome the disadvantages of the CLT, i.e., its inability to represent non-tree structures. We demonstrate theoretically and experimentally that our LNCLT model achieves better data fitness and better prediction accuracy.
Main References
[AS94] R. Agrawal and R. Srikant (1994). Fast algorithms for mining association rules. In Proceedings of VLDB 1994.
[Chow, Liu 1968] C. K. Chow and C. N. Liu (1968). Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14, pp. 462-467.
[Friedman 1997] N. Friedman, D. Geiger and M. Goldszmidt (1997). Bayesian network classifiers. Machine Learning, 29, pp. 131-161.
[Cheng 1997] J. Cheng, D. A. Bell and W. Liu (1997). Learning belief networks from data: an information theory based approach. In Proceedings of ACM CIKM '97.
[Cheng 2001] J. Cheng and R. Greiner (2001). Learning Bayesian belief network classifiers: algorithms and system. In E. Stroulia and S. Matwin (Eds.), AI 2001, LNAI 2056, pp. 141-151.
Q & A Thanks.