Mining Time-Changing Data Streams
Advisor: Dr. Hsu
Graduate: Yung-Chu Lin
2002/3/6, IDS Lab Seminar
Outline
- Motivation
- Objective
- Hoeffding Bounds
- The VFDT Algorithm
- The CVFDT Algorithm
- Window Size
- Time and Space Complexity
- Empirical Study
- Conclusion
- Opinion
Motivation
- The volume and time span of data accumulated for future use keep growing
- A real, large database is not a random sample drawn from a stationary distribution
Objective
- Solve the classification problem in the presence of concept drift
Hoeffding Bounds
- After n independent observations of a random variable r with range R, the Hoeffding bound guarantees, with confidence 1 - δ, that the true mean of r is within ε of the observed mean, where
  ε = sqrt( R^2 ln(1/δ) / (2n) )
- For an observed probability, R = 1; for an information gain over c classes, R = log2(c)
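For concreteness, a minimal Python sketch of the bound; the function name and the example values n = 300, δ = 1e-4, c = 2 are illustrative choices, not taken from the slides.

import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the true mean of a variable with the given
    range, estimated from n observations, is within this epsilon of the
    observed mean."""
    return math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))

# Illustrative values: n = 300 examples, delta = 1e-4, two classes.
print(hoeffding_bound(1.0, 1e-4, 300))             # R = 1 for an observed probability
print(hoeffding_bound(math.log2(2), 1e-4, 300))    # R = log2(c) for information gain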
Concept of VFDT Algorithm
- Initialize the Hoeffding tree HT
- Repeat:
  - Scan n_min examples (the number of new examples a leaf accumulates between split checks)
  - Compute the gain G(X) of every attribute at the leaf
  - Split the leaf on the best attribute, or keep waiting, based on the Hoeffding bound
The VFDT Algorithm
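The algorithm listing itself is not reproduced on this slide, so the following is only a rough Python sketch of the split test VFDT applies at a leaf; leaf.gains(), leaf.split_on(), leaf.n_examples and leaf.num_classes are hypothetical helpers standing in for the leaf's sufficient statistics.

import math

def maybe_split(leaf, delta=1e-4, tau=0.05):
    """Sketch of VFDT's split test at a leaf; leaf.gains(), leaf.split_on(),
    leaf.n_examples and leaf.num_classes are hypothetical helpers."""
    gains = sorted(leaf.gains().items(), key=lambda kv: kv[1], reverse=True)
    (best_attr, g1), (_, g2) = gains[0], gains[1]
    r = math.log2(leaf.num_classes)                       # range of information gain
    eps = math.sqrt(r * r * math.log(1 / delta) / (2 * leaf.n_examples))
    # Split if the best attribute beats the runner-up by more than eps,
    # or if eps has shrunk below the tie-breaking threshold tau.
    if g1 - g2 > eps or eps < tau:
        leaf.split_on(best_attr)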
Example for Algorithms (the classic PlayTennis data set)

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No
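As a worked example of the gain computation behind the split test, a short script that computes the information gain of Outlook on the PlayTennis table above; it simply applies the standard entropy definition to the table's values.

import math
from collections import Counter

# (Outlook, PlayTennis) pairs transcribed from the table above
data = [("Sunny","No"),("Sunny","No"),("Overcast","Yes"),("Rain","Yes"),
        ("Rain","Yes"),("Rain","No"),("Overcast","Yes"),("Sunny","No"),
        ("Sunny","Yes"),("Rain","Yes"),("Sunny","Yes"),("Overcast","Yes"),
        ("Overcast","Yes"),("Rain","No")]

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

labels = [y for _, y in data]
gain = entropy(labels)
for value in {"Sunny", "Overcast", "Rain"}:
    subset = [y for x, y in data if x == value]
    gain -= len(subset) / len(data) * entropy(subset)

print(round(gain, 3))   # ~0.247: Outlook is the most informative first split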
Concept of CVFDT Algorithm
- An extension of VFDT that adds the ability to detect and respond to changes in the example-generating process
- Does not need to learn a new model from scratch every time a new example arrives
- Periodically scans HT and the alternate trees, looking for internal nodes whose sufficient statistics indicate that some other attribute would now make a better split
- When an alternate subtree becomes more accurate, the old subtree is replaced by the new one
The CVFDT Algorithm
The CVFDTGrow Procedure
The ForgetExample and CheckSplitValidity Procedures
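The procedure listings are not reproduced on these slides. The sketch below only illustrates the idea behind CheckSplitValidity and the alternate-subtree swap; all node attributes and methods used here (gains(), split_attr, gain_range, n_examples, alternates, recent_error, start_alternate(), replace_with()) are hypothetical.

import math

def check_split_validity(node, delta=1e-4):
    """Re-examine an internal node's split using statistics from the current
    window; start an alternate subtree if another attribute now looks better."""
    gains = node.gains()
    best = max(gains, key=gains.get)
    eps = math.sqrt(node.gain_range ** 2 * math.log(1 / delta) / (2 * node.n_examples))
    if best != node.split_attr and gains[best] - gains[node.split_attr] > eps:
        node.start_alternate(best)   # grow a candidate subtree rooted at this node

def maybe_swap(node):
    """Replace the current subtree once an alternate becomes more accurate
    on recent examples (error rates here are window statistics)."""
    for alt in node.alternates:
        if alt.recent_error < node.recent_error:
            node.replace_with(alt)
            break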
Window Size
- One window size w will not be appropriate for every concept and every type of drift
- Shrink w when many of the nodes in HT become questionable at once, or in response to a rapid change in the data rate
- Increase w when there are few questionable nodes and the concept seems stable (see the sketch below)
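A possible heuristic is sketched below; the 20% threshold and the halving/doubling factors are arbitrary illustrations, not values from the paper.

def adjust_window(w, questionable_nodes, total_nodes, data_rate_jumped):
    """Illustrative heuristic only; thresholds and factors are placeholders."""
    if data_rate_jumped or (total_nodes and questionable_nodes / total_nodes > 0.2):
        return max(w // 2, 1_000)      # many nodes in doubt: react to the new concept faster
    if questionable_nodes == 0:
        return min(w * 2, 1_000_000)   # concept looks stable: a larger window sharpens the tree
    return w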
Time and Space Complexity
- VFDT: O(l_v · d · v · c)
- CVFDT: O(l_c · d · v · c) time, O(n · d · v · c) memory
- n: number of nodes in CVFDT's main tree and alternate trees; d: number of attributes; v: maximum number of values per attribute; c: number of classes; l_v, l_c: length of the longest path in the respective trees
Empirical Study
- Synthetic data
- Web data
Synthetic Data
- A hyperplane in d-dimensional space is the set of points x that satisfy
  w_1·x_1 + w_2·x_2 + ... + w_d·x_d = w_0,
  where x_i is the ith coordinate of x
- positive: w_1·x_1 + ... + w_d·x_d >= w_0
- negative: w_1·x_1 + ... + w_d·x_d < w_0
Synthetic Data (cont'd)
- Each weight is initialized to 0.2, except w_0, which is 0.25·d
- Substitute an example's coordinates into the left-hand side of the equation above to obtain a sum s
- |s| <= 0.1·w_0: positive; |s| <= 0.2·w_0: negative
- x_i ∈ [0, 1]
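A small generator sketch following this description: coordinates uniform in [0, 1], all weights 0.2, labels decided by the hyperplane test. The demo call passes a balanced threshold w_0 = 0.1·d so that both classes occur; with the slide's w_0 = 0.25·d and w_i = 0.2, the sum never reaches the threshold.

import random

def hyperplane_label(x, w, w0):
    """Positive iff w_1*x_1 + ... + w_d*x_d >= w_0."""
    return sum(wi * xi for wi, xi in zip(w, x)) >= w0

def make_stream(n, d=10, w0=None, seed=0):
    """Yield n (x, label) pairs with x_i uniform in [0, 1] and all w_i = 0.2.
    Concept drift would correspond to perturbing w (and w0) over time."""
    rng = random.Random(seed)
    w = [0.2] * d                           # per the slide: every weight starts at 0.2
    w0 = 0.25 * d if w0 is None else w0     # per the slide: w_0 = 0.25 * d
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        yield x, hyperplane_label(x, w, w0)

# Demo with a balanced threshold (0.1 * d) so both labels actually appear.
for x, label in make_stream(n=5, d=10, w0=1.0):
    print(label)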
Synthetic Data (cont'd)
- 5 million training examples
- δ = 0.0001, f = 20,000
- n_min = 300, τ = 0.05, w = 100,000
Synthetic Data (cont'd)
Synthetic Data (cont'd)
- CVFDT took 4.3 times longer than VFDT
- VFDT's average memory allocation over the course of the run was 23 MB, while CVFDT's was 16.5 MB
- The average number of nodes in VFDT's tree was 2696; in CVFDT's it was 677 (545 in the main tree, 132 in alternate trees)
Conclusion
- Introduced CVFDT, which learns accurate models from the most demanding high-speed, concept-drifting data streams
- CVFDT keeps its decision tree up-to-date with a sliding window of examples
Opinion
- Concept drift affects many data-mining techniques, not only classification
- Perhaps the ideas in this paper could be applied to our other techniques as well, such as association rule mining and clustering algorithms