Classification, Regression and Other Learning Methods CS240B Presentation Peter Huang June 4, 2014
Outline Motivation Introduction to Data Streams and Concept Drift Survey of Ensemble Methods: Bagging: KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification Weighted Bagging: KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers Adaptive Boosting: KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams Summary Conclusion
Motivation A significant amount of recent research has focused on mining data streams. Real-world applications include financial data analysis, credit card fraud detection, network monitoring, sensor networks, and many others. Algorithms for mining data streams must overcome challenges not seen in traditional data mining, particularly performance constraints and unending data sets. Traditional algorithms must be made non-blocking, fast and light, and must adapt to data-stream issues.
Data Streams A data stream is a continuous stream of data items, in the form of tuples or vectors, that arrive at a high rate and are subject to unknown changes such as concept drift or shift. Algorithms that process data streams must be:
Iterative – read data sequentially
Efficient – fast and light in computation and memory
Single-pass – account for the surplus of data
Adaptive – account for concept drift
Any-time – able to provide the best answer continuously
Data Stream Classification Various types of methods are used to classify data streams:
Single classifiers – sliding window on recent data (fixed or variable); Naive Bayes, C4.5, RIPPER; support vector machines, neural networks; K-NN, linear regression
Decision trees – BOAT algorithm; VFDT, Hoeffding tree; CVFDT
Ensemble methods – bagging, boosting, random forest
Concept Drift Concept drift is an implicit property of data streams: the concept may change or drift over time due to sudden or gradual changes of the external environment. Mining changes is one of the core issues of data mining, useful in many real-world applications. Two types of concept change: gradual and shift. Methods to adapt to concept drift:
Ensemble methods – majority or weighted voting
Exponential forgetting – discount old data by a forgetting factor (see the sketch below)
Replacement methods – create a new classifier
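As a concrete illustration of exponential forgetting, here is a minimal Python sketch (my own, not from the slides or the cited papers): a running estimate in which every old item is discounted by a forgetting factor lam, so the statistic tracks a drifting concept. The function name and parameter values are illustrative.

```python
def exponential_forgetting(stream, lam=0.97):
    """Exponentially weighted running mean: old items decay by factor lam
    (0 < lam < 1), so recent data dominates and the estimate can track drift."""
    s, w = 0.0, 0.0
    for x in stream:
        s = lam * s + x    # decayed sum of item values
        w = lam * w + 1.0  # decayed item count
        yield s / w        # current exponentially weighted mean
```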
Types of Concept Drift Two types of concept change: gradual and shift. Shift: an abrupt change in the mean, such as a class or distribution change. Gradual: a slow change in mean and variance, such as trends.
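The two drift types can be made concrete with a toy stream generator, a sketch of my own rather than anything from the papers; stream_with_shift and stream_with_drift are illustrative names.

```python
import random

def stream_with_shift(n, change_at):
    # Concept shift: the mean jumps abruptly at one point in the stream
    for i in range(n):
        mean = 0.0 if i < change_at else 3.0
        yield random.gauss(mean, 1.0)

def stream_with_drift(n):
    # Gradual drift: mean and variance both move slowly over the stream
    for i in range(n):
        yield random.gauss(3.0 * i / n, 1.0 + i / n)
```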
Ensemble Classifiers Ensemble methods are one classification approach that naturally handles concept drift. They combine the predictions of multiple base models, each learned using a base learner. It is well known that combining multiple models consistently outperforms individual models. Either traditional averaging or weighted averaging is used to classify data stream items, as the sketch below illustrates.
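A minimal Python sketch of the two combination schemes; the function names are illustrative, and "averaging" is realized here as voting over class labels.

```python
def majority_vote(predictions):
    # Traditional combination: every base model gets one equal vote
    return max(set(predictions), key=predictions.count)

def weighted_vote(predictions, weights):
    # Weighted combination: each vote is scaled by the model's weight
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# Three models predict [1, 0, 1]; the weights favor the dissenting model.
print(majority_vote([1, 0, 1]))                    # -> 1
print(weighted_vote([1, 0, 1], [0.2, 0.9, 0.3]))   # -> 0
```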
Survey of Ensemble Methods Bagging: KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification Weighted Bagging: KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers Adaptive Boosting: KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams
KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification Approaches the problem of large-scale or streaming classification by building a committee, or ensemble, of classifiers, each constructed on a subset of the available data points. Essentially introduces the concept of ensemble classification for data streams. The traditional scheme of averaging predictions is used. Later improved in KDD ’03, KDD ’04, and other work.
Ensemble of Classifiers Fixed ensemble size, up to around 20-25 members. A new classifier replaces the lowest-quality classifier in the existing ensemble. The building blocks are decision trees constructed using C4.5. An operational parameter is whether or not to prune the trees; in the experiments, pruning decreased overall accuracy because of over-fitting. Adapts to concept drift as its membership changes over time; adaptation follows a gradual, Gaussian-CDF-like curve.
Streaming Ensemble Pseudocode
while more data points are available:
    read d points, create training set D
    build classifier C_i using D
    evaluate C_{i-1} on D
    evaluate all classifiers in ensemble E on D
    if E not full:
        insert C_{i-1} into E
    else if Quality(C_{i-1}) > Quality(E_j) for some j:
        replace E_j with C_{i-1}
Quality is measured by the ability to classify points in the current test set.
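Below is a minimal Python sketch of this loop, under stated assumptions: scikit-learn's CART decision trees stand in for the paper's C4.5 trees, and Quality() is plain accuracy on the newest block. ENSEMBLE_SIZE, sea_step, and sea_predict are illustrative names. As in the pseudocode, the previous tree is judged on a block it never trained on.

```python
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

ENSEMBLE_SIZE = 25  # the slides report roughly 20-25 members

def quality(clf, X, y):
    # Quality = accuracy on the current block of points
    return clf.score(X, y)

def sea_step(ensemble, pending, X_block, y_block):
    """One pass of the SEA loop: train C_i on the new block, then judge the
    previous tree C_{i-1} against the current ensemble on unseen data."""
    new_clf = DecisionTreeClassifier().fit(X_block, y_block)
    if pending is not None:
        if len(ensemble) < ENSEMBLE_SIZE:
            ensemble.append(pending)
        else:
            scores = [quality(c, X_block, y_block) for c in ensemble]
            worst = scores.index(min(scores))
            if quality(pending, X_block, y_block) > scores[worst]:
                ensemble[worst] = pending  # replace the weakest member
    return new_clf  # this tree becomes 'pending' for the next block

def sea_predict(ensemble, X):
    # SEA combines members by simple (unweighted) majority vote
    votes = [c.predict(X) for c in ensemble]
    return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]
```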
Replacement of Existing Classifiers (illustration) Existing ensemble of classifiers with qualities 78, 84, 75, 80, 70 – average ensemble quality 77.4. A newly trained classifier of quality 85 replaces the weakest member (70), giving the new ensemble 78, 84, 75, 80, 85 – average quality 80.4. The next trained classifier, with quality 68, is worse than every member and is discarded.
Experimental Results: Adult Data
Experimental Results: SEER Data
Experimental Results: Web Data
Experimental Results: Concept Drift
Survey of Ensemble Methods Bagging: KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification Weighted Bagging: KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers Adaptive Boosting: KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams
KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers A general framework for mining concept-drifting data streams using an ensemble of weighted classifiers. Essentially improves ensemble classification by using weighted averaging instead of traditional averaging. A classifier's weight is inversely related to its expected error (MSE): w_i = MSE_r - MSE_i, where MSE_r is the error of a classifier that predicts at random. Eliminates the effect of examples representing outdated concepts by assigning such classifiers lower weight.
Ensemble of Classifiers Fixed ensemble size: the top-K weighted classifiers are kept. New classifiers replace lower-weighted classifiers in the existing ensemble. The building blocks are decision trees constructed using C4.5. Adapts to concept drift by removing incorrect classifiers and/or reducing their weight.
Streaming Ensemble Pseudocode
while more data points are available:
    read d points, create training set S
    build classifier C' from S
    compute error rate of C' via cross-validation on S
    derive weight w' for C': w' = MSE_r - MSE'
    for each classifier C_i in C:
        apply C_i on S to derive MSE_i
        compute weight w_i = MSE_r - MSE_i
    C <- top K weighted classifiers in C ∪ {C'}
    return C
MSE_r is the mean squared error of a classifier that predicts at random; a classifier no better than random gets weight <= 0.
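A minimal Python sketch of this weighted-bagging update, under stated assumptions: scikit-learn trees stand in for C4.5, and the new classifier's MSE is estimated directly on the newest block rather than by the paper's cross-validation. K, mse_of, mse_random, and update are illustrative names.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

K = 8  # keep only the top-K weighted classifiers

def mse_of(clf, X, y):
    # MSE_i: mean squared error of the probability given to the true class
    # (assumes every label in y was seen when clf was trained)
    proba = clf.predict_proba(X)
    classes = list(clf.classes_)
    p_true = proba[np.arange(len(y)), [classes.index(c) for c in y]]
    return float(np.mean((1.0 - p_true) ** 2))

def mse_random(y):
    # MSE_r: error of a classifier that predicts by the class distribution
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p) ** 2))

def update(classifiers, X_block, y_block):
    """Train C' on the new block, reweight all candidates, keep the top K."""
    candidates = classifiers + [DecisionTreeClassifier().fit(X_block, y_block)]
    mse_r = mse_random(y_block)
    weights = [mse_r - mse_of(c, X_block, y_block) for c in candidates]
    top = sorted(range(len(candidates)), key=lambda i: weights[i], reverse=True)[:K]
    # weight <= 0 means "no better than random", so clamp it out of the vote
    return [candidates[i] for i in top], [max(weights[i], 0.0) for i in top]
```

The returned weights can then drive a weighted vote like the weighted_vote sketch shown earlier.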
Data Expiration Problem The goal is to identify, in a timely manner, the data in the training set that are no longer consistent with the current concepts. A straightforward approach discards data once they become old, that is, after a fixed period of time T has passed since their arrival. If T is large, the training set is likely to contain outdated concepts, which reduces classification accuracy; if T is small, the training set may not have enough data, and the learned model will likely carry a large variance due to over-fitting.
Expiration Problem Illustrated
Replacement of Existing Classifiers (illustration) Existing stream of classifiers with MSE 12, 15, 19, 21, 10 (newer classifiers on the right; numbers represent MSE error), plus a new block of training examples. The classifier trained on that block has MSE 13 and replaces the worst member (MSE 21), giving the new ensemble 12, 15, 19, 13, 10.
Experimental Results: Average Error
Experimental Results: Error Rates
Experimental Results: Concept Drift
Survey of Ensemble Methods Bagging: KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification Weighted Bagging: KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers Adaptive Boosting: KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams
KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams A novel adaptive boosting ensemble method to solve the problem of continuous mining of data streams. Essentially improves ensemble classification by boosting incorrectly classified samples: misclassified samples are given weight w_i = (1 - e_j)/e_j, where e_j is the ensemble's error rate on the current block. The traditional scheme of averaging predictions is used.
Ensemble of Classifiers Fixed ensemble size: the M most recent classifiers are kept. Boosting the weight of incorrectly classified samples provides a number of formal guarantees on performance. The building blocks are decision trees constructed using C4.5. Adapts to concept drift by change detection, restarting the ensemble from scratch when a change is detected.
Streaming Ensemble Pseudocode E b = {C 1,…,C m }, B j = {(x 1,y 1 ),…,(x n,y n )} while more data points are available read n points, create training block B j compute ensemble prediction on each n point i change detection: E b {} if change detected if E b <> {}: compute error rate of E b on B j set new samples weight w i = (1 – e j )/e j else: w i = 1 learn new classifier C m+1 from B j update Eb C m+1, remove C 1 if m = M
Change Detection To detect change, test the null hypothesis H0 (the concept is unchanged) against the alternative hypothesis H1 (the concept has changed). A two-stage method is used: first a quick significance check, then a hypothesis test to confirm the change.
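The slide does not give the exact statistics, so the following Python sketch is only a plausible stand-in, not the paper's method: stage one is a cheap screen on the jump in error rate, stage two a one-sided two-proportion z-test of H0 (error rate unchanged) against H1 (error rate increased). All names and thresholds here are assumptions.

```python
import math

def change_detected(hist_errors, hist_n, block_errors, block_n,
                    screen_margin=0.05, z_crit=2.326):  # z_crit ~ alpha = 0.01
    p_hist = hist_errors / hist_n     # historical error rate
    p_block = block_errors / block_n  # error rate on the newest block
    # Stage 1: significance screen -- ignore small fluctuations outright
    if p_block - p_hist < screen_margin:
        return False
    # Stage 2: one-sided two-proportion z-test of H0 vs H1
    p_pool = (hist_errors + block_errors) / (hist_n + block_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / hist_n + 1 / block_n))
    return se > 0 and (p_block - p_hist) / se > z_crit
```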
Replacement of Existing Classifiers (illustration) Existing ensemble with accuracies 85, 88, 90, 87, 84 (newer classifiers on the right; numbers represent accuracy). A new classifier trained on the block reaches 78; after boosting the misclassified samples, the boosted classifier reaches 86. The oldest member (85) is dropped, giving the boosted ensemble 88, 90, 87, 84, 86.
Experimental Results: Concept Drift
Experimental Results: Comparison
Experimental Results: Time and Space
Summary Bagging – KDD ’01: A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification – introduced the bagging ensemble for data streams. Weighted Bagging – KDD ’03: Mining Concept-Drifting Data Streams using Ensemble Classifiers – adds weighting to improve accuracy and handle drift. Adaptive Boosting – KDD ’04: Fast and Light Boosting for Adaptive Mining of Data Streams – adds boosting to further improve accuracy and speed.
Thank You. Questions?
Sources
Adams, Niall M., et al. "Efficient Streaming Classification Methods." 2010.
Street, W. Nick, and Yong Seog Kim. "A Streaming Ensemble Algorithm (SEA) for Large-Scale Classification." Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2001.
Wang, Haixun, et al. "Mining Concept-Drifting Data Streams Using Ensemble Classifiers." Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2003.
Chu, Fang, and Carlo Zaniolo. "Fast and Light Boosting for Adaptive Mining of Data Streams." Advances in Knowledge Discovery and Data Mining. Springer Berlin Heidelberg, 2004. 282-292.