
1 Universal Learning over Related Distributions and Adaptive Graph Transduction. Erheng Zhong†, Wei Fan‡, Jing Peng*, Olivier Verscheure‡, and Jiangtao Ren†. †Sun Yat-Sen University, ‡IBM T. J. Watson Research Center, *Montclair State University. 1. Go beyond transfer learning to sample selection bias and uncertainty mining. 2. Unified framework. 3. One single solution: supervised case.

2 Standard Supervised Learning: training (labeled) and test (unlabeled) data both come from the New York Times; the classifier reaches 85.5% accuracy.

3 Sample Selection Bias: training (labeled) data come from the New York Times, but the test (unlabeled) data have a different word-vector distribution (e.g., August: a lot about the typhoon in Taiwan; September: a lot about the US Open); accuracy drops to 78.5%.

4 Uncertainty Data Mining
Training data:
– Both feature vectors and class labels contain noise (usually Gaussian)
– Common for data collected from sensor networks
Test data:
– Feature vectors contain noise

5 Summary
– Traditional supervised learning: training and test data follow the identical distribution.
– Transfer learning: training and test data come from different domains.
– Sample selection bias: same domain, but the distributions differ (e.g., data missing not at random).
– Uncertain data mining: the data contain noise.
In other words, in all three cases the training and test data come from different distributions. Traditionally, each problem is handled separately.

6 Main Challenge: could these different but similar problems be solved under a uniform framework, with one single solution? Universal Learning.

7 $\mathcal{A}$ is the collection of subsets of $X$ that are the support of some hypothesis in a fixed hypothesis space $\mathcal{H}$ ([Blitzer et al., 2008]). The distance between two distributions $D$ and $D'$ is $d_{\mathcal{A}}(D, D') = 2\,\sup_{A \in \mathcal{A}} \bigl| \Pr_{D}[A] - \Pr_{D'}[A] \bigr|$ ([Blitzer et al., 2008]).

8 How to Handle Universal Learning? Most traditional classifiers cannot guarantee performance when the training and test distributions differ. Could we find one classifier that works under a weaker assumption? Graph transduction?

9 Advantage of Graph Transduction: it relies on the weaker assumption that the decision boundary lies in the low-density regions of the unlabeled data (illustrated by a two-Gaussians vs. two-arcs example).
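To make the low-density assumption concrete, here is a minimal sketch (mine, not the paper's code) that runs graph-based label propagation on a two-arcs style toy set with scikit-learn's LabelSpreading; the dataset, kernel, and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

# Two interleaving arcs: the class boundary lies in a low-density band.
X, y = make_moons(n_samples=300, noise=0.08, random_state=0)

# Hide almost all labels; -1 marks unlabeled points, one label kept per class.
labels = np.full(y.shape, -1)
for c in (0, 1):
    labels[np.where(y == c)[0][0]] = c

model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(X, labels)

# Transductive predictions for the points that had no label.
acc = (model.transduction_ == y)[labels == -1].mean()
print(f"accuracy on unlabeled points: {acc:.3f}")
```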

10 Just Graph Transduction? Sample selection: which samples? The "un-smooth label" problem (more labeled examples in low-density regions) and the "class imbalance" problem ([Wang et al., 2008]) may mislead the decision boundary into passing through high-density regions (illustration: the bottom part is closest to a red square, and there are more red squares than blue squares).

11 Maximum Margin Graph Transduction: in margin terms, unlabeled data with low margin are likely misclassified (illustration: bad sample vs. good sample).
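One way to read "unlabeled-data margin" (my own illustration; the paper's exact definition may differ) is the average gap between the two largest predicted class scores over the unlabeled points:

```python
import numpy as np

def unlabeled_margin(proba: np.ndarray) -> float:
    """Average margin over unlabeled points.

    proba: (n_unlabeled, n_classes) predicted class probabilities.
    A point's margin is the gap between its two largest scores; low-margin
    points sit near the decision boundary and are easily misclassified.
    """
    top2 = np.sort(proba, axis=1)[:, -2:]           # two largest scores per row
    return float(np.mean(top2[:, 1] - top2[:, 0]))
```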

12 Main Flow (a sketch of this loop follows below):
– Predict the labels of the unlabeled data
– Lift the unlabeled-data margin
– Maximize the unlabeled-data margin
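A hedged sketch of this loop (helper names candidate_subsets, propagate_labels, and unlabeled_margin are placeholders, not the authors' API): each round selects the labeled subset whose propagation maximizes the unlabeled-data margin, and the per-round predictions are averaged.

```python
import numpy as np

def adaptive_graph_transduction(X_l, y_l, X_u, candidate_subsets,
                                propagate_labels, unlabeled_margin, n_rounds=10):
    """Placeholder sketch of the flow, not the published implementation.

    candidate_subsets is assumed to yield different random index subsets of the
    labeled data on each call; propagate_labels returns an (n_unlabeled,
    n_classes) probability matrix for the unlabeled points.
    """
    ensemble = []
    for _ in range(n_rounds):
        # Step 1: sample selection that maximizes the unlabeled-data margin.
        best = max(candidate_subsets(X_l, y_l),
                   key=lambda idx: unlabeled_margin(
                       propagate_labels(X_l[idx], y_l[idx], X_u)))
        # Step 2: label propagation from the selected labeled subset.
        ensemble.append(propagate_labels(X_l[best], y_l[best], X_u))
    # Step 3: averaging ensemble of the per-round estimates lifts the margin.
    return np.mean(ensemble, axis=0).argmax(axis=1)
```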

13 Properties: the error of Adaptive Graph Transduction can be bounded by the sum of (i) the training error, in terms of approximating the ideal hypothesis, (ii) the error of the ideal hypothesis, and (iii) the empirical distance between the training and test distributions.
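For reference, the generic adaptation bound of this shape from [Blitzer et al., 2008] reads as follows (modulo the paper's exact notation and constants):

```latex
% test error <= empirical training error
%             + half the empirical A-distance between the two distributions
%             + error of the ideal joint hypothesis
\varepsilon_T(h) \;\le\; \hat{\varepsilon}_S(h)
  \;+\; \tfrac{1}{2}\,\hat{d}_{\mathcal{A}}(\mathcal{D}_S,\mathcal{D}_T)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h^\ast \in \mathcal{H}}\bigl(\varepsilon_S(h^\ast) + \varepsilon_T(h^\ast)\bigr)
```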

14 Properties: if one classifier has a larger unlabeled-data margin, its training error is smaller (recall the previous theorem); an averaging ensemble is likely to achieve a larger margin.

15 Experiment – Data Set (Transfer Learning)
– Reuters-21578: 21,578 Reuters news articles organized in a category hierarchy (org → org.subA, org.subB; place → place.subA, place.subB); the sub-categories are split into source and target domains.
– SyskillWebert: HTML source of web pages plus the ratings of a user on those pages; the sub-topics (Sheep, Biomedical, Bands-recording, Goats) are split into source and target domains.
Procedure: first fill up the "GAP" between source and target, then use a kNN classifier to do the classification.

16 Experiment – Data Set
Sample Selection Bias Correction
– UCI data sets: Ionosphere, Diabetes, Haberman, WDBC
– 1. Randomly select 50% of the features, then sort the data set according to each selected feature;
– 2. Take the top instances from every sorted list as the training set (illustrated with Feature 1 and Feature 2).
Uncertainty Mining
– Kent Ridge Biomedical Repository: high-dimensional, low-sample-size (HDLSS) data
– Generate two different Gaussian noises and add them to the training and test sets, respectively.
(A sketch of both constructions follows below.)
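A minimal sketch of the two constructions above (my own reading of the slide, not the authors' released code); the feature fraction and the number of top instances per sorted list are illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_training_split(X, frac_features=0.5, n_top=50):
    """Sample-selection-bias setup: sort the data along each of a random 50%
    subset of features and pool the top instances of every sorted list into a
    (biased) training set; the remaining instances form the test set."""
    d = X.shape[1]
    chosen = rng.choice(d, size=max(1, int(frac_features * d)), replace=False)
    train_idx = np.unique(np.concatenate(
        [np.argsort(X[:, j])[:n_top] for j in chosen]))
    test_idx = np.setdiff1d(np.arange(X.shape[0]), train_idx)
    return train_idx, test_idx

def add_gaussian_noise(X, sigma):
    """Uncertainty-mining setup: perturb the feature vectors with Gaussian
    noise; different sigmas can be used for the training and test portions."""
    return X + rng.normal(scale=sigma, size=X.shape)
```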

17 Experiment – Baseline Methods
– Original graph transduction algorithm ([Zhu, 2005]): uses the entire training data set; a variation uses a randomly selected sample whose size equals the one chosen by MarginGraph.
– CDSC, a transfer learning approach ([Ling et al., 2008]): finds a mapped space that optimizes a consistency measure between the out-of-domain supervision and the in-domain intrinsic structure.
– BRSD-BK/BRSD-DB, bias correction approaches ([Ren et al., 2008]): discover structure and re-balance using unlabeled data.

18 Performance – Transfer Learning

19 Performs best on 5 of 6 data sets!


21 Performance – Sample Selection Bias. Accuracy: best on all 4 data sets! AUC: best on 2 of 4 data sets.

22 Performance – Uncertainty Mining. Accuracy: best on all 4 data sets! AUC: best on all 4 data sets!

23 Margin Analysis. MarginBase is the base classifier of MarginGraph in each iteration. LowBase is a "minimal-margin classifier" that selects samples so as to build a classifier with minimal unlabeled-data margin. LowGraph is the averaging ensemble of LowBase.

24 Maximal margin is better than minimal margin; an ensemble is better than any single classifier.

25 Conclusion
– Covers different formulations in which the training and test sets are drawn from related but different distributions.
Flow:
– Step 1: Sample selection – select labeled data from the different distribution so as to maximize the unlabeled-data margin
– Step 2: Label propagation – label the unlabeled data
– Step 3: Ensemble – further lift the unlabeled-data margin
Code and data available from http://www.cs.columbia.edu/~wfan


