Improving the Graph Mincut Approach to Learning from Labeled and Unlabeled Examples Avrim Blum, John Lafferty, Raja Reddy, Mugizi Rwebangira Carnegie Mellon
Outline
- We often have little labeled data but lots of unlabeled data.
- Graph mincut: one approach, based on the belief that most "close" examples have the same classification.
- Problem: it does not say where it is most confident.
- Our approach: add noise to the edges to extract confidence scores.
Learning using Graph Mincuts: Blum and Chawla (2001)
Construct a Graph
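The graph-construction step can be sketched as follows. This is a minimal illustration, not the paper's exact construction: it assumes binary feature vectors, Hamming distance, and an edge whenever two examples are within a chosen threshold `eps` (the notion of "close" examples).

```python
def hamming(x, y):
    """Number of positions where two binary vectors differ."""
    return sum(a != b for a, b in zip(x, y))

def build_graph(examples, eps):
    """Return an adjacency dict {i: set of j} connecting close examples.

    'eps' is an illustrative closeness threshold, not a value from the paper.
    """
    adj = {i: set() for i in range(len(examples))}
    for i in range(len(examples)):
        for j in range(i + 1, len(examples)):
            if hamming(examples[i], examples[j]) <= eps:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Toy data: two tight pairs of binary vectors.
examples = [[0, 0, 0], [0, 0, 1], [1, 1, 1], [1, 1, 0]]
graph = build_graph(examples, eps=1)
```

With `eps=1`, only the two within-pair edges appear; a larger threshold (or a k-nearest-neighbor rule) produces a denser graph.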
Add source (+) and sink (−)
Obtain the mincut
Classification: each unlabeled example gets the label of the side of the cut it falls on (+ or −)
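The pipeline above (attach a source to the positive labeled examples and a sink to the negatives, find the minimum s-t cut, read off labels from the two sides) can be sketched end to end. This is an illustrative implementation using Edmonds-Karp max-flow on a tiny toy graph; function names and the example edges are assumptions, not from the paper.

```python
from collections import deque, defaultdict

def mincut_classify(n, edges, positives, negatives):
    """edges: list of (u, v, capacity) similarity edges on nodes 0..n-1.
    Returns a '+'/'-' label for every node via a minimum s-t cut."""
    S, T = n, n + 1                       # virtual source and sink
    INF = float('inf')
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        cap[(v, u)] += c                  # undirected similarity edge
        adj[u].add(v)
        adj[v].add(u)

    for u, v, c in edges:
        add(u, v, c)
    for p in positives:                   # infinite-capacity ties keep the
        add(S, p, INF)                    # labeled examples on their side
    for q in negatives:
        add(q, T, INF)

    # Edmonds-Karp: push flow along shortest augmenting paths until none remain.
    while True:
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[e] for e in path)     # bottleneck capacity
        for u, v in path:
            cap[(u, v)] -= f
            cap[(v, u)] += f

    # Nodes still reachable from S in the residual graph are on the '+' side.
    reach, stack = {S}, [S]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reach and cap[(u, v)] > 0:
                reach.add(v)
                stack.append(v)
    return ['+' if i in reach else '-' for i in range(n)]

# Toy chain 0-1-2-3: node 0 labeled +, node 3 labeled -.
# The cheapest cut is the weight-1 edge (1, 2).
labels = mincut_classify(4, [(0, 1, 2), (1, 2, 1), (2, 3, 2)], [0], [3])
```

The cut severs the weakest edge between the labeled regions, so nodes 0 and 1 end up positive and nodes 2 and 3 negative.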
Goal
- Obtain a measure of confidence for each classification.

Our approach
- Add random uniform noise to the edges.
- Run mincut several times.
- For each unlabeled example, take a majority vote.
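The randomized procedure above can be sketched as follows. For clarity the inner mincut is brute force over all consistent labelings (fine for the tiny toy graph; the real algorithm uses a max-flow mincut), and the noise range, run count, and function names are illustrative assumptions, not values from the paper.

```python
import itertools
import random

def brute_force_mincut(n, edges, positives, negatives):
    """Cheapest 0/1 labeling consistent with the labeled examples.

    Stand-in for a max-flow mincut solver; exponential in n, toy use only.
    """
    best, best_cut = None, float('inf')
    for bits in itertools.product([0, 1], repeat=n):
        if any(bits[p] != 1 for p in positives):
            continue
        if any(bits[q] != 0 for q in negatives):
            continue
        cut = sum(w for u, v, w in edges if bits[u] != bits[v])
        if cut < best_cut:
            best, best_cut = bits, cut
    return best

def vote_confidence(n, edges, positives, negatives, runs=100, noise=0.5, seed=0):
    """Fraction of noisy-mincut runs that label each node '+'.

    Each run perturbs every edge weight with uniform noise, recomputes
    the mincut, and the votes are averaged into a confidence score.
    """
    rng = random.Random(seed)
    votes = [0] * n
    for _ in range(runs):
        noisy = [(u, v, w + rng.uniform(0, noise)) for u, v, w in edges]
        labels = brute_force_mincut(n, noisy, positives, negatives)
        for i in range(n):
            votes[i] += labels[i]
    return [v / runs for v in votes]

# Node 1 sits symmetrically between + node 0 and - node 2, so its vote
# should be split; the labeled nodes stay at 1.0 and 0.0.
conf = vote_confidence(3, [(0, 1, 1.0), (1, 2, 1.0)], [0], [2])
```

A single mincut would put node 1 on one side arbitrarily; the vote fraction exposes that the classifier has no real confidence there, which is exactly the signal the plain mincut lacks.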
Experiments
- Digits data set (each digit is a 16 × 16 binary array)
- 100 labeled examples
- 3900 unlabeled examples
- 100 runs of mincut
Results
Conclusions
- 3% error on 80% of the data (the examples with the most confident votes).
- In contrast, standard mincut gives 6% error on all the data.

Future work
- Conduct further experiments on other data sets.
- Explore, from a theoretical perspective, the properties of the distribution obtained by selecting minimum cuts in this way.
References
- Avrim Blum and Shuchi Chawla. Learning from Labeled and Unlabeled Data using Graph Mincuts. ICML 2001.
- Jon Kleinberg and Éva Tardos. Approximation Algorithms for Classification Problems with Pairwise Relationships: Metric Labeling and Markov Random Fields. FOCS 1999.