Active Learning for Probabilistic Models
Lee Wee Sun
Department of Computer Science, National University of Singapore
leews@comp.nus.edu.sg
LARC-IMS Workshop
Probabilistic Models in Networked Environments
Probabilistic graphical models are powerful tools in networked environments.
Example task: given some labeled nodes, what are the labels of the remaining nodes?
We may also need to learn the parameters of the model (discussed later).
[Figure: labeling university web pages with a CRF; unlabeled nodes marked "?"]
Active Learning
Given a budget of k queries, which nodes should we query to maximize performance on the remaining nodes?
What are reasonable performance measures with provable guarantees for greedy methods?
[Figure: labeling university web pages with a CRF; unlabeled nodes marked "?"]
Entropy
First consider a non-adaptive policy.
Chain rule of entropy: H(Y) = H(Y_1) + H(Y_2 | Y_1), where Y_1 is the set of selected variables and Y_2 the remaining ones.
H(Y) is constant, so maximizing the entropy H(Y_1) of the selected variables minimizes the conditional entropy H(Y_2 | Y_1) of the target.
Greedy method: given the already selected set S, add the variable Y_i that maximizes the conditional entropy H(Y_i | Y_S).
Near optimality: the greedy selection achieves at least a (1 - 1/e) fraction of the optimal entropy, because of the submodularity of entropy (see the sketch below).
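A minimal Python sketch of this non-adaptive greedy step, assuming the joint distribution over the node labels is available as an explicit probability table; the function and variable names here are illustrative, not from the original slides:

```python
import numpy as np

def subset_entropy(joint, S):
    """Shannon entropy H(Y_S) of the marginal over the variables in S."""
    if not S:
        return 0.0
    axes = tuple(i for i in range(joint.ndim) if i not in S)
    p = joint.sum(axis=axes).ravel()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def greedy_entropy_selection(joint, k):
    """Greedily pick k variables, each time adding the one with the largest
    conditional entropy H(Y_i | Y_S) = H(Y_{S+i}) - H(Y_S)."""
    S = []
    for _ in range(k):
        base = subset_entropy(joint, S)
        gains = {i: subset_entropy(joint, S + [i]) - base
                 for i in range(joint.ndim) if i not in S}
        S.append(max(gains, key=gains.get))
    return S

# toy example: a random joint distribution over 4 binary variables
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2, 2))
joint /= joint.sum()
print(greedy_entropy_selection(joint, 2))
```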
Submodularity
Diminishing return property: for a set function f, if S ⊆ T and i ∉ T, then f(S ∪ {i}) - f(S) ≥ f(T ∪ {i}) - f(T). For entropy, the marginal gain H(Y_i | Y_S) can only shrink as the conditioning set grows.
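Continuing the sketch above (same `joint` array and `subset_entropy` helper), a quick numeric check of the diminishing-return property for the entropy set function f(S) = H(Y_S):

```python
# Diminishing returns: for S ⊆ T and i ∉ T,
#   f(S ∪ {i}) - f(S) >= f(T ∪ {i}) - f(T),
# i.e. the marginal gain H(Y_i | Y_S) can only shrink as S grows.
S, T, i = [], [0, 2], 1                  # S ⊆ T, i ∉ T
gain_S = subset_entropy(joint, S + [i]) - subset_entropy(joint, S)
gain_T = subset_entropy(joint, T + [i]) - subset_entropy(joint, T)
assert gain_S >= gain_T - 1e-12          # holds for any joint distribution
print(gain_S, gain_T)
```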
Adaptive Policy
What about adaptive policies, where each query may depend on the answers to earlier queries?
[Figure: a non-adaptive policy picks a fixed set of k queries; an adaptive policy is a depth-k tree whose branches depend on the observed labels]
Let ρ be a path down the policy tree (a sequence of queried nodes and observed labels), and let the policy entropy be the entropy H(ρ) of the distribution over paths.
Then we can show H(ρ) + E_ρ[H(Y_G | ρ)] = H(Y_G), where Y_G is the graph labeling.
This corresponds to the chain rule in the non-adaptive case: since H(Y_G) is constant, maximizing the policy entropy minimizes the expected conditional entropy of the labeling.
Recap: the greedy algorithm is near-optimal in the non-adaptive case.
For the adaptive case, consider the greedy algorithm that selects the variable with the largest entropy conditioned on the observations so far.
Unfortunately, for the adaptive case we can show that, for every α > 0, there is a probabilistic model for which the greedy policy achieves less than an α fraction of the optimal policy entropy: no constant-factor approximation is possible.
Tsallis Entropy and Gibbs Error
In statistical mechanics, Tsallis entropy is a generalization of Shannon entropy: S_q(p) = (1 - Σ_i p_i^q) / (q - 1).
Shannon entropy is the special case q → 1.
We call the case q = 2 the Gibbs error: 1 - Σ_i p_i².
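A small self-contained snippet illustrating these definitions: Tsallis entropy approaches the Shannon entropy (in nats) as q → 1, and at q = 2 it equals the Gibbs error 1 - Σ_i p_i²:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1)."""
    p = np.asarray(p, dtype=float)
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def shannon_entropy(p):
    """Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.001), shannon_entropy(p))              # nearly equal (q -> 1)
print(tsallis_entropy(p, 2.0), 1.0 - np.sum(np.asarray(p) ** 2))  # identical (Gibbs error)
```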
Properties of Gibbs Error
Gibbs error is the expected error of the Gibbs classifier.
Gibbs classifier: draw a labeling from the distribution and use that labeling as the prediction.
The Gibbs error is at most twice the Bayes (best possible) error.
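A minimal sketch of these two properties for a single categorical distribution (function names are illustrative):

```python
import numpy as np

def gibbs_error(p):
    """Expected error of the Gibbs classifier, which predicts a label
    sampled from p: E_{y ~ p}[1 - p(y)] = 1 - sum_i p_i^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - float(np.sum(p ** 2))

def bayes_error(p):
    """Error of the Bayes-optimal classifier, which predicts argmax_y p(y)."""
    return 1.0 - float(np.max(np.asarray(p)))

p = [0.6, 0.3, 0.1]
print(gibbs_error(p), bayes_error(p))
assert gibbs_error(p) <= 2.0 * bayes_error(p)   # at most twice the Bayes error
```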
Lower bound to entropy: the Gibbs error is a lower bound on the Shannon entropy (in nats, since -ln p ≥ 1 - p).
Maximizing the policy Gibbs error therefore maximizes a lower bound on the policy entropy.
Policy Gibbs error: the adaptive analogue of the Gibbs error — the expectation, over paths ρ of the policy, of the probability that a labeling drawn from the model disagrees with the labels observed along ρ.
Maximizing the policy Gibbs error minimizes the expected weighted posterior Gibbs error: each query makes progress on either the version space or the posterior Gibbs error.
Gibbs Error and Adaptive Policies
Greedy algorithm: select the node i with the largest Gibbs error conditioned on the observations so far (sketched below).
Near-optimality holds for policy Gibbs error, in contrast to policy entropy.
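A hedged sketch of this greedy adaptive step, assuming posterior marginals conditioned on the labels observed so far are supplied by some inference routine for the model; the `marginals` array and function name are illustrative:

```python
import numpy as np

def select_max_gibbs_error(marginals, labeled):
    """Among unlabeled nodes, pick the one whose posterior marginal
    (conditioned on the observations so far) has the largest Gibbs error
    1 - sum_y p(y)^2.

    marginals: array of shape (n_nodes, n_labels), rows summing to 1
    labeled:   set of node indices already queried
    """
    gibbs = 1.0 - np.sum(marginals ** 2, axis=1)
    gibbs[list(labeled)] = -np.inf          # never re-query a labeled node
    return int(np.argmax(gibbs))

# toy usage: 4 nodes, 3 labels, node 1 already labeled
marginals = np.array([[0.90, 0.05, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.34, 0.33, 0.33],
                      [0.50, 0.40, 0.10]])
print(select_max_gibbs_error(marginals, labeled={1}))   # -> 2, the most uncertain node
```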
Proof idea:
– Show that the policy Gibbs error equals the expected version space reduction.
– The version space is the total probability of the remaining labelings of the unlabeled nodes (labelings consistent with the labeled nodes).
– The version space reduction function is adaptive submodular, which gives the required guarantee for policy Gibbs error (using the result of Golovin and Krause).
Adaptive Submodularity
Diminishing return property for adaptive policies:
– Consider the change in version space when x_i is concatenated to a path ρ and label y is received.
– The version space reduction is adaptive submodular because the expected change when x_i is appended to ρ is at least as large as when x_i is appended to any longer path ρ' that extends ρ.
[Figure: appending x_i to a path ρ and to a longer path ρ' in the policy tree]
Worst-Case Version Space
Maximizing the policy Gibbs error maximizes the expected version space reduction.
A related greedy algorithm selects the least confident variable, i.e. the variable with the smallest maximum label probability (as sketched below).
This approximately maximizes the worst-case version space reduction.
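A companion sketch of the least-confident rule, reusing the illustrative `marginals` array from the previous snippet:

```python
import numpy as np

def select_least_confident(marginals, labeled):
    """Pick the unlabeled node with the smallest maximum label probability,
    i.e. the node whose prediction is least confident under the posterior."""
    confidence = np.max(marginals, axis=1)
    confidence[list(labeled)] = np.inf      # exclude already-labeled nodes
    return int(np.argmin(confidence))

print(select_least_confident(marginals, labeled={1}))   # also -> 2 here
```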
Let OPT be the worst-case version space reduction achieved by the best policy with the same budget. The greedy strategy that selects the least confident variable achieves at least a (1 - 1/e) fraction of OPT, because the version space reduction function is pointwise submodular.
Pointwise Submodularity
Let V(S, y) be the version space remaining if y is the true labeling of all nodes and the subset S has been labeled.
The function 1 - V(S, y) is pointwise submodular: it is submodular in S for every fixed labeling y.
Summary So Far

| Greedy Algorithm | Criterion | Optimality | Property |
| --- | --- | --- | --- |
| Select maximum entropy variable | Entropy of selected variables | No constant factor approximation | |
| Select maximum Gibbs error variable | Policy Gibbs error (expected version space reduction) | 1 - 1/e | Adaptive submodular |
| Select least confident variable | Worst-case version space reduction | 1 - 1/e | Pointwise submodular |
| … | … | … | … |
Learning Parameters
Take a Bayesian approach: put a prior over the parameters and integrate them away when computing the probability of a labeling (a sketch follows).
This also works in the commonly encountered pool-based active learning scenario (independent instances, with no dependencies other than through the parameters).
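A minimal sketch of integrating away the parameters, assuming the prior (or posterior) over parameters is represented by samples θ_1, ..., θ_M and that per-sample label probabilities p(y | x, θ_m) are available; all names here are illustrative:

```python
import numpy as np

def predictive_label_probs(per_sample_probs, weights=None):
    """Approximate p(y | x) = ∫ p(y | x, θ) p(θ) dθ by averaging over
    parameter samples θ_1..θ_M.

    per_sample_probs: array of shape (M, n_labels), row m is p(y | x, θ_m)
    weights:          optional weights over the samples (default: uniform)
    """
    per_sample_probs = np.asarray(per_sample_probs, dtype=float)
    if weights is None:
        M = per_sample_probs.shape[0]
        weights = np.full(M, 1.0 / M)
    return weights @ per_sample_probs

# toy usage: three parameter samples, binary label
probs = [[0.9, 0.1], [0.7, 0.3], [0.6, 0.4]]
print(predictive_label_probs(probs))   # marginal predictive distribution over labels
```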
Experiments
Named entity recognition with a Bayesian CRF on the CoNLL 2003 dataset.
The greedy algorithms perform similarly to one another and better than passive learning (random selection).
Weakness of Gibbs Error
A labeling is considered incorrect if even one component disagrees.
Generalized Gibbs Error
Generalize the Gibbs error to use a loss function L (a sketch follows).
Examples: Hamming loss, 1 - F-score, etc.
It reduces to the ordinary Gibbs error when L(y, y') = 1 - δ(y, y'), where δ(y, y') = 1 when y = y' and δ(y, y') = 0 otherwise.
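A small sketch, under the assumption that the generalized Gibbs error is the expected loss between two independent labelings drawn from the distribution (which does reduce to 1 - Σ_y p(y)² for the 0-1 loss above); the explicit enumeration of labelings and all names are illustrative:

```python
import numpy as np
from itertools import product

def hamming_loss(y, y_prime):
    """Fraction of components on which two labelings disagree."""
    return float(np.mean(np.asarray(y) != np.asarray(y_prime)))

def generalized_gibbs_error(labelings, probs, loss):
    """E_{y, y' ~ p}[L(y, y')] for a distribution over full labelings."""
    return sum(p1 * p2 * loss(y1, y2)
               for (y1, p1), (y2, p2) in product(zip(labelings, probs),
                                                 repeat=2))

# toy example over 3 binary nodes: 0-1 loss vs Hamming loss
labelings = [(0, 0, 0), (0, 1, 0), (1, 1, 1)]
probs = [0.5, 0.3, 0.2]
zero_one = lambda y, yp: float(tuple(y) != tuple(yp))   # L = 1 - delta
print(generalized_gibbs_error(labelings, probs, zero_one))      # 0.62 = 1 - sum p^2
print(generalized_gibbs_error(labelings, probs, hamming_loss))  # 0.38, a softer penalty
```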
Generalized Gibbs Error
Generalized policy Gibbs error (the quantity to maximize): the policy Gibbs error with the loss L in place of the 0-1 loss, expressed in terms of the generalized Gibbs error and the remaining weighted generalized Gibbs error over labelings that agree with y on the path ρ.
The generalized policy Gibbs error is the average of a quantity that we call the generalized version space reduction function.
Unfortunately, this function is not adaptive submodular for an arbitrary loss L.
However, the generalized version space reduction function is pointwise submodular, so the greedy algorithm has a good approximation guarantee in the worst case.
Hedging against the worst-case labeling may be too conservative.
We can hedge against the total generalized version space among the surviving labelings instead.
We call this the total generalized version space reduction function.
It is pointwise submodular, so the greedy algorithm again has a good approximation guarantee in the worst case.
Summary

| Greedy Algorithm | Criterion | Optimality | Property |
| --- | --- | --- | --- |
| Select maximum entropy variable | Entropy of selected variables | No constant factor approximation | |
| Select maximum Gibbs error variable | Policy Gibbs error (expected version space reduction) | 1 - 1/e | Adaptive submodular |
| Select least confident variable | Worst-case version space reduction | 1 - 1/e | Pointwise submodular |
| Select variable that maximizes worst-case generalized version space reduction | Worst-case generalized version space reduction | 1 - 1/e | Pointwise submodular |
| Select variable that maximizes worst-case total generalized version space reduction | Worst-case total generalized version space reduction | 1 - 1/e | Pointwise submodular |
Experiments
Text classification on the 20 Newsgroups dataset: classify 7 pairs of newsgroups, reporting the area under the curve (AUC) for classification error.
Comparison: maximum Gibbs error vs. total generalized version space reduction with Hamming loss.
Acknowledgements
Joint work with:
– Nguyen Viet Cuong (NUS)
– Ye Nan (NUS)
– Adam Chai (DSO)
– Chieu Hai Leong (DSO)