
1 Active Learning. Maria-Florina Balcan, Lecture 26.

2 Active Learning
(Diagram: the Data Source sends unlabeled examples to the Learning Algorithm; the Learning Algorithm sends a "request for the label of an example" to the Expert / Oracle and receives "a label for that example"; the algorithm outputs a classifier.)
The learner can choose specific examples to be labeled. It works harder, in order to use fewer labeled examples.
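A minimal sketch of this interaction loop in Python (the function names and the pool-based setting are illustrative assumptions, not fixed by the slide): the learner sees unlabeled examples, repeatedly requests labels for examples of its choosing, and finally outputs a classifier.

def active_learning_loop(unlabeled_pool, oracle, pick_query, fit, label_budget):
    """Pool-based active learning: the learner chooses which examples to send
    to the expert/oracle, paying for each label, then outputs a classifier."""
    labeled = []
    pool = list(unlabeled_pool)
    for _ in range(label_budget):
        if not pool:
            break
        x = pick_query(pool, labeled)    # learner picks an informative example
        pool.remove(x)
        y = oracle(x)                    # "request for the label of an example"
        labeled.append((x, y))           # "a label for that example"
    return fit(labeled)                  # algorithm outputs a classifier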

3 What Makes a Good Algorithm?
Chooses the label requests carefully, to get informative labels.
Guaranteed to output a relatively good classifier for most learning problems.
Doesn't make too many label requests.

4 Can It Really Do Better Than Passive? YES! (sometimes) We often need far fewer labels for active learning than for passive. This is predicted by theory and has been observed in practice.

5 When Does It Work? And Why? The algorithms currently used in practice are not well understood theoretically: we don't know if/when they output a good classifier, nor can we say how many labels they will need. So we seek algorithms that we can understand and for which we can state formal guarantees. The rest of this talk surveys recent theoretical results.

6 Active Learning. The learner has the ability to choose specific examples to be labeled: the learner works harder, in order to use fewer labeled examples. We get to see unlabeled data first, and there is a charge for every label. How many labels can we save by querying adaptively?

7 Can adaptive querying help? [CAL92, Dasgupta04]
Threshold functions on the real line: h_w(x) = 1(x ≥ w), C = {h_w : w ∈ R}.
Active algorithm: sample O(1/ε) unlabeled examples; do binary search for the boundary between − and +. Binary search needs just O(log 1/ε) labels.
Passive supervised: Ω(1/ε) labels to find an ε-accurate threshold. Active: only O(log 1/ε) labels. Exponential improvement.
Other interesting results as well.
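A runnable sketch of this strategy (the names are mine; the oracle stands in for the labeling expert): with n ≈ 1/ε sorted unlabeled points, binary search locates the −/+ boundary with about log n label requests.

import random

def learn_threshold(unlabeled, oracle):
    """Learn h_w(x) = 1(x >= w) by binary search over the sorted sample:
    O(log n) label requests instead of the n a passive learner would use."""
    xs = sorted(unlabeled)
    lo, hi = 0, len(xs)              # invariant: xs[:lo] are -, xs[hi:] are +
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(xs[mid]) == 1:     # one label request
            hi = mid
        else:
            lo = mid + 1
    return xs[lo] if lo < len(xs) else float("inf")   # leftmost positive point

# Usage: target threshold w* = 0.3; roughly 1/eps unlabeled draws.
w_star = 0.3
sample = [random.random() for _ in range(1000)]
w_hat = learn_threshold(sample, oracle=lambda x: int(x >= w_star))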

8 Active learning might not help [Dasgupta04]
C = {linear separators in R^1}: active learning reduces sample complexity substantially.
C = {linear separators in R^2}: there are some target hypotheses for which no improvement can be achieved, no matter how benign the input distribution! In this case, learning to accuracy ε requires 1/ε labels.
In general, the number of queries needed depends on C and also on D. (Figure: hypotheses h_0, h_1, h_2, h_3.)

9 Examples where active learning helps
C = {linear separators in R^1}: active learning reduces sample complexity substantially, no matter what the input distribution is.
C = homogeneous linear separators in R^d, D = uniform distribution over the unit sphere: need only d log(1/ε) labels to find a hypothesis with error rate < ε. [Freund et al. '97; Dasgupta, Kalai, Monteleoni, COLT 2005; Balcan-Broder-Zhang, COLT 07]
In general, the number of queries needed depends on C and also on D.

10 Region of uncertainty [CAL92]
Current version space: the part of C consistent with the labels so far.
"Region of uncertainty" = the part of data space about which there is still some uncertainty (i.e., disagreement within the version space).
Example: data lies on a circle in R^2 and hypotheses are homogeneous linear separators. (Figure: current version space; region of uncertainty in data space.)

11 Region of uncertainty [CAL92]
Algorithm: pick a few points at random from the current region of uncertainty and query their labels. (Figure: current version space; region of uncertainty.)
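A sketch of this selective-sampling idea (representing the version space as an explicit finite list of hypotheses is a simplification for illustration): a point is labeled only if the surviving hypotheses disagree on it.

import random

def cal_selective_sampling(hypotheses, stream, oracle):
    """CAL-style selective sampling over an explicit finite version space:
    a point is labeled only if it falls in the region of uncertainty,
    i.e. the hypotheses consistent with past labels disagree on it."""
    version_space = list(hypotheses)
    labels_used = 0
    for x in stream:
        predictions = {h(x) for h in version_space}
        if len(predictions) > 1:          # x is in the region of uncertainty
            y = oracle(x)                 # query its label
            labels_used += 1
            version_space = [h for h in version_space if h(x) == y]
    return version_space, labels_used

# Usage: thresholds on [0, 1] as a small explicit class, target w* = 0.42.
thresholds = [i / 100 for i in range(101)]
hyps = [(lambda w: (lambda x: int(x >= w)))(w) for w in thresholds]
data = [random.random() for _ in range(500)]
surviving, n_labels = cal_selective_sampling(hyps, data, oracle=lambda x: int(x >= 0.42))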

13 Region of uncertainty [CAL92]
After the queried labels are incorporated, the version space shrinks. (Figure: new version space; new region of uncertainty in data space.)

14 Region of uncertainty [CAL92], guarantees
Algorithm: pick a few points at random from the current region of uncertainty and query their labels.
[Balcan, Beygelzimer, Langford, ICML'06] analyze a version of this algorithm which is robust to noise.
C = homogeneous linear separators in R^d, D = uniform distribution over the unit sphere: in the realizable case, d^{3/2} log(1/ε) labels; with low noise, only d^2 log(1/ε) labels needed to find a hypothesis with error rate < ε; passive supervised learning needs d/ε labels.
C = linear separators on the line, low noise: exponential improvement.

15 Margin Based Active-Learning Algorithm [Balcan-Broder-Zhang, COLT 07]
Use O(d) examples to find w_1 of error 1/8.
Iterate k = 2, …, log(1/ε):
  rejection sample m_k samples x from D satisfying |w_{k-1} · x| ≤ γ_k;
  label them;
  find w_k ∈ B(w_{k-1}, 1/2^k) consistent with all these examples.
End iterate.
(Figure: w_k, w_{k+1}, margin γ_k, target w*.)
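A runnable sketch of this iteration under the slides' uniform-distribution setting. The γ_k and m_k schedules are illustrative stand-ins, and the simple averaging update below replaces the exact "find w_k ∈ B(w_{k-1}, 1/2^k) consistent with the batch" step of the analysis.

import numpy as np

def margin_based_active_learning(sample_x, oracle, d, eps, m_k=50):
    """Sketch of the [BBZ'07] iteration: at round k, request labels only for
    points near the current separator (|w . x| <= gamma_k), then update w.
    gamma_k, m_k, and the update rule are illustrative, not the exact
    quantities from the analysis."""
    # Round 1 stand-in for "use O(d) examples to find w_1 of error 1/8":
    # average y * x over a batch of labeled random points.
    w = sum(oracle(x) * x for x in (sample_x() for _ in range(m_k)))
    w = w / np.linalg.norm(w)

    for k in range(2, int(np.ceil(np.log2(1 / eps))) + 1):
        gamma_k = 1.0 / 2 ** k                 # illustrative shrinking margin
        batch = []
        while len(batch) < m_k:                # rejection sampling from D
            x = sample_x()
            if abs(w @ x) <= gamma_k:          # keep only points in the margin band
                batch.append((x, oracle(x)))   # label request
        # Stand-in for "find w_k in B(w_{k-1}, 1/2^k) consistent with the batch".
        w = w + sum(y * x for x, y in batch) / m_k
        w = w / np.linalg.norm(w)
    return w

# Usage: uniform directions in R^5, target separator w*.
d = 5
rng = np.random.default_rng(1)
w_star = np.ones(d) / np.sqrt(d)
def draw():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)
w_hat = margin_based_active_learning(draw, lambda x: 1.0 if w_star @ x >= 0 else -1.0, d, eps=0.05)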

16 Margin Based Active-Learning, Realizable Case
Theorem [BBZ'07]: if P_X is uniform over S^d and the sample sizes m_k and margins γ_k satisfy the conditions shown (as images) on the slide, then after the algorithm's log(1/ε) iterations, w_s has error ≤ ε. (Figure: unit vectors u, v at angle θ(u,v).)

17 Margin Based Active-Learning, Realizable Case
The proof builds on Facts 1, 2, and 3: properties of the angle θ(u,v) between unit vectors u and v, stated as images on the slides.

18 Margin Based Active-Learning [BBZ'07]
(Figure: the region of uncertainty W_k in data space around the current separator.)

19 BBZ'07, Proof Idea
Iterate k = 2, …, log(1/ε):
  rejection sample m_k samples x from D satisfying |w_{k-1} · x| ≤ γ_k;
  ask for their labels and find w_k ∈ B(w_{k-1}, 1/2^k) consistent with all these examples.
End iterate.
Assume w_k has error ≤ β. We are done if there exists a γ_k such that w_{k+1} has error ≤ β/2 while we only need O(d log(1/ε)) labels in round k. (Figure: w_k, w_{k+1}, γ_k, w*.)

21 BBZ'07, Proof Idea
Same iteration as above; again assume w_k has error ≤ β and aim for w_{k+1} with error ≤ β/2 using only O(d log(1/ε)) labels in round k.
Key point: under the uniform distribution assumption, for a suitable choice of γ_k (the formula is an image on the slide), the part of w_{k+1}'s error falling outside the band {x : |w_k · x| ≤ γ_k} is ≤ β/4. (Figure: w_k, w_{k+1}, γ_k, w*.)

22 BBZ'07, Proof Idea
Key point: under the uniform distribution assumption, for a suitable γ_k the error mass outside the band is ≤ β/4. So it is enough to ensure that the error inside the band is small as well (the precise condition is an image on the slide); we can do so using only O(d log(1/ε)) labels in round k. (Figure: w_k, w_{k+1}, γ_k, w*.)
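Spelling out the decomposition these two slides point at, under the same assumptions (the split into two β/4 terms is a reconstruction; the exact constants live in the slide images):

\begin{align*}
\operatorname{err}(w_{k+1})
  &= \Pr_x\left[ w_{k+1}(x) \neq w^*(x),\; |w_k \cdot x| > \gamma_k \right]
   + \Pr_x\left[ w_{k+1}(x) \neq w^*(x),\; |w_k \cdot x| \le \gamma_k \right] \\
  &\le \frac{\beta}{4}
   + \Pr_x\left[ |w_k \cdot x| \le \gamma_k \right]
     \cdot \operatorname{err}_{\mathrm{band}}(w_{k+1})
  \;\le\; \frac{\beta}{2},
\end{align*}

where the first term is the key point (uniform D plus the choice of γ_k), and the second term is driven below β/4 by the O(d log(1/ε)) labeled examples drawn inside the band.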

