
1 Tuning bandit algorithms in stochastic environments. The 18th International Conference on Algorithmic Learning Theory, October 3, 2007, Sendai International Center, Sendai, Japan. Jean-Yves Audibert, CERTIS - Ecole des Ponts; Rémi Munos, INRIA Futurs Lille; Csaba Szepesvári, University of Alberta

2 Contents
- Bandit problems
- UCB and motivation
- Tuning UCB by using variance estimates
- Concentration of the regret
- Finite horizon – finite regret (PAC-UCB)
- Conclusions

3 Exploration vs. Exploitation
- Two treatments
- Unknown success probabilities
- Goal: find the best treatment while losing the smallest number of patients
- Explore or exploit?

4 Playing Bandits PPayoff is 0 or 1 AArm 1: X 11, X 12, X 13, X 14, X 15, X 16, X 17, … AArm 2: X 21, X 22, X 23, X 24, X 25, X 26, X 27, … 0 110 10 111 0

5 Exploration vs. Exploitation: Some Applications
- Simple processes: clinical trials; job shop scheduling (random jobs); what ad to put on a web page
- More complex processes (with memory): optimizing production; controlling an inventory; optimal investment; poker

6 Bandit Problems – "Optimism in the Face of Uncertainty"
- Introduced by Lai and Robbins (1985) (?)
- i.i.d. payoffs X_{1,1}, X_{1,2}, …, X_{1,t}, …; X_{2,1}, X_{2,2}, …, X_{2,t}, …
- Principle: the inflated value of an option is the maximum expected reward that looks "quite" possible given the observations so far; select the option with the best inflated value

7 Some definitions
- Payoff is 0 or 1
- Arm 1: X_{1,1}, X_{1,2}, X_{1,3}, X_{1,4}, X_{1,5}, X_{1,6}, X_{1,7}, …
- Arm 2: X_{2,1}, X_{2,2}, X_{2,3}, X_{2,4}, X_{2,5}, X_{2,6}, X_{2,7}, …
- In the running example: now t = 11, T_1(t-1) = 4, T_2(t-1) = 6, where T_k(t) counts how often arm k has been played up to time t, and I_1 = 1, I_2 = 2, … denote the arms chosen at each step

8 Parametric Bandits [Lai & Robbins]
- X_{i,t} ∼ p_{i,θ_i}(·), θ_i unknown, t = 1, 2, …
- Uncertainty set: "reasonable values of θ given the experience so far", U_{i,t} = { θ : p_{i,θ}(X_{i,1:T_i(t)}) is "large" mod (t, T_i(t)) }
- Inflated values: Z_{i,t} = max{ E_θ : θ ∈ U_{i,t} }
- Rule: I_t = arg max_i Z_{i,t}

9 Bounds
- Upper bound (formula shown on the slide)
- Lower bound: if an algorithm is uniformly good, then… (formula shown on the slide)
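These are presumably the classical results of Lai and Robbins (1985). For reference, their lower bound states that for any uniformly good allocation rule and any suboptimal arm k,

    \liminf_{n \to \infty} \frac{E[T_k(n)]}{\log n} \;\ge\; \frac{1}{KL(p_{\theta_k}, p_{\theta^*})},

where KL is the Kullback-Leibler divergence between the payoff distribution of arm k and that of the optimal arm. Hence the expected regret of any uniformly good algorithm grows at least logarithmically in n, and the matching upper bound is achieved by index policies of the kind on the previous slide.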

10 UCB1 Algorithm (Auer et al., 2002)
- Algorithm UCB1(b):
  1. Try all options once
  2. Use option k with the highest index (formula shown on the slide)
- Regret bound: R_n is the expected loss due to not selecting the best option at time step n; then the bound shown on the slide holds
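The index and the regret bound were formulas on the slide. As a hedged reconstruction, the standard UCB1 index for payoffs in [0, b] is \bar{X}_k + b \sqrt{2 \ln t / T_k(t)}, and the expected regret is of order \sum_{k: \Delta_k > 0} (b^2/\Delta_k) \log n. A minimal Python sketch of this rule (it reuses the pull() interface of the bandit sketch above):

    import math

    def ucb1_index(mean, count, t, b=1.0):
        # UCB1 index for payoffs in [0, b]: empirical mean plus an exploration bonus
        # that grows with log t and shrinks as the arm gets sampled more often.
        return mean + b * math.sqrt(2.0 * math.log(t) / count)

    def ucb1(pull, n_arms, horizon, b=1.0):
        counts = [0] * n_arms                       # T_k(t): number of pulls of arm k
        means = [0.0] * n_arms                      # empirical means of the arms
        for k in range(n_arms):                     # step 1: try all options once
            means[k], counts[k] = pull(k), 1
        for t in range(n_arms + 1, horizon + 1):
            # step 2: use the option with the highest index
            k = max(range(n_arms), key=lambda a: ucb1_index(means[a], counts[a], t, b))
            x = pull(k)
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]  # incremental mean update
        return counts, means

    # Example: counts, means = ucb1(BernoulliBandit([0.5, 0.6]).pull, n_arms=2, horizon=1000)

With b = 1 this is the 0/1-payoff case of the earlier slides; the next slides argue that the b-dependence of this bonus, and hence of the regret, is too pessimistic when the payoff variance is small.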

11 Problem #1: when b² ≫ σ², the regret should scale with σ² and not with b²!

12 UCB1-NORMAL
- Algorithm UCB1-NORMAL:
  1. Try all options once
  2. Use option k with the highest index (formula shown on the slide)
- Regret bound (shown on the slide)
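The index was a formula on the slide. A hedged reconstruction from Auer et al. (2002): UCB1-NORMAL replaces the range b by a sample-variance estimate and uses an index of roughly the form

    \bar{X}_k + \sqrt{16\, \hat{\sigma}_k^2\, \frac{\ln(t-1)}{T_k(t)}},

where \hat{\sigma}_k^2 is the empirical variance of arm k's payoffs; both the constant and the analysis rely on the payoffs being normally distributed, as slide 13 points out.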

13 Problem #1
- The regret of UCB1(b) scales with O(b²)
- The regret of UCB1-NORMAL scales with O(σ²)… but UCB1-NORMAL assumes normally distributed payoffs
- UCB-Tuned(b): good experimental results, but no theoretical guarantees

14 UCB-V
- Algorithm UCB-V(b):
  1. Try all options once
  2. Use option k with the highest index (formula shown on the slide)
- Regret bound (shown on the slide)
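The index was a formula on the slide. As a hedged reconstruction from the paper, UCB-V adds an empirical-Bernstein exploration bonus built from the empirical variance V_k of the arm's payoffs,

    B_{k,s,t} = \bar{X}_{k,s} + \sqrt{\frac{2 V_{k,s} \log t}{s}} + \frac{3 b \log t}{s}, \quad s = T_k(t),

and the expected regret then scales as \sum_{k: \Delta_k > 0} (\sigma_k^2/\Delta_k + b) \log n up to constants, i.e. with the true variances rather than with b². A minimal Python sketch (same pull() interface as before; the parameters zeta and c default to 1 here and correspond to the tuning knobs discussed on slide 16):

    import math

    def ucbv_index(mean, var, count, t, b=1.0, zeta=1.0, c=1.0):
        # Empirical-Bernstein-style index: the bonus uses the empirical variance `var`
        # plus a range-dependent correction; exploration term E = zeta * log t.
        e = zeta * math.log(t)
        return mean + math.sqrt(2.0 * var * e / count) + c * 3.0 * b * e / count

    def ucb_v(pull, n_arms, horizon, b=1.0, zeta=1.0, c=1.0):
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        sumsq = [0.0] * n_arms                      # running sums of squared payoffs
        for k in range(n_arms):                     # step 1: try all options once
            x = pull(k)
            counts[k], sums[k], sumsq[k] = 1, x, x * x
        for t in range(n_arms + 1, horizon + 1):
            def index(k):
                mean = sums[k] / counts[k]
                var = max(sumsq[k] / counts[k] - mean * mean, 0.0)   # empirical variance V_k
                return ucbv_index(mean, var, counts[k], t, b, zeta, c)
            k = max(range(n_arms), key=index)       # step 2: use the option with the highest index
            x = pull(k)
            counts[k] += 1
            sums[k] += x
            sumsq[k] += x * x
        return counts

Compared with UCB1, the bonus shrinks with the observed variance, which is what removes the b² factor from the regret bound.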

15 Proof
- The "missing bound" (hunch.net), shown on the slide
- Bounding the sampling times of suboptimal arms (new bound)
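The "missing bound" is the empirical Bernstein inequality proved in the paper; as a hedged restatement: for i.i.d. payoffs in [0, b] with mean μ, empirical mean \bar{X}_t and empirical variance V_t, for every x > 0, with probability at least 1 − 3e^{−x},

    |\bar{X}_t - \mu| \;\le\; \sqrt{\frac{2 V_t x}{t}} + \frac{3 b x}{t}.

This has exactly the shape of the UCB-V bonus (with x of order log t), and it is the tool used to bound how often suboptimal arms are sampled.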

16 Can we decrease exploration?
- Algorithm UCB-V(b, ζ, c):
  1. Try all options once
  2. Use option k with the highest index (formula shown on the slide)
- Theorem: when ζ < 1, the regret will be polynomial for some bandit problems; when c < 1/6, the regret will be polynomial for some bandit problems
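As a hedged reconstruction, the parametrized index scales the exploration term by ζ and the range-dependent term by c,

    B_{k,s,t} = \bar{X}_{k,s} + \sqrt{\frac{2 V_{k,s}\, \zeta \log t}{s}} + 3 c\, b\, \frac{\zeta \log t}{s},

so ζ controls how quickly the confidence level tightens and c scales the b-dependent correction. The theorem says that pushing either knob below its critical value (ζ below 1, or c below 1/6) makes the regret polynomial in n on some bandit problems, so exploration cannot be reduced much below the UCB-V defaults.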

17 Concentration bounds
- Averages concentrate (formula shown on the slide)
- Does the regret of UCB* concentrate? RISK??
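The concentration statement for averages was a formula on the slide; presumably it is a standard exponential tail bound such as Bernstein's inequality: for i.i.d. payoffs in [0, b] with mean μ and variance σ²,

    P(|\bar{X}_t - \mu| > \epsilon) \;\le\; 2 \exp\left(-\frac{t \epsilon^2}{2\sigma^2 + 2 b \epsilon / 3}\right),

so empirical means have Gaussian-like tails. The question raised here is whether the regret of UCB-type algorithms concentrates equally well; the next slide shows that it does not.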

18 Logarithmic regret implies high risk
- Theorem: consider the pseudo-regret R_n = Σ_{k=1}^{K} T_k(n) Δ_k. Then for any α > 1 and z > α log(n), P(R_n > z) ≤ C z^{−α} (a Gaussian tail would be P(R_n > z) ≤ C exp(−z²))
- Illustration: two arms, Δ_2 = μ_2 − μ_1 > 0. The law of R_n has modes at O(log n) and O(Δ_2 n)!
- This only happens when the support of the second-best arm's distribution overlaps with that of the optimal arm

19 Finite horizon: PAC-UCB
- Algorithm PAC-UCB(N):
  1. Try all options once
  2. Use option k with the highest index (formula shown on the slide)
- Theorem: at time N, with probability 1 − 1/N, the number of suboptimal plays is bounded by O(log(K N))
- Good when N is known beforehand
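The PAC-UCB index was a formula on the slide. As a hedged reconstruction, the essential change from UCB-V is that the exploration term no longer grows with the current time t; it is tied instead to the known horizon N and the number of arms K, of order log(K N), for example

    B_{k,s} = \bar{X}_{k,s} + \sqrt{\frac{2 V_{k,s}\, \mathcal{E}}{s}} + \frac{3 b\, \mathcal{E}}{s}, \quad \mathcal{E} \approx \log(K N)

(in the paper the exploration function also depends mildly on s). With high probability every suboptimal arm is then abandoned after O(log(K N)) pulls and never revisited, which is what yields finite regret and the exponential concentration mentioned in the conclusions.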

20 Conclusions
- Taking the variance into account lessens the dependence on the a priori bound b
- Low expected regret => high risk
- PAC-UCB: finite regret, known horizon, exponential concentration of the regret
- Optimal balance? Other algorithms?
- Greater generality: look up the paper!

21 Thank you! Questions?

22 References
- Optimism in the face of uncertainty: Lai, T. L. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22.
- UCB1 and more: Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256.
- Audibert, J.-Y., Munos, R., and Szepesvári, Cs. (2007). Tuning bandit algorithms in stochastic environments. ALT 2007.

