Multi-armed Bandit Problems with Dependent Arms


Multi-armed Bandit Problems with Dependent Arms
Sandeep Pandey (spandey@cs.cmu.edu), Deepayan Chakrabarti (deepay@yahoo-inc.com), Deepak Agarwal (dagarwal@yahoo-inc.com)

Background: Bandits
A bandit has several "arms" with unknown reward probabilities μ1, μ2, μ3, ...
Pull arms sequentially so as to maximize the total expected reward.
Examples: show ads on a webpage to maximize clicks; recommend products to maximize sales.
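To make the setting concrete, here is a minimal sketch of the standard independent-arm bandit loop with an ε-greedy policy. The slides do not commit to a base policy; the policy choice, function names, arm probabilities, ε, and horizon below are all illustrative assumptions.

```python
import random

# A minimal epsilon-greedy sketch of the standard, independent-arm bandit loop.
# The arm probabilities, epsilon, and horizon are illustrative assumptions.
def epsilon_greedy(mus, horizon=10_000, eps=0.1, seed=0):
    rng = random.Random(seed)
    pulls = [0] * len(mus)      # n_i: number of pulls of arm i
    successes = [0] * len(mus)  # s_i: observed successes of arm i
    total_reward = 0
    for _ in range(horizon):
        if rng.random() < eps:  # explore a random arm
            i = rng.randrange(len(mus))
        else:                   # exploit the empirically best arm
            i = max(range(len(mus)),
                    key=lambda j: successes[j] / pulls[j] if pulls[j] else 1.0)
        reward = 1 if rng.random() < mus[i] else 0  # one Bernoulli pull
        pulls[i] += 1
        successes[i] += reward
        total_reward += reward
    return total_reward

print(epsilon_greedy([0.3, 0.28, 1e-6]))  # mu values echo the ads example below
```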

Dependent Arms
Reward probabilities μi are generally assumed to be independent of each other. What if they are dependent?
E.g., ads on similar topics, using similar text/phrases, should have similar rewards:
"Skiing, snowboarding" (μ1 = 0.3), "Skiing, snowshoes" (μ2 = 0.28), "Snowshoe rental" (μ3 = 0.31), "Get Vonage!" (μ4 = 10⁻⁶)

Dependent Arms
Reward probabilities μi are generally assumed to be independent of each other. What if they are dependent?
E.g., ads on similar topics, using similar text/phrases, should have similar rewards.
A click on one ad ⇒ other "similar" ads may generate clicks as well.
Can we increase the total reward by exploiting this dependency?

Cluster Model of Dependence
Arms are partitioned into clusters (e.g., Cluster 1 = {Arm 1, Arm 2}, Cluster 2 = {Arm 3, Arm 4}).
Successes: si ~ Bin(ni, μi), where ni is the number of pulls of arm i.
Reward probabilities: μi ~ f(π[i]), where f is some known distribution and π[i] is the unknown cluster-specific parameter of arm i's cluster.
There is no dependence across clusters.
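For concreteness, here is a minimal sketch of this generative model. The slides only say that f is "some known distribution"; taking f to be a Beta distribution with cluster-specific mean π and a fixed concentration is an assumption made here.

```python
import random

# Sketch of the cluster model's generative story, assuming f = Beta with a
# cluster-specific mean pi and fixed concentration (an assumption; the slides
# only say f is some known distribution).
def sample_cluster_arms(pi, n_arms, concentration=20.0, seed=0):
    rng = random.Random(seed)
    # mu_i ~ f(pi): arms in the same cluster share the parameter pi
    return [rng.betavariate(pi * concentration, (1 - pi) * concentration)
            for _ in range(n_arms)]

def pull(mu, rng):
    # s_i ~ Bin(n_i, mu_i), realized one Bernoulli trial at a time
    return 1 if rng.random() < mu else 0

rng = random.Random(1)
cluster1 = sample_cluster_arms(pi=0.30, n_arms=2, seed=1)  # arms 1 and 2
cluster2 = sample_cluster_arms(pi=0.05, n_arms=2, seed=2)  # arms 3 and 4
print(cluster1, cluster2, pull(cluster1[0], rng))
```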

Cluster Model of Dependence
Arm 1, Arm 2: μi ~ f(π1); Arm 3, Arm 4: μi ~ f(π2).
Total reward:
- Discounted: ∑_{t=0}^{∞} α^t · E[R(t)], where α is the discounting factor
- Undiscounted: ∑_{t=0}^{T} E[R(t)]

Discounted Reward
The optimal policy can be computed using per-cluster MDPs only.
(Figure: a separate belief-state MDP per cluster; pulling Arm 1 updates cluster 1's state, pulling Arm 3 updates cluster 2's state.)
Optimal Policy:
- Compute an ("index", arm) pair for each cluster.
- Pick the cluster with the largest index, and pull the corresponding arm.

Discounted Reward
The optimal policy can be computed using per-cluster MDPs only. This:
- Reduces the problem to smaller state spaces
- Reduces to Gittins' Theorem [1979] for independent bandits
- Gives approximation bounds on the index for k-step lookahead (a minimal lookahead sketch follows below)
Optimal Policy:
- Compute an ("index", arm) pair for each cluster.
- Pick the cluster with the largest index, and pull the corresponding arm.
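The slides do not give the index computation itself. Below is a hedged sketch of a k-step lookahead ("index", arm) pair for one cluster's belief-state MDP, assuming independent Beta(1,1) beliefs per arm inside the cluster. That independence is a simplification: in the paper's model, arms within a cluster are coupled through the cluster parameter π, so this is not the paper's exact index.

```python
from functools import lru_cache

ALPHA = 0.9  # discount factor (illustrative)

# Hedged sketch: k-step lookahead value and best arm for one cluster's
# belief-state MDP, with independent Beta(1,1) beliefs per arm (a
# simplification of the paper's coupled cluster model).
@lru_cache(maxsize=None)
def lookahead(state, k):
    """state: tuple of (successes, failures) per arm; returns (value, arm)."""
    if k == 0:
        return 0.0, None
    best_val, best_arm = float("-inf"), None
    for i, (s, f) in enumerate(state):
        p = (s + 1) / (s + f + 2)  # posterior mean of mu_i under Beta(1,1)
        win, lose = list(state), list(state)
        win[i], lose[i] = (s + 1, f), (s, f + 1)
        val = (p * (1 + ALPHA * lookahead(tuple(win), k - 1)[0])
               + (1 - p) * ALPHA * lookahead(tuple(lose), k - 1)[0])
        if val > best_val:
            best_val, best_arm = val, i
    return best_val, best_arm

# ("index", arm) pair for a fresh two-arm cluster with a 3-step lookahead
print(lookahead(((0, 0), (0, 0)), 3))
```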

Cluster Model of Dependence (recap)
Arm 1, Arm 2: μi ~ f(π1); Arm 3, Arm 4: μi ~ f(π2).
Total reward:
- Discounted: ∑_{t=0}^{∞} α^t · E[R(t)], where α is the discounting factor
- Undiscounted: ∑_{t=0}^{T} E[R(t)], considered next

Undiscounted Reward
All arms in a cluster are similar ⇒ they can be grouped into one hypothetical "cluster arm" ("Cluster arm" 1 = {Arm 1, Arm 2}, "Cluster arm" 2 = {Arm 3, Arm 4}).

Undiscounted Reward
Two-Level Policy. In each iteration:
1. Pick a "cluster arm" using a traditional bandit policy.
2. Pick an arm within that cluster using a traditional bandit policy.
For this, each "cluster arm" must have some estimated reward probability (a sketch follows below).
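A minimal sketch of the Two-Level Policy follows, using UCB1 as the "traditional bandit policy" at both levels and the MEAN estimate (defined on the next slides) as the cluster arm's reward estimate. The choice of UCB1 and all numeric values are illustrative assumptions.

```python
import math, random

# Two-Level Policy sketch: UCB1 over "cluster arms", then UCB1 over the real
# arms inside the chosen cluster. Cluster-arm estimates use the MEAN scheme
# (pooled success rate). Policy choice and values are illustrative.
def ucb1_pick(successes, pulls, t):
    for i, n in enumerate(pulls):
        if n == 0:
            return i  # pull every arm once first
    return max(range(len(pulls)),
               key=lambda i: successes[i] / pulls[i]
                             + math.sqrt(2 * math.log(t) / pulls[i]))

def two_level(clusters, horizon=5000, seed=0):
    # clusters: list of lists of true reward probabilities
    rng = random.Random(seed)
    c_s = [0] * len(clusters); c_n = [0] * len(clusters)  # cluster-arm stats
    a_s = [[0] * len(c) for c in clusters]                # per-arm successes
    a_n = [[0] * len(c) for c in clusters]                # per-arm pulls
    total = 0
    for t in range(1, horizon + 1):
        c = ucb1_pick(c_s, c_n, t)        # level 1: pick a cluster arm
        a = ucb1_pick(a_s[c], a_n[c], t)  # level 2: pick an arm inside it
        r = 1 if rng.random() < clusters[c][a] else 0
        c_s[c] += r; c_n[c] += 1          # MEAN: pooled cluster statistics
        a_s[c][a] += r; a_n[c][a] += 1
        total += r
    return total

print(two_level([[0.30, 0.28], [0.31, 1e-6]]))
```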

Issues
1. What is the reward probability of a "cluster arm"?
2. How do cluster characteristics affect performance?

Reward probability of a "cluster arm"
What is the reward probability r of a "cluster arm"?
MEAN: r = ∑si / ∑ni, i.e., the average success rate summed over all arms in the cluster [Kocsis+/2006, Pandey+/2007].
Initially, r ≈ μavg, the average μ of the arms in the cluster.
Finally, r ≈ μmax, the maximum μ among the arms in the cluster.
So there is a "drift" in the reward probability of the "cluster arm".

Reward probability drift causes problems
(Figure: Cluster 1 = {Arm 1, Arm 2}; Cluster 2 (opt cluster) = {Arm 3, Arm 4}, containing the best (optimal) arm with reward probability μopt.)
Drift ⇒ non-optimal clusters might temporarily look better ⇒ the optimal arm is explored only O(log T) times.
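A small simulation can make the drift visible. The sketch below runs UCB1 inside a single cluster and prints the MEAN estimate of the "cluster arm" over time, which moves from roughly the average μ toward the best μ. The arm probabilities and horizon are illustrative.

```python
import math, random

# Drift simulation: UCB1 inside one cluster; the MEAN estimate of the
# "cluster arm" drifts from ~avg(mu) toward max(mu). Values are illustrative.
def ucb1_pick(s, n, t):
    for i, ni in enumerate(n):
        if ni == 0:
            return i  # pull each arm once first
    return max(range(len(n)),
               key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))

mus = [0.30, 0.10, 0.10]  # one good arm, two weak arms in the cluster
rng = random.Random(0)
s, n = [0] * 3, [0] * 3
for t in range(1, 20001):
    i = ucb1_pick(s, n, t)
    s[i] += 1 if rng.random() < mus[i] else 0
    n[i] += 1
    if t in (10, 100, 1000, 20000):
        # MEAN estimate: starts near avg(mu) = 0.17, approaches max(mu) = 0.30
        print(t, sum(s) / sum(n))
```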

Reward probability of a "cluster arm"
What is the reward probability r of a "cluster arm"?
MEAN: r = ∑si / ∑ni
MAX: r = max( E[μi] ) over all arms i in the cluster
PMAX: r = E[ max(μi) ] over all arms i in the cluster
Both MAX and PMAX aim to estimate μmax and thus reduce drift.

Reward probability of a "cluster arm"
Bias and variance in estimating μmax:
- MAX: r = max( E[μi] ): high bias, low variance
- PMAX: r = E[ max(μi) ]: unbiased, high variance
Both MAX and PMAX aim to estimate μmax and thus reduce drift.
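Here is a sketch of the three estimators. It assumes per-arm Beta(1,1) posteriors; the slides do not fix a posterior, so the exact expressions for MAX and PMAX below are one concrete instantiation, with PMAX approximated by Monte-Carlo sampling.

```python
import random

# Sketch of the three "cluster arm" estimates under assumed Beta(1,1)
# per-arm posteriors; s and n are per-arm successes and pulls.
def mean_estimate(s, n):
    return sum(s) / max(sum(n), 1)  # MEAN: pooled success rate

def max_estimate(s, n):
    # MAX: maximum posterior mean across arms in the cluster
    return max((si + 1) / (ni + 2) for si, ni in zip(s, n))

def pmax_estimate(s, n, samples=10_000, seed=0):
    # PMAX: Monte-Carlo estimate of E[max_i mu_i] under the posteriors
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += max(rng.betavariate(si + 1, ni - si + 1)
                     for si, ni in zip(s, n))
    return total / samples

s, n = [3, 25], [10, 100]  # illustrative per-arm successes and pulls
print(mean_estimate(s, n), max_estimate(s, n), pmax_estimate(s, n))
```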

Comparison of schemes
(Figure: reward comparison; 10 clusters, 11.3 arms/cluster on average.)
MAX performs best.

Issues
1. What is the reward probability of a "cluster arm"?
2. How do cluster characteristics affect performance?

Effects of cluster characteristics
We analytically study the effects of cluster characteristics on the "crossover time".
Crossover time: the time when the expected reward probability of the optimal cluster becomes the highest among all "cluster arms".

Effects of cluster characteristics
The crossover time Tc for MEAN depends on (toy computations of these quantities follow below):
- Cluster separation Δ = μopt − max μ outside the opt cluster: Δ increases ⇒ Tc decreases
- Cluster size Aopt: Aopt increases ⇒ Tc increases
- Cohesiveness of the opt cluster, 1 − avg(μopt − μi): cohesiveness increases ⇒ Tc decreases
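To pin down the definitions, the toy computation below evaluates the three characteristics for a hypothetical cluster configuration, following the formulas on this slide; all μ values are illustrative.

```python
# Toy computation of the three cluster characteristics from the slide's
# definitions; the mu values are illustrative, not from the experiments.
clusters = {"opt": [0.30, 0.28, 0.25], "other": [0.22, 0.10]}

mu_opt = max(clusters["opt"])
mu_max_outside = max(mu for name, arms in clusters.items()
                     if name != "opt" for mu in arms)

delta = mu_opt - mu_max_outside                    # cluster separation
a_opt = len(clusters["opt"])                       # size of the opt cluster
cohesiveness = 1 - sum(mu_opt - mu for mu in clusters["opt"]) / a_opt

print(delta, a_opt, cohesiveness)  # 0.08, 3, ~0.977
```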

Experiments (effect of separation)
(Figure: circles = MEAN, triangles = MAX, squares = independent.)
Δ increases ⇒ Tc decreases ⇒ higher reward.

Experiments (effect of size)
Aopt increases ⇒ Tc increases ⇒ lower reward.

Experiments (effect of cohesiveness)
Cohesiveness increases ⇒ Tc decreases ⇒ higher reward.

Related Work
- Typical multi-armed bandit problems: do not consider dependencies; very few arms.
- Bandits with side information: cannot handle dependencies among arms.
- Active learning: emphasis on the number of examples required to achieve a given prediction accuracy.

Conclusions
- We analyze bandits where dependencies are encapsulated within clusters.
- Discounted reward ⇒ the optimal policy is an index scheme on the clusters.
- Undiscounted reward ⇒ a Two-Level Policy with MEAN, MAX, and PMAX.
- Analysis of the effect of cluster characteristics on performance, for MEAN.

Discounted Reward (details)
Create a belief-state MDP: each state contains the estimated reward probabilities for all arms.
Pulling Arm 1 yields a success or a failure, and either outcome changes the belief for both arms 1 and 2 (they share a cluster).
Solve the MDP for the optimal policy.
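For contrast with the per-cluster decomposition sketched earlier, here is a hedged sketch of the full belief-state MDP over all arms at once, again with independent Beta(1,1) beliefs per arm as a simplifying assumption (so a pull updates only the pulled arm's belief, unlike the paper's coupled model). Its joint state space grows exponentially in the number of arms, which is exactly why computing per-cluster MDPs matters.

```python
from functools import lru_cache

ALPHA = 0.9  # discount factor (illustrative)

# Full belief-state MDP over ALL arms, assuming independent Beta(1,1)
# beliefs per arm. Same shape as the per-cluster lookahead, but the joint
# state space blows up exponentially with the number of arms.
@lru_cache(maxsize=None)
def value(state, k):
    """state: tuple of (successes, failures) for every arm; k: horizon."""
    if k == 0:
        return 0.0
    best = float("-inf")
    for i, (s, f) in enumerate(state):
        p = (s + 1) / (s + f + 2)  # current belief that arm i pays off
        win, lose = list(state), list(state)
        win[i], lose[i] = (s + 1, f), (s, f + 1)
        best = max(best, p * (1 + ALPHA * value(tuple(win), k - 1))
                         + (1 - p) * ALPHA * value(tuple(lose), k - 1))
    return best

print(value(((0, 0),) * 4, 3))  # 4 arms, 3 steps of lookahead
```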

Background: Bandits
A bandit has several "arms" with unknown payoff probabilities p1, p2, p3, ...
Regret = optimal payoff − actual payoff.

Reward probability of a "cluster arm"
What is the reward probability of a "cluster arm"?
Eventually, every "cluster arm"'s estimate must converge to μmax, the reward probability of the most rewarding arm within that cluster, since a bandit policy is used within each cluster.
However, "drift" causes problems along the way.

Experiments
Simulation based on one week's worth of data from a large-scale ad-matching application.
10 clusters, with 11.3 arms/cluster on average.

Comparison of schemes
(Figure: 10 clusters, 11.3 arms/cluster; cluster separation Δ = 0.08; cluster size Aopt = 31; cohesiveness = 0.75.)
MAX performs best.

Reward probability drift causes problems
(Figure: Cluster 1 = {Arm 1, Arm 2}; Cluster 2 (opt cluster) = {Arm 3, Arm 4}, containing the best (optimal) arm with reward probability μopt.)
Intuitively, to reduce regret, we must quickly converge to the optimal "cluster arm", and then to the best arm within that cluster.