The Byzantine Secretary Problem


The Byzantine Secretary Problem
30th Nov, 2018
Sahil Singla, Princeton University
Joint work with Anupam Gupta, Domagoj Bradac, and Goran Zuzic

Example: Diamond-Selling (Optimal Stopping Time)
Sell one diamond: there are multiple potential buyers.
Buyers arrive and make a take-it-or-leave-it bid; decide immediately and irrevocably when to accept a bid. Cannot go back to a declined bid.
Goal: maximize the value of the accepted bid.
Similar examples: selling ad slots, finding a secretary/marriage partner.

The Secretary Problem
Problem: n unknown adversarial values v_1 > v_2 > … > v_n; i.i.d. arrival times t_e ~ Unif[0,1]; decide immediately and irrevocably; maximize the probability of selecting v_1.
Dynkin's Algorithm: ignore elements e with t_e < 1/2; afterwards, select the first element larger than all previous elements.
Pr[selecting v_1] ≥ Pr[t_1 > 1/2] · Pr[t_2 < 1/2] = 1/4.
The tight approximation factor is 1/e.
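A minimal simulation sketch of Dynkin's cutoff rule (an illustration, not the slides' pseudocode; the function name and the Unif[0,1] simulation are assumptions):

```python
# Ignore everything arriving before the cutoff, then select the first element
# that beats every element seen so far.
import random

def dynkin(values, cutoff=0.5):
    """values: the adversarial values; arrival times are i.i.d. Unif[0,1]."""
    arrivals = sorted((random.random(), v) for v in values)   # (time, value) in arrival order
    best_seen = float("-inf")
    for t, v in arrivals:
        if t < cutoff:
            best_seen = max(best_seen, v)        # observation phase: never select
        elif v > best_seen:
            return v                             # first element beating the prefix maximum
    return None                                  # reached t = 1 without selecting

# Empirical check: with cutoff 1/2 the success probability is at least 1/4
# (the bound on the slide); the optimal cutoff 1/e achieves roughly 1/e.
vals = list(range(100))
hits = sum(dynkin(vals) == max(vals) for _ in range(20000))
print(hits / 20000)
```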

Why Not Adversarial Arrival?
We don't know the "scale" of the problem: on a sequence of increasing elements followed by all zeroes, any algorithm essentially selects at random, giving only 1/n success probability.
Why random order? It is mathematically necessary, makes sense intuitively, and is a weaker assumption than i.i.d. values from an unknown distribution.
Related work: combinatorial problems, e.g., sum of top k, matchings, matroids and their intersections, facility location, max independent set on interval graphs; packing linear programs (large-budget assumption).
What if only some of the arrivals are adversarial? (outlier arrivals)

A Byzantine World
Byzantine Generals Problem: Lamport, Shostak, Pease.
Some arrivals are adversarial: red elements R vs. green elements G.
All n = |R| + |G| values are adversarial; red elements R arrive at adversarial times; green elements G arrive at t_e ~ Unif[0,1]; elements do not reveal their color.
Dynkin's algorithm no longer works: one large red at the beginning fools it. (Remark: it is fine if all reds are small.)
Handling outliers requires robust algorithms. What's even the right benchmark: the largest in R ∪ G, or the largest in G?

Value vs. Probability Maximization
Dynkin's algorithm needs the current time t but does not know n; it only needs relative order (the ordinal setting); it maximizes the probability of selecting v_1.
Can we do better if given values, i.e., maximize the ratio E[Alg] / v_1? No: we don't know the "scale" of the problem. For any algorithm, there exist instances with E[Alg] / v_1 ≤ 1/e + ε.

Outline
- Motivation: Secretary Problem in a Byzantine World
- Benchmark
- Value Maximization
- Probability Maximization
- Other Results and Open Problems

Benchmark: Get the Max
Overall max (largest in R ∪ G)? If all green values are zero, we are back to adversarial arrival on the red elements R, so at best probability 1/|R|.
Max green? Suppose only one green is non-zero and the reds form an increasing sequence; we don't know which element is green, so again only Θ(1/|R|) probability. (Think ordinal.)
What to do next?

Drop the Maximum! Get the 2nd Max Green
Benchmark: v* := v(2nd max green).
Can we get the 2nd max green? Maximize probability ("victory" on selecting a value ≥ v*), or maximize value (the ratio of E[Alg] to v*).
Does comparing to the 2nd max green help? Unclear a priori, but the adversary cannot control the relative order of the max and 2nd max greens.
Why does it make sense? There are applications with a small gap between the max and 2nd max; mathematically this is the "price" we have to pay; and similar benchmarks appear in other applications, e.g., digital-goods auctions.

Main Results
Benchmark: v* := v(2nd max green).
Thm [BGSZ'18]: For value maximization we get an O((log* n)^2) approximation.
Remark: the performance guarantee is independent of |G|, e.g., it holds even for |G| = 2.
Thm [BGSZ'18]: For probability maximization (Pr[selecting ≥ v*]) we get an O((log n)^2) approximation.
Remark: this result is only existential, as it uses the minimax principle.

Outline
- Motivation: Secretary Problem in a Byzantine World
- Benchmark: Drop the Maximum!
- Value Maximization: O((log* n)^2) approx
- Probability Maximization: O((log n)^2) approx
- Other Results and Open Problems

Approach: Done or Refine Scale
Benchmark: v* := v(2nd max green).
Observation: knowing v* ∈ [a, m·a] at t = 0 implies a 2·log m approximation. Define log m levels [a, 2a), [2a, 2^2·a), …, [2^(log m − 1)·a, 2^(log m)·a), guess the level of v* (correct w.p. 1/log m), and select the first element above the guessed level.
Question: what if we only have a good estimate of v* by t = 1/2?
Define checkpoints T_i: partition time into intervals. Idea in any interval: either a simple algorithm works (e.g., Dynkin's, or selecting a random element), or keep refining the estimate of v* and set a threshold in the final interval. (This only works for the value-maximization setting.)
How many checkpoints?
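A small sketch of the level-guessing observation above, assuming v* is known to lie in [a, m·a] at t = 0 (the helper name is hypothetical):

```python
# Guess one of the ~log m doubling levels uniformly and select the first element
# above it. With probability about 1/log m the guessed level satisfies
# threshold <= v* < 2*threshold, so the selected element has value >= v*/2,
# giving the 2*log m approximation from the slide.
import math
import random

def threshold_by_level_guess(stream, a, m):
    """stream: values in arrival order; assume v* lies in [a, m*a]."""
    num_levels = math.floor(math.log2(m)) + 1
    threshold = a * (2 ** random.randrange(num_levels))   # guessed doubling level
    for value in stream:
        if value >= threshold:
            return value            # first element above the guessed threshold
    return None
```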

One Checkpoint: O(log n) Approximation
Benchmark: v* := v(2nd max green). Run one of the following three algorithms uniformly at random.
Case 1 ("small"): every red before t = 1/2 is below v*. Then Dynkin's algorithm works.
Case 2 ("large"): there exists a red before t = 1/2 above n·v*. Then selecting a random element is correct w.p. 1/n.
Otherwise ("medium"): there exists a red before t = 1/2 with value v_0 ∈ [v*, n·v*]. This gives a factor m = n approximation to the scale of v*. Define log n levels, condition on v* arriving at t > 1/2, guess the level of v* (correct w.p. 1/log n), and select the first element above it.
E[Alg] ≥ Pr[guess ≈ v*] · Pr[v* at t > 1/2] · v*/2 ≥ v*/(4·log n). Q.E.D.
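A simplified sketch of the one-checkpoint algorithm (the (time, value) input format and helper names are illustrative assumptions, not the paper's pseudocode):

```python
# Run one of three strategies uniformly at random, one per case
# ("small", "large", "medium" reds before t = 1/2).
import math
import random

def one_checkpoint(arrivals, n):
    """arrivals: list of (time, value) pairs in increasing time order."""
    strategy = random.choice(["dynkin", "random_element", "level_guess"])

    if strategy == "dynkin":                    # Case 1: all reds before 1/2 are below v*
        best = float("-inf")
        for t, v in arrivals:
            if t < 0.5:
                best = max(best, v)
            elif v > best:
                return v
        return None

    if strategy == "random_element":            # Case 2: some red before 1/2 exceeds n*v*
        return random.choice(arrivals)[1]

    # Case 3 ("medium"): the max value v0 before t = 1/2 lies in [v*, n*v*],
    # so v* is in [v0/n, v0]; guess one of ~log n doubling levels below v0.
    prefix = [v for t, v in arrivals if t < 0.5]
    if not prefix:
        return None
    v0 = max(prefix)
    num_levels = math.floor(math.log2(n)) + 1
    threshold = v0 / (2 ** random.randrange(num_levels))
    for t, v in arrivals:
        if t >= 0.5 and v >= threshold:
            return v
    return None
```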

Two Checkpoints: O(log log n) Approximation
The last slide implies that for t ∈ [0, 1/3] there exists a red element with value v_0 ∈ [v*, n·v*] (the "medium" case; the definition of "large" changes).
Case 1 ("small"): for t ∈ [1/3, 2/3], all reds are below v*. Dynkin's algorithm works.
Case 2 ("large"): for t ∈ [1/3, 2/3] there is a red above log n · v*. Random level guessing gets it w.p. 1/log n.
Otherwise ("medium"): there exists a red in t ∈ [1/3, 2/3] with value v_1 ∈ [v*, log n · v*]. This gives a factor m = log n approximation to v*. Define log log n levels: v_1/log n, 2·v_1/log n, …, v_1/4, v_1/2, v_1.
Run one of the Θ(1) algorithms uniformly at random. Q.E.D.

Multiple Checkpoints: O((log* n)^2) Approximation
Use log* n checkpoints T_i.
Algorithm: guess a random interval; run a simple algorithm there after refining the estimate up to that point.
Proof idea:
Case 1 ("small"): there is an interval in which all reds are small. Dynkin's algorithm.
Case 2 ("large"): there is an interval with a "large" red. Guess a level, or select a random element.
Otherwise ("medium"): keep refining until the end to get a better and better scale of v*. Q.E.D.
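A quick illustration of why log* n checkpoints suffice, under the assumption (as in the previous two slides) that each "medium" refinement shrinks the multiplicative uncertainty about v* from m down to about log m:

```python
# Iterating m -> log2(m) reaches a constant after roughly log*(n) steps.
import math

def checkpoints_needed(n):
    m, steps = n, 0
    while m > 2:
        m = math.log2(m)     # one refining interval: scale m becomes ~log m
        steps += 1
    return steps             # roughly log*(n)

for n in (10**3, 10**6, 10**9, 10**100):
    print(n, checkpoints_needed(n))
```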

Outline
- Motivation: Secretary Problem in a Byzantine World
- Benchmark: Drop the Maximum!
- Value Maximization: O((log* n)^2) approx
- Probability Maximization: O((log n)^2) approx
- Other Results and Open Problems

How to Capture Scale
Benchmark: v* := v(2nd max green).
What does "scale" mean for the 2nd max in probability maximization? There is no notion of values.
Define a confusion set S_t: the elements that are candidates to be v* at time t.
Idea: condition on v* arriving in the first interval ⟹ S_t is always a subset of the first interval.
Observation: knowing S_t implies a |S_t|/(1−t) approximation.
Proof: set a random element of S_t as the threshold, and use the event that the largest green arrives after t.

Application: Beating Random Guessing
Benchmark: v* := v(2nd max green).
Lemma: there exists an O(√n) approximation.
Case 1: the number of red elements above v* is Ω(√n). Then a random element beats v* w.p. Ω(1/√n).
Case 2: consider the top √n elements at t = 1/2 as the confusion set S_t, and condition on v* arriving before t = 1/2. Guess a random element of S_t to be v* and set it as the threshold; the guess is correct w.p. 1/√n.
Pr[selecting ≥ v*] ≥ Pr[guess of v* correct] · Pr[max green at t > 1/2] ≥ 1/(2√n). Q.E.D.
Thm: For probability maximization, there exists an O((log n)^2) approximation. How to refine the confusion set?
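A minimal sketch of the O(√n) lemma above (the function name and the 50/50 mix of the two cases are illustrative assumptions):

```python
# Either pick a uniformly random element, or treat the top sqrt(n) values seen
# before t = 1/2 as the confusion set, guess one of them as v*, and use it as a
# threshold for the rest of the stream.
import math
import random

def sqrt_n_probability_max(arrivals, n):
    """arrivals: list of (time, value) pairs in increasing time order."""
    if random.random() < 0.5:
        # Case 1: if Omega(sqrt(n)) reds exceed v*, a random element beats v*
        # with probability Omega(1/sqrt(n)).
        return random.choice(arrivals)[1]

    # Case 2: when Case 1 fails and v* arrives before t = 1/2, it is among the
    # top ~sqrt(n) prefix values; a uniform guess among them is correct w.p. 1/sqrt(n).
    k = max(1, math.isqrt(n))
    prefix = sorted((v for t, v in arrivals if t < 0.5), reverse=True)
    if not prefix:
        return None
    threshold = random.choice(prefix[:k])
    for t, v in arrivals:
        if t >= 0.5 and v >= threshold:
            return v                 # succeeds if the max green arrives after t = 1/2
    return None
```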

Approach Using the Minimax Principle
WLOG, assume we know the (correlated) arrival distribution: a general distribution over permutations of elements, i.e., over the values (order) of all of R ∪ G and the arrival times of the reds R.
Idea: as time t increases, we get a better idea of the permutation. For each e ∈ S_t, maintain p_e(t) = Pr[e is v*].
Approach: Done or Refine Scale. Use log n checkpoints T_i. Either some simple algorithm works, or refine at each checkpoint by discarding half of S_t; finally, randomly guess v* once |S_t| = O(1).

How to Refine
Let c_i ∈ S_{T_i} be the median of S_{T_i}, and call ∑_{e ∈ S_{T_i}, e below c_i} p_e(T_{i+1}) the confusion probability below c_i at T_{i+1}.
Case 1 ("large"): at T_{i+1}, the confusion probability below c_i is ≈ 1. Refine: S_{T_{i+1}} = elements of S_{T_i} below c_i.
Case 2 ("small"): at T_{i+1}, the confusion probability below c_i is < 1/log n. Refine: S_{T_{i+1}} = elements of S_{T_i} above c_i.
Otherwise ("medium"): at T_{i+1}, the confusion probability below c_i is in (1/log n, 1). Done: set c_i as a threshold and select the first element above it; with probability ≥ 1/log n, v* is below c_i, and moreover there exists a red element above c_i.
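A toy sketch of this refinement step (the sorted candidate list S, the probability table p, and the use of 1 − 1/log n as a stand-in for the "confusion probability ≈ 1" case are assumptions for illustration):

```python
# Split the confusion set at its median c_i and either discard the half that
# almost surely does not contain v* ("refine"), or, in the ambiguous case,
# commit to c_i as a threshold ("done").
import math

def refine_or_done(S, p, n):
    """S: candidates for v*, sorted by value; p: dict mapping e -> Pr[e is v*]."""
    c = S[len(S) // 2]                            # median candidate c_i
    prob_below = sum(p[e] for e in S if e < c)    # confusion probability below c_i

    if prob_below >= 1 - 1.0 / math.log(n):       # Case 1 ("large"): v* almost surely below c_i
        return "refine", [e for e in S if e < c]
    if prob_below < 1.0 / math.log(n):            # Case 2 ("small"): v* almost surely above c_i
        return "refine", [e for e in S if e >= c]
    return "done", c                              # "medium": commit to c_i as the threshold
```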

Wrapping Up: O((log n)^2) Approximation
Use log n checkpoints T_i.
Algorithm: guess a random interval i; with probability 1/2 run Dynkin's algorithm in (T_{i−1}, T_i); else set c_i as the threshold after refining up to that point.
Proof idea:
Done case: we "correctly" guess i w.p. 1/log n, and v* is below c_i w.p. 1/log n.
Refine-till-end case: we guess the final checkpoint, where |S| = O(1), and set a random element as the threshold. Q.E.D.

Outline
- Motivation: Secretary Problem in a Byzantine World
- Benchmark: Drop the Maximum!
- Value Maximization: O((log* n)^2) approx
- Probability Maximization: O((log n)^2) approx
- Other Results and Open Problems

How to Select Multiple Items?
Value maximization (sum of values): the arrival contains red elements R and green elements G.
What is the benchmark? The maximum sum of values in G ∖ {g_max}.
Thm [BGSZ'18]: For uniform matroids of rank k = Ω(log n) we get an O(1) approximation.
Thm [BGSZ'18]: For partition matroids we get an O((log log n)^2) approximation.
Remark: we get the max-green element in most parts.
Observation: it is easy to get an O(log n) approximation for general matroids.

Summary
Byzantine Secretary Model: compare to the 2nd max green.
- O((log* n)^2) approx for value maximization (Done or Refine Scale)
- O((log n)^2) approx for probability maximization (Minimax Principle)
- O(1) for uniform and O((log log n)^2) for partition matroids
Open Problems
- Super-constant lower bound? What are the optimal approximation factors?
- How to make the probability-maximization algorithm constructive?
- How to extend to general matroids and to general packing LPs?
Questions?

Uniform Matroids
Thm [BGSZ'18]: For rank k = Ω(log n) we get an O(1) approximation.

Partition Matroids
Thm [BGSZ'18]: For partition matroids we get an O((log log n)^2) approximation.