The Value of Knowing a Demand Curve: Regret Bounds for Online Posted-Price Auctions
Bobby Kleinberg and Tom Leighton
Introduction
How do we measure the value of knowing the demand curve for a good? Mathematical formulation: what is the difference in expected revenue between an informed seller who knows the demand curve and an uninformed seller using an adaptive pricing strategy, assuming both pursue the optimal strategy?
Online Posted-Price Auctions
One seller, n buyers, each wanting one item. Buyers interact with the seller one at a time. Each transaction: the seller posts a price, the buyer arrives, the buyer gives a YES/NO response, and the seller may update the price after each transaction.
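To make the transaction loop concrete, here is a minimal Python sketch of the interaction; the fixed-price seller and uniform buyer valuations are illustrative assumptions, not part of the model beyond what the slides state.

```python
import random

def run_auction(seller, buyers):
    """Online posted-price loop: for each buyer the seller posts a price,
    the buyer answers YES/NO, and the seller may update its price afterwards."""
    revenue = 0.0
    history = []                                # past (price, accepted) pairs
    for valuation in buyers:
        price = seller.post_price(history)      # seller sees only past YES/NO's
        accepted = valuation >= price           # buyer's YES/NO response
        if accepted:
            revenue += price
        history.append((price, accepted))
    return revenue

class FixedPriceSeller:
    """Illustrative seller that never updates its price."""
    def __init__(self, price):
        self.price = price
    def post_price(self, history):
        return self.price

# Example: 1,000 buyers with i.i.d. uniform valuations, fixed price 0.5.
buyers = [random.random() for _ in range(1000)]
print(run_auction(FixedPriceSeller(0.5), buyers))
```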
Online Posted-Price Auctions
A natural transaction model for many forms of commerce, including web commerce. (Our motivation came from ticketmaster.com.) Clearly strategyproof, since agents' strategic behavior is limited to their YES/NO responses.
Informed vs. Uninformed Sellers
[Worked example, built up over several slides: a table records each buyer's value, the informed seller's ask and revenue, and the uninformed seller's ask and revenue. In the example the uninformed seller collects 1.1 while the informed seller's fixed price collects 1.6, so the ex ante regret is 0.5; the best fixed price in hindsight collects 2.1, so the ex post regret is 1.0.]
Definition of Regret
Regret = the difference in expected revenue between the informed and the uninformed seller. Ex ante regret corresponds to asking, "What is the value of knowing the demand curve?" The competitive ratio was already considered by Blum, Kumar, et al. (SODA 2003), who exhibited a (1+ε)-competitive pricing strategy under a mild hypothesis on the informed seller's revenue.
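In symbols (the notation below is added for concreteness and is not from the slides), with v_1, …, v_n the buyers' valuations and D the demand curve:

```latex
\text{ex ante regret} \;=\; n\cdot\max_{x\in[0,1]} x\,D(x) \;-\; \mathbb{E}\!\left[\text{revenue of the strategy}\right],
\qquad
\text{ex post regret} \;=\; \mathbb{E}\!\left[\,\max_{x\in[0,1]} x\cdot\#\{\,i : v_i \ge x\,\} \;-\; \text{revenue of the strategy}\right].
```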
3 Problem Variants
Identical valuations: all buyers have the same threshold price v, which is unknown to the seller.
Random valuations: buyers are independent samples from a fixed probability distribution (demand curve), which is unknown to the seller.
Worst-case valuations: no assumptions on the buyers' valuations; they may be chosen by an oblivious adversary.
We always assume prices are between 0 and 1.
Regret Bounds for the Three Cases

Valuation model | Lower bound   | Upper bound
Identical       | Ω(log log n)  | O(log log n)
Random          | Ω(n^(1/2))    | O((n log n)^(1/2))
Worst-case      | Ω(n^(2/3))    | O(n^(2/3) (log n)^(1/3))

(Regret is measured ex ante in the random-valuations model and ex post in the worst-case model.)
Identical Valuations
Exponentially better than binary search! Equivalent to a question considered by Karp, Koutsoupias, Papadimitriou, and Shenker in the context of congestion control (KKPS, FOCS 2000). Our lower bound settles two of their open questions.
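As intuition for how O(log log n) is achievable, here is a rough Python sketch of a phased grid search in this spirit; the phase schedule and stopping rule are illustrative choices, not necessarily the paper's exact strategy.

```python
def sell_identical(respond, n):
    """Phased pricing sketch for n buyers sharing one unknown threshold v in [0, 1].

    respond(price) -> True iff the current buyer accepts, i.e. price <= v.
    Each phase sweeps a grid of spacing ~ width^2 upward until a rejection, so
    the uncertainty interval shrinks doubly exponentially; the cost of a phase
    is O(1), and only O(log log n) phases are needed before the interval has
    width about 1/n.
    """
    lo, hi = 0.0, 1.0                 # v is known to lie in [lo, hi]
    revenue, t = 0.0, 0
    while hi - lo > 1.0 / n and t < n:
        width = hi - lo
        step = min(width * width, width / 2.0)   # much finer grid each phase
        price = lo + step
        while price <= hi + 1e-12 and t < n:
            t += 1
            if respond(price):        # sale: v >= price
                revenue += price
                lo = price
                price += step
            else:                     # rejection: v < price, interval shrinks
                hi = price
                break
    while t < n:                      # charge the highest accepted price to the rest
        t += 1
        if respond(lo):
            revenue += lo
    return revenue

# Example: a common threshold drawn at random, unknown to the seller.
import random
v = random.random()
print(sell_identical(lambda p: p <= v, n=10_000), "vs. ideal", 10_000 * v)
```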
Random Valuations
Demand curve: D(x) = Pr(buyer accepts price x). [Figure: D(x) plotted over prices x in [0, 1].]
Best "Informed" Strategy
Expected revenue at price x: f(x) = x·D(x). If the demand curve is known, the best strategy is the fixed price maximizing the area x·D(x) of the rectangle inscribed under the curve. The best known uninformed strategy is based on the multi-armed bandit problem...
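A quick numerical illustration of the informed seller's computation, maximizing f(x) = x·D(x) over a price grid; the exponential demand curve here is only an assumed example, not from the talk.

```python
import numpy as np

def D(x):
    """Assumed example demand curve (not from the talk): Pr(accept price x)."""
    return np.exp(-2.0 * x)

xs = np.linspace(0.0, 1.0, 10_001)    # fine grid over the price range [0, 1]
f = xs * D(xs)                        # expected revenue per buyer, f(x) = x * D(x)
x_star = xs[np.argmax(f)]             # the informed seller's fixed posted price
print(f"best fixed price ~ {x_star:.3f}, expected revenue per buyer ~ {f.max():.3f}")
```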
The Multi-Armed Bandit Problem
You are in a casino with K slot machines. Each generates random payoffs by i.i.d. sampling from an unknown distribution. You choose a slot machine on each step and observe the payoff. Your expected payoff is compared with that of the best single slot machine. Assuming best play: ex ante regret = Θ(log n) [Lai-Robbins, 1986]; ex post regret = Θ(√n) [Auer et al., 1995]. The ex post bound applies even if the payoffs are adversarial rather than random (oblivious adversary).
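For reference, a minimal UCB1-style index policy of the kind used for stochastic bandits; this is a standard textbook sketch, not necessarily the precise algorithm behind the bounds above.

```python
import math, random

def ucb1(pulls, n):
    """UCB1-style index policy for K slot machines over n rounds.

    pulls[i]() returns a random payoff in [0, 1] for machine i.
    Each round, play the arm maximizing (empirical mean + sqrt(2 ln t / plays)).
    """
    K = len(pulls)
    counts = [0] * K
    sums = [0.0] * K
    total = 0.0
    for t in range(1, n + 1):
        if t <= K:
            arm = t - 1                            # play every arm once first
        else:
            arm = max(range(K), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        payoff = pulls[arm]()
        counts[arm] += 1
        sums[arm] += payoff
        total += payoff
    return total

# Example: three Bernoulli slot machines with unknown success probabilities.
machines = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
print(ucb1(machines, n=10_000))
```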
Application to Online Pricing
Our problem resembles a multi-armed bandit problem with a continuum of "slot machines", one for each price in [0,1]. Divide [0,1] into K subintervals and treat them as a finite set of slot machines. The existing bandit algorithms then have regret O(K^2 log n + n/K^2), provided xD(x) is smooth and has a unique global maximum in [0,1]. Optimizing K yields regret O((n log n)^(1/2)).
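A sketch of that reduction: discretize the price range into K arms and run an index policy over them, with K chosen to balance the two terms above, roughly K ≈ (n / log n)^(1/4). The demand curve in the example run and the plain UCB1-style index are illustrative assumptions.

```python
import math, random

def discretized_pricing(respond, n):
    """Bandit-style pricing sketch: treat K prices in (0, 1] as slot machines.

    respond(price) -> True iff the current buyer accepts that price.
    K balances the K^2 log n and n / K^2 terms, so K ~ (n / log n)^(1/4);
    the arm-selection rule is a plain UCB1-style index, used for illustration.
    """
    K = max(2, int(round((n / math.log(n)) ** 0.25)))
    prices = [(i + 1) / K for i in range(K)]       # arm i posts price (i+1)/K
    counts = [0] * K
    sums = [0.0] * K                               # revenue observed per arm
    revenue = 0.0
    for t in range(1, n + 1):
        if t <= K:
            arm = t - 1                            # try each candidate price once
        else:
            arm = max(range(K), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        p = prices[arm]
        reward = p if respond(p) else 0.0          # revenue earned from this buyer
        counts[arm] += 1
        sums[arm] += reward
        revenue += reward
    return revenue

# Example: i.i.d. buyers drawn from an assumed demand curve D(x) = 1 - x.
print(discretized_pricing(lambda p: random.random() < 1.0 - p, n=50_000))
```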
The Continuum-Armed Bandit
The continuum-armed bandit problem has algorithms with regret O(n^(3/4)) when the expected payoff depends smoothly on the action chosen. But the best known lower bound on regret was Ω(log n), coming from the finite-armed case. We prove: Ω(√n).

        | Finite-armed | Continuum-armed (2^ℵ₀ arms)
Ex ante | Θ(log n)     | Ω(√n), O(n^(3/4))
Ex post | Θ(√n)        | ?
Lower Bound: Decision Tree Setup
[Figure, built over several slides: the adaptive pricing strategy is drawn as a binary decision tree over posted prices (1/2 at the root, then 1/4 and 3/4, then 1/8, 3/8, 5/8, 7/8), alongside a table tracking the valuation v_i, the algorithm's revenue (ALG), the optimal revenue (OPT), and the accumulated regret.]
How not to prove a lower bound!
Natural idea: lower-bound the incremental regret at each level of the tree. If the regret at level j is Ω(j^(-1/2)), then the total regret after n steps would be Ω(√n), since 1 + √(1/2) + √(1/3) + … = Ω(√n). This is how lower bounds were proved for the finite-armed bandit problem, for example. The problem: if you only want to minimize the incremental regret at level j, you can typically make it O(1/j). Combining the per-level lower bounds then gives only the very weak bound Regret = Ω(log n), since 1 + 1/2 + 1/3 + … = Ω(log n).
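For completeness, the two sums behind these pictures compare as follows (standard integral bounds):

```latex
\sum_{j=1}^{n} \frac{1}{\sqrt{j}} \;\ge\; \int_{1}^{n+1}\frac{dx}{\sqrt{x}} \;=\; 2\sqrt{n+1}-2 \;=\; \Omega(\sqrt{n}),
\qquad\qquad
\sum_{j=1}^{n} \frac{1}{j} \;\le\; 1+\int_{1}^{n}\frac{dx}{x} \;=\; 1+\ln n \;=\; O(\log n).
```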
How to prove a lower bound
So instead a subtler approach is required: we must account for the cost of experimentation. We define a measure of knowledge, K_D, such that regret scales at least linearly with K_D. K_D = ω(√n) → TOO COSTLY; K_D = o(√n) → TOO RISKY.
Discussion of lower bound
Our lower bound doesn't rely on a contrived demand curve; in fact, we show that it holds for almost every demand curve satisfying some "generic" axioms (e.g. smoothness). The definition of K_D is quite subtle; this is the hard part of the proof. An ex post lower bound of Ω(√n) is easy; the difficulty lies solely in strengthening it to an ex ante lower bound.
Open Problems
Close the log-factor gaps in the random and worst-case models.
What if buyers have some control over the timing of their arrival? Can a temporally strategyproof mechanism have o(n) regret? [Parkes]
Investigate online posted-price combinatorial auctions, e.g. auctioning paths in a graph. [Hartline]