Slide 1: Incentive Compatible Regression Learning
Ofer Dekel, Felix A. Fischer and Ariel D. Procaccia
Slide 2: Lecture Outline
Until now: applications of learning to game theory. Now: the merge.
The model:
–Motivation
–The learning game
Three levels of generality:
–Distributions that are degenerate at one point
–Uniform distributions over the samples
–The general setting
(Roadmap: Model / Degenerate / Uniform / General)
Slide 3: Motivation
An Internet search company wants to improve performance by learning a ranking function from examples. The ranking function assigns a real value to every (query, answer) pair. The company employs experts to evaluate examples, but different experts may have different interests and different ideas of what a good output is:
Conflict → Manipulation → Bias in the training set.
Slide 4: Jaguar vs. Panthera onca ("Jaguar", jaguar.com)
Slide 5: Regression Learning
Input space X = R^k ((query, answer) pairs).
Function class F of functions f: X → R (ranking functions).
Target function o: X → R.
Distribution D over X.
Loss function ℓ(a,b):
–Absolute loss: ℓ(a,b) = |a − b|.
–Squared loss: ℓ(a,b) = (a − b)^2.
Learning process:
–Given: a training set S = {(x_i, o(x_i))}, i = 1,...,m, with each x_i sampled from D.
–Risk: R(h) = E_{x∼D}[ℓ(h(x), o(x))].
–Find: h ∈ F minimizing R(h).
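To make the ERM step concrete, here is a minimal sketch (Python; function names are illustrative, not from the talk) for the constant-function class used as the running example later: under absolute loss the empirical risk minimizer over constants is a median of the labels, and under squared loss it is their mean.

```python
import statistics

def erm_constant(labels, loss="absolute"):
    """Empirical risk minimizer over constant functions h(x) = c.

    For absolute loss a minimizer is a median of the labels;
    for squared loss it is the mean (standard facts about the
    one-dimensional location problem).
    """
    if loss == "absolute":
        return statistics.median(labels)
    if loss == "squared":
        return statistics.mean(labels)
    raise ValueError(loss)

def empirical_risk(c, labels, loss="absolute"):
    """Average loss of the constant c on the labeled sample."""
    if loss == "absolute":
        return sum(abs(c - y) for y in labels) / len(labels)
    return sum((c - y) ** 2 for y in labels) / len(labels)
```

For example, `erm_constant([0.0, 0.0, 1.0], "absolute")` returns the median 0.0, which attains a strictly smaller empirical risk than, say, the mean of the same labels.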
Slide 6: Our Setting
Input space X = R^k ((query, answer) pairs).
Function class F (ranking functions).
Set of players N = {1,...,n} (the experts).
Target functions o_i: X → R, one per player.
Distributions D_i over X, one per player.
Training set?
Slide 7: The Learning Game
Player i controls the points x_ij, j = 1,...,m, sampled w.r.t. D_i (common knowledge).
Private information of i: the true values o_i(x_ij) = y_ij, j = 1,...,m.
Strategies of i: the reported values y'_ij, j = 1,...,m.
h is obtained by learning on S = {(x_ij, y'_ij)}.
Cost of i: R_i(h) = E_{x∼D_i}[ℓ(h(x), o_i(x))].
Goal: social welfare (minimize the average player's cost).
Slide 8: Example: The Learning Game with ERM
Parameters: X = R, F = constant functions, ℓ(a,b) = |a − b|, N = {1, 2}, o_1(x) = 1, o_2(x) = 2, D_1 = D_2 = uniform distribution on [0, 1000].
Learning algorithm: Empirical Risk Minimization (ERM):
–Minimize R'(h,S) = (1/|S|) Σ_{(x,y)∈S} ℓ(h(x), y).
(Figure: the two constant target functions, at heights 1 and 2.)
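The slide's instance can be checked numerically. A small sketch (helper name is illustrative; one sample per player for simplicity) showing that every constant in [1, 2] attains the minimum empirical risk 0.5 under absolute loss, so ERM is indifferent between the two players' targets:

```python
def empirical_risk(c, labels):
    # Average absolute loss of the constant c on the sample labels.
    return sum(abs(c - y) for y in labels) / len(labels)

labels = [1.0, 2.0]  # player 1 reports value 1, player 2 reports value 2

# Every constant in [1, 2] attains the same minimum empirical risk 0.5:
risks = [empirical_risk(c, labels) for c in (1.0, 1.25, 1.5, 2.0)]

# Constants outside [1, 2] do strictly worse, e.g. c = 0.5:
risk_outside = empirical_risk(0.5, labels)
```

Here `risks` is `[0.5, 0.5, 0.5, 0.5]` while `risk_outside` is 1.0, matching the picture on the slide.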
Slide 9: Degenerate Distributions: ERM with Absolute Loss
The game:
–Players: N = {1,...,n}.
–D_i: degenerate at a single point x_i.
–Player i controls x_i.
–Private information of i: o_i(x_i) = y_i.
–Strategies of i: the reported value y'_i.
–Cost of i: R_i(h) = ℓ(h(x_i), y_i).
Theorem: If ℓ is the absolute loss and F is convex, then ERM is group incentive compatible.
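A brute-force sanity check of the theorem on a tiny instance (the values and the misreport grid are my own illustrative choices): with one degenerate point per player and median ERM over constants, no single player can lower their true cost by misreporting.

```python
import statistics

def erm_median(labels):
    # A minimizer of the empirical absolute loss over constant functions.
    return statistics.median(labels)

true_vals = [0.0, 0.4, 1.0]   # each player's single (degenerate) point
h_truth = erm_median(true_vals)

candidates = [x / 10 for x in range(-20, 31)]  # grid of possible misreports

def best_gain(i):
    """Largest cost reduction player i can achieve by a unilateral lie."""
    base = abs(h_truth - true_vals[i])
    gain = 0.0
    for lie in candidates:
        reports = list(true_vals)
        reports[i] = lie
        h = erm_median(reports)
        gain = max(gain, base - abs(h - true_vals[i]))
    return gain
```

On this instance `best_gain(i)` is 0 for every player: a lie can only move the median away from the liar's true value, which is the intuition behind the theorem (the full result also covers coalitions).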
Slide 10: ERM with Superlinear Loss
Theorem: If ℓ is "superlinear", F is convex, |F| ≥ 2, and F is not "full" on x_1,...,x_n, then there exist values y_1,...,y_n such that some player has an incentive to lie.
Example: X = R, F = constant functions, ℓ(a,b) = (a − b)^2, N = {1, 2}.
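The example can be worked out directly: squared-loss ERM over constants returns the mean of the reported labels, so a player can drag the mean toward their own target by exaggerating. A sketch with assumed values y_1 = 0, y_2 = 1:

```python
def erm_mean(labels):
    # Squared-loss ERM over constant functions is the mean of the labels.
    return sum(labels) / len(labels)

y1, y2 = 0.0, 1.0              # true values of players 1 and 2
h_truth = erm_mean([y1, y2])   # 0.5; player 1's true cost is 0.5^2 = 0.25
h_lie = erm_mean([-1.0, y2])   # player 1 exaggerates downward
cost_truth = (h_truth - y1) ** 2
cost_lie = (h_lie - y1) ** 2   # lying brings h to player 1's true value
```

The lie `-1.0` pulls the learned constant to 0, player 1's ideal point, cutting his cost from 0.25 to 0; this is exactly the manipulation the theorem predicts for superlinear losses.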
Slide 11: Uniform Distributions over the Samples
The game:
–Players: N = {1,...,n}.
–D_i: discrete uniform distribution on {x_i1,...,x_im}.
–Player i controls x_ij, j = 1,...,m.
–Private information of i: o_i(x_ij) = y_ij.
–Strategies of i: y'_ij, j = 1,...,m.
–Cost of i: R_i(h) = R'_i(h,S) = (1/m) Σ_j ℓ(h(x_ij), y_ij).
Slide 12: ERM with Absolute Loss Is Not IC
(Figure: counterexample with values 1 and 0.)
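A concrete instance of the failure (my own illustrative numbers, with leftmost-minimizer tie-breaking assumed for ERM): once a player controls several points with different values, he can gain by misreporting all of them as his "majority" value, dragging the median of the pooled sample.

```python
def erm_abs_leftmost(labels):
    """Leftmost minimizer of the empirical absolute loss over constants.

    With an even number 2t of labels, every point of [y_(t), y_(t+1)]
    is a minimizer; ties are broken by taking the left endpoint.
    """
    ys = sorted(labels)
    return ys[(len(ys) - 1) // 2]

def cost(h, true_labels):
    # Player's true cost: average absolute loss on his own points.
    return sum(abs(h - y) for y in true_labels) / len(true_labels)

p1_true = [0.0, 0.0, 1.0]     # player 1: uniform over three points
p2_true = [0.6, 0.6, 0.6]     # player 2: uniform over three points

h_truth = erm_abs_leftmost(p1_true + p2_true)        # 0.6
h_lie = erm_abs_leftmost([0.0, 0.0, 0.0] + p2_true)  # 0.0
```

Truthful reporting gives player 1 cost (0.6 + 0.6 + 0.4)/3 ≈ 0.533, while reporting all three points as 0 gives cost 1/3, so the misreport strictly helps; this is the kind of instance the slide's figure illustrates.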
Slide 13: VCG to the Rescue
Use ERM. Each player i pays Σ_{j≠i} R'_j(h,S).
Each player's total cost is R'_i(h,S) + Σ_{j≠i} R'_j(h,S) = Σ_j R'_j(h,S).
Truthful for any loss function.
But VCG has well-known faults:
–Not group incentive compatible.
–Payments are problematic in practice.
We would like (group) IC mechanisms without payments.
Slide 14: Mechanisms without Payments
Absolute loss. An α-approximation mechanism gives an α-approximation of the optimal social welfare.
Theorem (upper bound): There exists a group IC 3-approximation mechanism for constant functions over R^k and homogeneous linear functions over R.
Theorem (lower bound): There is no IC (3 − ε)-approximation mechanism for constant / homogeneous linear functions over R^k.
Conjecture: There is no IC mechanism with a bounded approximation ratio for homogeneous linear functions over R^k, k ≥ 2.
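One natural construction in the spirit of the upper bound, for constant functions (a sketch under my own assumptions; the authors' exact mechanism may differ in details): project each player onto his own best constant, then take the median of these ideal points. Each player's true cost is a convex, hence unimodal, function of the chosen constant, so a median-of-ideal-points rule gives no player an incentive to misreport.

```python
import statistics

def project_and_median(reported_labels_per_player):
    """Sketch of a 'project, then aggregate' mechanism for constants.

    1. Replace each player's labels by that player's own best constant
       (a median of his labels) -- his ideal point.
    2. Return the median of these per-player ideal points.

    The outcome depends on each report only through the ideal point,
    and each player's true cost is unimodal in the output, so a player
    cannot move the median toward his ideal point by lying.
    """
    ideals = [statistics.median(labels)
              for labels in reported_labels_per_player]
    return statistics.median(ideals)
```

For example, with reports `[[0.0, 0.0, 1.0], [0.6, 0.6, 0.6], [1.0, 1.0, 1.0]]` the ideal points are 0.0, 0.6, 1.0 and the mechanism outputs 0.6, paying no attention to how a player distributes labels around his own optimum.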
Slide 15: Proof of Lower Bound
(Figure: construction with point multiplicities k − 1 and k, at values 0, 1, 2, 3 and 1 − ε, 2 − ε, 3 − ε.)
Slide 16: Proof of Lower Bound (continued)
(Figure: second step of the same construction.)
Slide 17: Generalization
Theorem: If for every f ∈ F,
–(1) for all i, |R'_i(f,S) − R_i(f)| ≤ ε/2, and
–(2) |R'(f,S) − (1/n) Σ_i R_i(f)| ≤ ε/2,
then:
–(Group) IC in the uniform setting ⇒ ε-(group) IC in the general setting.
–α-approximation in the uniform setting ⇒ α-approximation up to an additive ε in the general setting.
If F has bounded complexity and m = Ω(log(1/δ)/ε^2), then condition (1) holds with probability 1 − δ. Condition (2) follows if (1) holds for all i simultaneously. Taking δ/n per player adds a factor of log n.
Slide 18: Discussion
Given m large enough, with probability 1 − δ, VCG is ε-truthful. This holds for any loss function.
Given m large enough, under absolute loss there is a mechanism without payments that is ε-group IC and a 3-approximation for constant functions and homogeneous linear functions.
Most important direction for future work: extending the results to other models of learning, such as classification.