1
ValuePick: Towards a Value-Oriented Dual-Goal Recommender System
Leman Akoglu, Christos Faloutsos
OEDM Workshop, in conjunction with ICDM 2010, Sydney, Australia
2
Recommender Systems
Traditional recommender systems try to achieve high user satisfaction.
3
Dual-goal Recommender Systems
Dual-goal recommender systems try to achieve (1) high user satisfaction as well as (2) high-"value" vendor gain.
Trade-off: user satisfaction vs. vendor profit
4-6
Dual-goal Recommender Systems
[Figure, built across three slides: starting from a query vertex, candidate vertices (v253, v162, v261, v327, ...) are ranked by proximity, and each candidate is then annotated with its network-"value".]
Trade-off: user satisfaction vs. network connectivity
7
Dual-goal Recommender Systems
Main concerns:
- Vendor: we cannot always make the highest-value recommendations.
- User: recommendations should still reflect users' likes relatively well.
8
ValuePick: Main Idea
Carefully perturb (change the order of) the proximity-ranked list of recommendations, controlled by a tolerance ζ for each user.
9
ValuePick Optimization Framework (DETAILS)
Maximize the total expected gain (assuming proximity ~ acceptance probability):

  maximize    Σ_i x_i · p_i · v_i
  subject to  (1/k) Σ_i x_i · p_i ≥ (1 − ζ) · p̄,   Σ_i x_i = k,   x_i ∈ {0, 1}

where p_i is the proximity and v_i the "value" of candidate i, ζ ∈ [0, 1] is the per-user tolerance, and p̄ is the average proximity score of the original top-k.
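To make the formulation concrete, here is a minimal brute-force sketch in Python (ours, not the authors' CPLEX code); the constraint form follows the reconstruction above, and the candidate proximities and values are hypothetical inputs.

```python
from itertools import combinations

def valuepick(prox, value, k, zeta):
    """Brute-force sketch of the (assumed) ValuePick selection:
    maximize total expected gain sum(p_i * v_i) over k chosen candidates,
    keeping their average proximity >= (1 - zeta) * average proximity
    of the original proximity-ranked top-k."""
    n = len(prox)
    # Average proximity of the original top-k (candidates assumed
    # sorted by proximity in descending order).
    base_avg = sum(prox[:k]) / k
    best_set, best_gain = None, float("-inf")
    for subset in combinations(range(n), k):
        avg_prox = sum(prox[i] for i in subset) / k
        if avg_prox < (1 - zeta) * base_avg:
            continue  # violates the user-satisfaction tolerance
        gain = sum(prox[i] * value[i] for i in subset)  # expected gain
        if gain > best_gain:
            best_set, best_gain = subset, gain
    return best_set, best_gain

# Hypothetical top-6 candidates, sorted by proximity:
prox  = [0.30, 0.25, 0.20, 0.15, 0.12, 0.10]
value = [0.10, 0.40, 0.20, 0.90, 0.30, 0.80]
print(valuepick(prox, value, k=3, zeta=0.02))  # stays with the original top-3
print(valuepick(prox, value, k=3, zeta=0.20))  # tolerates a value-driven swap
```

With a tiny tolerance the original top-3 is the only feasible set; loosening ζ lets a high-value candidate displace a marginal one, which is exactly the perturbation the main idea describes.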
10
ValuePick ~ 0-1 Knapsack (DETAILS)
0-1 knapsack: maximize the total value Σ_i x_i · v_i subject to Σ_i x_i · w_i ≤ W, where v_i is the value and w_i the weight of item i, and W is the maximum allowed weight.
We use CPLEX to solve our integer-programming optimization problem.
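For reference, a standard dynamic-programming solution to the 0-1 knapsack (with integer weights) looks like the sketch below; it is included only to illustrate the analogy, not the CPLEX formulation the authors use.

```python
def knapsack_01(values, weights, W):
    """Classic 0-1 knapsack DP: best[w] holds the maximum total value
    achievable with total weight <= w. Assumes integer weights."""
    best = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # Iterate weights downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]

# Hypothetical items: values, integer weights, capacity W = 8.
print(knapsack_01([6, 10, 12], [1, 2, 3], W=8))  # -> 28 (all items fit)
```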
11
Pros and Cons of ValuePick
Pros:
- The tolerance ζ can flexibly (and even dynamically) control the 'level of adjustment'.
- Users rate the same item differently at different times, i.e., users have natural variability in their decisions, which leaves room for adjustment.
Cons:
- In marketing, it is hard to predict the effect of an intervention in the marketing scheme, i.e., it is not clear how users will respond to 'adjustments'.
12
Experimental Setup I
Two real networks:
- Netscience: collaboration network
- DBLP: co-authorship network
Four recommendation schemes ("value" is centrality):
1) No Gain Optimization (ζ = 0)
2) ValuePick (ζ = 0.01, ζ = 0.02)
3) Max Gain Optimization (ζ = 1)
4) Random
13
Experimental Setup II
Simulation steps, given a recommendation scheme s:
At each step T:
  For each node i:
  - Make a set K of recommendations to node i using s.
  - Node i links to each node j ∈ K with probability proximity(i, j).
  Re-compute proximity and centrality scores.
We use k = 5 and T = 30.
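A minimal simulation sketch along these lines, using networkx (our tooling choice, not necessarily the authors'); personalized PageRank stands in for the proximity measure and degree centrality for the "value", both assumptions on our part.

```python
import random
import networkx as nx

def simulate(G, scheme, k=5, T=30, seed=0):
    """Run T rounds: each node gets k recommendations from `scheme`
    and accepts each candidate j with probability proximity(i, j).
    Recomputing PPR per node per round is costly; fine for a toy graph."""
    rng = random.Random(seed)
    for _ in range(T):
        centrality = nx.degree_centrality(G)  # stand-in "value"
        new_edges = []
        for i in G.nodes():
            # Personalized PageRank from i as the proximity measure.
            prox = nx.pagerank(G, personalization={i: 1.0})
            candidates = [j for j in G.nodes()
                          if j != i and not G.has_edge(i, j)]
            recs = scheme(i, candidates, prox, centrality, k)
            for j in recs:
                if rng.random() < prox[j]:  # proximity ~ acceptance prob.
                    new_edges.append((i, j))
        G.add_edges_from(new_edges)  # scores refresh at the next round
    return G

def no_gain_scheme(i, candidates, prox, centrality, k):
    """Baseline (zeta = 0): plain top-k by proximity; `centrality`
    would only matter to a value-aware scheme."""
    return sorted(candidates, key=lambda j: prox[j], reverse=True)[:k]

G = nx.karate_club_graph()
simulate(G, no_gain_scheme, k=5, T=3)
print(G.number_of_edges())
```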
14
Comparison of schemes (EXPERIMENTS)
ValuePick provides a balance between user satisfaction (high E) and vendor gain (small diameter).
15
Recommend by heuristic (EXPERIMENTS)
Simple perturbation heuristics do not balance user satisfaction and vendor gain properly.
16
Computational complexity (EXPERIMENTS)
Making k ValuePick recommendations to a given node involves:
1) finding PPR (Personalized PageRank) scores: O(#edges)
2) solving the ValuePick optimization with CPLEX: ~1/10 sec. among the top 1K nodes
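For intuition on step 1, here is a power-iteration sketch of personalized PageRank over an adjacency list; each iteration touches every edge once, hence the O(#edges) per-iteration cost. The restart probability and iteration count are illustrative choices.

```python
def personalized_pagerank(adj, source, restart=0.15, iters=50):
    """Power iteration for PPR from `source` over an adjacency list
    {node: [neighbors]}. Each iteration costs O(#edges)."""
    nodes = list(adj)
    scores = {u: 0.0 for u in nodes}
    scores[source] = 1.0
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        for u in nodes:
            share = (1 - restart) * scores[u] / max(len(adj[u]), 1)
            for v in adj[u]:
                nxt[v] += share  # walk mass spread over out-edges
        nxt[source] += restart  # restart mass returns to the source
        scores = nxt
    return scores

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
ppr = personalized_pagerank(adj, source=0)
print(sorted(ppr, key=ppr.get, reverse=True))  # nodes by proximity to 0
```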
17
Conclusions
- Problem formulation: incorporate the "value" of recommendations into the system.
- Design of ValuePick:
  - parsimonious: a single parameter ζ
  - flexible: ζ can be adjusted for each user dynamically
  - general: any "value" metric can be used
- Performance study: experiments show a proper trade of user acceptance in exchange for higher gain; CPLEX provides fast solutions.
18
THANK YOU
www.cs.cmu.edu/~lakoglu
lakoglu@cs.cmu.edu