Following the User’s Interest in Context-Based Recommender Systems

1 Following the User’s Interest in Context-Based Recommender Systems
Djallel Bouneffouf, SAMOVAR (CNRS), ACMES team. PhD supervisor: Alda Lopes Gançarski. PhD supervisor/director: Amel Bouzeghoub. Company manager: Fabrice Jarry. 28 March 2012.

2 Outline: Introduction, State of the art, Proposition, Experimental evaluation, Conclusion

3 Outline: Introduction, State of the art, Proposition, Experimental evaluation, Conclusion

4 Access and navigation into the corporate data
Software editor. First generic mobile application connecting to heterogeneous information systems.

5 Mobile information systems: context

6 Mobile information systems: context
A context-based recommender system is used to reduce search and navigation time and to assist users in finding information.

7 Problems in context-based recommender systems
How to recommend information to users while taking into account their surrounding environment (location, time, nearby people)? How to follow the evolution of the user's interest?
Item inventory: articles, web pages, documents, etc. Context: location, time, etc. The contextual recommender system algorithm selects item(s) to show, gets feedback (clicks, time spent, etc.), refines its models, and repeats this a large number of times while optimizing metrics of interest (total number of clicks, total revenue, etc.), as sketched below.
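As an illustration of this select/feedback/refine loop (not part of the original slides), here is a minimal Python sketch; the class name, the selection rule and the simulated feedback are hypothetical placeholders:

import random

class ContextualRecommender:
    """Minimal sketch of the select -> feedback -> refine loop (hypothetical names)."""

    def __init__(self, items):
        # Per-item counters used to estimate the click-through rate (CTR).
        self.displays = {item: 0 for item in items}
        self.clicks = {item: 0 for item in items}

    def ctr(self, item):
        # Estimated CTR; 0 when the item has never been shown.
        return self.clicks[item] / self.displays[item] if self.displays[item] else 0.0

    def select(self, context):
        # Placeholder policy: the context is ignored here; a real contextual
        # algorithm would condition the choice on it.
        return max(self.displays, key=self.ctr)

    def refine(self, item, clicked):
        # Refine the model with the observed feedback.
        self.displays[item] += 1
        if clicked:
            self.clicks[item] += 1

recommender = ContextualRecommender(["D1", "D2", "D3"])
for _ in range(100):                                # repeated a large number of times
    context = {"location": "office", "time": "workday"}
    item = recommender.select(context)
    clicked = random.random() < 0.3                 # simulated user feedback (click)
    recommender.refine(item, clicked)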

8 Outline: Introduction, State of the art, Proposition, Experimental evaluation, Conclusion

9 The user or the expert specification
Advantage: context management. Constraints: laborious; not a dynamic system; not a personalized system.
References: [Panayiotou, 2006], [Bila, 2008], [Bellotti, 2008], [Dobson, 2005], [Lakshmish, 2009], [Alexandre de Spindler, 2006], [Mieczysław, 2009], [Wei, 2010], [Lihong, 2010].

10 Content-based and collaborative filtering
(Slide shows a dataset table with columns situation, action, reward and social group, over situations such as meeting, home, drive and office.)
Advantages: context management; automatic process. Constraints: cold start problem; slow training.
References: [Panayiotou, 2006], [Bila, 2008], [Bellotti, 2008], [Dobson, 2005], [Lakshmish, 2009], [Alexandre de Spindler, 2006], [Mieczysław, 2009], [Wei, 2010], [Lihong, 2010].

11 Machine learning: reinforcement learning
(Slide shows a table of documents D1 to D10 with their display and click counts, e.g. 12 displays and 7, 5, 2, 1 clicks, and compares an exploration run (mean reward 0.48) with an exploitation run (mean reward 0.79).)
Advantages: solves the cold start problem; automatic process; follows the evolution of the user's interest. Constraints: no context management; slow training.
The greedy strategy does exploitation only; the ε-greedy strategy adds some random actions (exploration).
References: [Panayiotou, 2006], [Bellotti, 2008], [Bila, 2008], [Dobson, 2005], [Lakshmish, 2009], [Alexandre de Spindler, 2006], [Mieczysław, 2009], [Wei, 2010], [Lihong, 2010].
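To make the two strategies concrete, a small Python sketch (not from the slides); the display/click counts below are hypothetical, and setting ε = 0 recovers the pure greedy (exploitation-only) strategy:

import random

def epsilon_greedy(displays, clicks, epsilon=0.1):
    """Pick a document: explore at random with probability epsilon, otherwise exploit."""
    docs = list(displays)
    if random.random() < epsilon:
        return random.choice(docs)                        # exploration: random document
    def ctr(d):                                           # estimated click-through rate
        return clicks[d] / displays[d] if displays[d] else 0.0
    return max(docs, key=ctr)                             # exploitation: best estimated CTR

# Hypothetical counts, loosely inspired by the displays/clicks table on the slide.
displays = {"D1": 12, "D2": 12, "D3": 12, "D4": 12}
clicks = {"D1": 7, "D2": 5, "D3": 2, "D4": 1}
print(epsilon_greedy(displays, clicks, epsilon=0.1))      # usually D1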

12 State of the art
Summary of the learning-profile approaches, compared on context management, semantic context representation, automatic processing, following the evolution of the user's interest, and solving the cold start:
The user specifies his behavior: handles context, but laborious.
The expert specifies the user's behavior: handles context, but not a personalized system and not a dynamic system.
Content-based and collaborative filtering: automatic process, but slow learning and cold start problem.
Reinforcement learning: automatic process, follows the evolution of the user's interest and solves the cold start, but no context management.
References: [Panayiotou, 2006], [Bila, 2008], [Bellotti, 2008], [Dobson, 2005], [Lakshmish, 2009], [Alexandre de Spindler, 2006], [Mieczysław, 2009], [Wei, 2010], [Lihong, 2010].

13 Outline: Introduction, State of the art, Proposition, Experimental evaluation, Conclusion

14 Multi-armed bandits (MAB)
Recommender system mapping: arms → documents, rewards → clicks.
A (basic) MAB problem has: a set D of possibilities (arms) and an expected reward CTR(d) ∈ [0,1] for each d ∈ D.
In each round, the algorithm picks an arm d ∈ D based on past history; the reward is an independent sample in [0,1] with expectation CTR(d).
This is the classical setting that models the exploration/exploitation trade-off.
(Slide shows a table of documents D1 to D10 with example CTR values 0.6, 0.4, 0.3, 0.5.)
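As a sketch of the round structure (illustrative only; the CTR numbers are the example values from the slide), each round displays a document and samples a click with probability CTR(d):

import random

true_ctr = {"D1": 0.6, "D2": 0.4, "D3": 0.3, "D4": 0.5}   # example CTR values from the slide
displays = {d: 0 for d in true_ctr}
clicks = {d: 0 for d in true_ctr}

def pull(arm):
    """One round: show the document and sample a click with probability CTR(arm)."""
    reward = 1 if random.random() < true_ctr[arm] else 0
    displays[arm] += 1
    clicks[arm] += reward
    return reward

for _ in range(1000):
    pull(random.choice(list(true_ctr)))                    # uniform play, just to estimate CTRs

for d in true_ctr:
    print(d, clicks[d] / displays[d])                      # empirical CTR approaches true CTR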

15 Contextual bandits: context-based recommender system
Mapping: contexts → user's situations, arms → documents, rewards → clicks.
X is a set of situations, D is a set of arms, and CTR: X × D → [0,1] gives the expected rewards.
In each round: a situation x ∈ X arrives; the algorithm picks an arm d ∈ D; the reward is an independent sample in [0,1] with expectation CTR(x, d).
(Slide shows example situations x1, x2, x3, drawn from meeting, home, drive and office, each with its own CTR table over documents D1 to D10, e.g. 0.2/0.4/0.3 for x1, 0.6/0.4/0.3/0.5 for x2 and 0.2/0.1/0.3/0.7 for x3.)
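A minimal Python sketch of the contextual setting (illustrative; the CTR(x, d) values are hypothetical): the expected reward now depends on both the situation and the chosen document:

import random

# Hypothetical CTR(x, d): the expected reward depends on the situation AND the document.
ctr = {
    ("meeting", "D1"): 0.2, ("meeting", "D2"): 0.4, ("meeting", "D3"): 0.3,
    ("office", "D1"): 0.2, ("office", "D2"): 0.1, ("office", "D3"): 0.7,
}

def play_round(situation, documents):
    """One round: a situation arrives, the best-known document is picked, a click is sampled."""
    d = max(documents, key=lambda doc: ctr[(situation, doc)])     # oracle choice, for illustration
    reward = 1 if random.random() < ctr[(situation, d)] else 0    # click ~ Bernoulli(CTR(x, d))
    return d, reward

print(play_round("meeting", ["D1", "D2", "D3"]))   # picks D2, clicked with probability 0.4
print(play_round("office", ["D1", "D2", "D3"]))    # picks D3, clicked with probability 0.7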

16 Get situation from context: sensing
Example raw context entry: time "Mon Oct 3 12:10:…", GPS coordinates "…, …", and the social tag "NATIXIS".

17 Get situation from context: thinking (abstraction)
The raw context (time "Mon Oct 3 12:10:…", GPS coordinates "…, …", "NATIXIS") is abstracted through a time ontology, a location ontology and a social ontology into a situation, e.g. x1 = (time: workday, location: Paris, social: Bank).
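A toy Python sketch of this abstraction step (not the authors' implementation; the ontologies are mocked as plain dictionaries and the mappings are assumptions):

from datetime import datetime

# Mock "ontologies": raw values mapped to more abstract concepts (hypothetical entries).
LOCATION_ONTOLOGY = {(48.87, 2.33): "Paris"}      # GPS coordinates -> city
SOCIAL_ONTOLOGY = {"NATIXIS": "Bank"}             # company name -> category

def abstract_time(raw):
    # Time ontology stand-in: classify the timestamp as workday or weekend.
    dt = datetime.strptime(raw, "%a %b %d %H:%M %Y")
    return "workday" if dt.weekday() < 5 else "weekend"

def get_situation(raw_time, gps, company):
    return {
        "time": abstract_time(raw_time),
        "location": LOCATION_ONTOLOGY.get(gps, "unknown"),
        "social": SOCIAL_ONTOLOGY.get(company, "unknown"),
    }

# x1 = (workday, Paris, Bank)
print(get_situation("Mon Oct 03 12:10 2011", (48.87, 2.33), "NATIXIS"))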

18 Get situation from Context Retrieving the relevant situation
IDS Users Time Place Client 1 Paul Workday Paris NATIXIS RetrieveSituation IDS Users Time Place Client 1 Paul 11/05/2011 Paris BNP IDS Users Time Place Client 1 Paul Workday Paris BNP 2 Fabrice Holyday Evry MGET 3 Gentilly AMUNDI Location Ontology Time Ontology Social Ontology
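A sketch of this retrieval step (a simplification: exact-match similarity per dimension with equal weights stands in for the ontology-based semantic similarity; all names are illustrative):

def similarity(current, past):
    """Fraction of matching dimensions; a stand-in for ontology-based similarity."""
    keys = ("time", "place", "client")
    return sum(current[k] == past[k] for k in keys) / len(keys)

def retrieve_situation(current, past_situations):
    """RetrieveSituation: return the stored situation most similar to the current one."""
    return max(past_situations, key=lambda s: similarity(current, s))

current = {"user": "Paul", "time": "Workday", "place": "Paris", "client": "NATIXIS"}
past = [
    {"ids": 1, "user": "Paul", "time": "Workday", "place": "Paris", "client": "BNP"},
    {"ids": 2, "user": "Fabrice", "time": "Holiday", "place": "Evry", "client": "MGET"},
    {"ids": 3, "user": None, "time": None, "place": "Gentilly", "client": "AMUNDI"},
]
print(retrieve_situation(current, past))   # -> situation 1 (same time and place)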

19 Select documents: CBR-ε-greedy and hybrid-ε-greedy
We propose a rich semantic representation of the user's interaction situations as concepts from social, location and time ontologies, with their corresponding user's interests.
To follow the evolution of the user's interest, we use case-based reasoning techniques to detect the user's situation and then apply an ε-greedy strategy improved by content-based filtering.
CBF(d) returns the documents similar to document d; ε is the probability of exploration.
CBR-ε-greedy: dt = argmax_d(CTR(d)) with probability 1−ε, or Random(D) with probability ε.
Hybrid-ε-greedy: dt = argmax_d(CTR(d)) with probability 1−ε, CBF(d) with probability z, or Random(D) with probability k, with ε = z + k.
(Slide shows a documents table d1 to d10 with example CTR values 0.6, 0.2, 0.4.)
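A Python sketch of the two selection rules above (the CTR estimates and the CBF similarity lists are hypothetical placeholders):

import random

def hybrid_epsilon_greedy(ctr, cbf, z=0.05, k=0.05):
    """Exploit argmax_d CTR(d) with probability 1 - (z + k); explore a document
    similar to the best one (CBF) with probability z; explore a random document
    with probability k. Here epsilon = z + k."""
    best = max(ctr, key=ctr.get)                 # argmax_d CTR(d)
    r = random.random()
    if r < z:
        return random.choice(cbf(best))          # content-based exploration
    if r < z + k:
        return random.choice(list(ctr))          # random exploration
    return best                                  # exploitation

# Hypothetical CTR estimates and a mocked content-based filtering function.
ctr = {"d1": 0.6, "d2": 0.2, "d3": 0.4}
def cbf(d):
    similar = {"d1": ["d3"], "d2": ["d1"], "d3": ["d1"]}
    return similar[d]                            # documents similar to d

print(hybrid_epsilon_greedy(ctr, cbf))           # usually d1

Setting z = 0 (so ε = k) removes the content-based exploration and recovers CBR-ε-greedy, which explores purely at random.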

20 Outline: Introduction, State of the art, Proposition, Experimental evaluation, Conclusion

21 Experimental datasets
Diary situation entries (IDS, user, time, place, client), e.g. (1, Paul, 11/05/2011, Paris, AFNOR), (2, Fabrice, 15/05/2011, Evry, MGET), (3, …, 19/05/2011, Gentilly, AMUNDI).
Diary navigation entries (IdDoc, IDS, click, time, interest, document), e.g. (1, 2, …, 2 min, 3/5, Demand), (…, 3, …, 3 min, 1/5, Contact), (…, …, …, 50 sec, null, Person).

22 ε variation on learning and on deployment
We randomly divided the set of entries into two subsets: a learning subset, used to learn and estimate the documents' CTR, and a deployment subset, where the system greedily recommends documents to users using the CTR estimates. ε is the probability of exploration; the slide plots how recommending documents behaves as ε varies on learning and on deployment.
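A sketch of this learning/deployment protocol (illustrative; the entry format and the replay-style counting are assumptions, not the thesis' exact procedure):

import random
from collections import defaultdict

def deployment_click_rate(entries, epsilon, learn_fraction=0.5):
    """entries: logged (document, clicked) pairs.
    Learning subset: run epsilon-greedy to estimate each document's CTR
    (replay-style: a logged entry gives feedback only when the policy picks
    the same document). Deployment subset: recommend greedily and score it."""
    random.shuffle(entries)
    split = int(len(entries) * learn_fraction)
    learning, deployment = entries[:split], entries[split:]

    displays, clicks = defaultdict(int), defaultdict(int)
    docs = sorted({doc for doc, _ in entries})

    def estimated_ctr(d):
        return clicks[d] / displays[d] if displays[d] else 0.0

    for doc, clicked in learning:
        explore = random.random() < epsilon
        choice = random.choice(docs) if explore else max(docs, key=estimated_ctr)
        if choice == doc:                        # only matching rounds give feedback
            displays[doc] += 1
            clicks[doc] += int(clicked)

    best = max(docs, key=estimated_ctr)          # greedy deployment using learned estimates
    matched = [clicked for doc, clicked in deployment if doc == best]
    return sum(matched) / len(matched) if matched else 0.0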

23 Data size variation on learning and on deployment
(Slide shows plots of how recommending documents behaves as the data size varies on learning and on deployment.)

24 Conclusion
Our experiments lead to the conclusion that considering the user's context in the exploration/exploitation strategy significantly increases the performance of the recommender system.
In the future, we plan to investigate methods that automatically learn the optimal exploration/exploitation trade-off.

