WSRec: A Collaborative Filtering Based Web Service Recommender System
Zibin Zheng, Hao Ma, Michael R. Lyu, and Irwin King Department of Computer Science & Engineering The Chinese University of Hong Kong Hong Kong, China ICWS 2009, Los Angeles, CA, US, July 6-10, 2009
Outline
1. Introduction
2. System Architecture
3. Recommendation Algorithm
4. Implementation and Experiments
5. Conclusion and Future Work
1. Introduction
1.1 Web Services
Self-describing
Loosely coupled
Highly dynamic
Cross-domain
Compositional in nature
1.2 Web Service Selection
Target: determining the optimal Web service from a set of functionally equivalent service candidates.
Method 1: evaluate all the candidates to obtain their QoS performance.
Weaknesses of Method 1:
Expensive: requires a large number of Web service invocations.
Time-consuming: a large number of candidates to evaluate.
Inaccurate: users are not experts in WS evaluation.
1.3 Collaborative Filtering for WS
Method 2: a collaborative filtering algorithm
Predicts Web service QoS performance by employing the QoS experience of similar users.
Advantages:
Fewer WS invocations: no need to evaluate all the candidates.
Web service recommendation: discover potentially useful Web services.
User recommendation: discover potential service users.
Challenges:
How to obtain WS QoS data from different users?
How to refine the traditional collaborative filtering algorithm for Web services?
How to verify the QoS prediction results?
1.4 Contributions
A systematic user-collaborative mechanism for sharing QoS data of Web services.
An effective hybrid collaborative filtering algorithm for Web service QoS value prediction.
A large-scale real-world experiment:
150 service users in more than 20 countries.
100 real-world Web services.
1.5 million Web service invocations.
2. System Architecture
2.1 System Architecture
Movie recommendation: the movie ratings of users can be obtained from a commercial or experimental Web system such as Amazon or MovieLens.
Web service recommendation: Web services are distributed over the Internet, and service users are usually isolated from each other. The current Web service architecture does not provide any mechanism for sharing Web service QoS data.
How can Web service QoS data be obtained from different service users?
2.1 System Architecture
Input: QoS data of some Web services.
Output: QoS predictions for all the Web services.
YouTube: sharing videos. Wikipedia: sharing knowledge. WSRec: sharing QoS data of Web services.
3. Recommendation Algorithm
3.1 Similarity Computation
User-item matrix: M×N, where each entry is a QoS vector.
Pearson Correlation Coefficient (PCC), Equation 1: the PCC similarity between user a and user u, computed over their co-invoked Web services.
Example from the slide's matrix figure: the similarity between users u2 and u4 equals 1.
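A minimal sketch of the PCC similarity from Equation 1, computed over the services two users have both invoked (the dictionary-based representation and function name are illustrative, not from the paper):

```python
import math

def pcc_similarity(qos_a, qos_u):
    """PCC similarity between users a and u over co-invoked services.

    qos_a, qos_u: dicts mapping service id -> observed QoS value
    (e.g. response time). Means are taken over the co-invoked subset.
    """
    common = set(qos_a) & set(qos_u)          # co-invoked services
    if len(common) < 2:
        return 0.0                            # too little overlap to compare
    mean_a = sum(qos_a[i] for i in common) / len(common)
    mean_u = sum(qos_u[i] for i in common) / len(common)
    num = sum((qos_a[i] - mean_a) * (qos_u[i] - mean_u) for i in common)
    den = (math.sqrt(sum((qos_a[i] - mean_a) ** 2 for i in common)) *
           math.sqrt(sum((qos_u[i] - mean_u) ** 2 for i in common)))
    return num / den if den else 0.0
```

Two users whose QoS values rise and fall together get similarity close to 1, as in the u2/u4 example above.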
3.2 Similarity Significance Weight
PCC often overestimates the similarity of service users who are not actually similar but happen to have similar QoS experience on a few co-invoked Web services.
The similarity significance weight reduces the influence of a small number of co-invoked items.
Are these two users really similar?
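One common form of such a significance weight scales the PCC by how much of the two users' invocation histories overlap; treat the exact formula below as an assumption about the paper's enhanced PCC:

```python
def significant_similarity(sim, invoked_a, invoked_u):
    """Devalue a PCC value when two users share few co-invoked services.

    sim: raw PCC similarity; invoked_a, invoked_u: sets of service ids
    each user has invoked. The weight 2|Ia ∩ Iu| / (|Ia| + |Iu|) is a
    sketch of one standard choice, not necessarily WSRec's exact form.
    """
    common = len(invoked_a & invoked_u)
    total = len(invoked_a) + len(invoked_u)
    if total == 0:
        return 0.0
    return (2 * common / total) * sim
```

With only 2 shared services out of 8 total, a deceptively perfect PCC of 1.0 is cut down to 0.5.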
3.3 Similar Neighbors Selection
For every entry r(u,i) in the matrix, a set of similar users S(u) for user u and a set of similar items S(i) for item i can be found, where T(u) is the set of the Top-K most similar users to user u, and T(i) is the set of the Top-K most similar items to item i.
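The user-side selection can be sketched as taking the Top-K candidates and keeping only those with positive similarity (the exclusion of non-positive similarities is an assumption about the definition of S(u); all names are illustrative):

```python
def similar_users(target, all_sims, k):
    """Select S(u): Top-K users by similarity to the target, positives only.

    all_sims: dict user_id -> (enhanced) similarity to the target user.
    Dissimilar users (similarity <= 0) are dropped even if they fall
    inside the Top K, since they would harm the prediction.
    """
    top_k = sorted((u for u in all_sims if u != target),
                   key=lambda u: all_sims[u], reverse=True)[:k]
    return [u for u in top_k if all_sims[u] > 0]
```

The item-side set S(i) is built symmetrically from the Top-K most similar items.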
3.4 Missing Value Prediction
Given a missing value r(u,i):
If S(u) ≠ null and S(i) ≠ null, the missing QoS value is predicted by combining the user-based and item-based estimates.
If S(u) = null and S(i) = null, no similar neighbors are available and the similarity-based prediction cannot be applied.
Given a missing value r(u,i), if S(u) ≠ null and S(i) = null, the missing QoS value is predicted using the user-based method alone; if S(u) = null and S(i) ≠ null, it is predicted using the item-based method alone.
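The case analysis above can be sketched as follows, using the classical user-based (UPCC) and item-based (IPCC) deviation-from-mean predictors; the data layout, names, and plain λ blend (confidence weights omitted here) are illustrative assumptions:

```python
def predict_missing(u, i, r, su, si, sim_u, sim_i, lam=0.5):
    """Predict the missing QoS entry r[u][i] by case analysis on S(u), S(i).

    r        : dict-of-dicts, r[user][service] -> observed QoS value
    su, si   : selected similar neighbors S(u) and S(i); neighbors in su
               are assumed to have invoked service i, and vice versa
    sim_u[a] : similarity between user u and user a
    sim_i[j] : similarity between service i and service j
    lam      : the lambda parameter balancing the two methods
    """
    def mean_user(a):                       # average QoS observed by user a
        return sum(r[a].values()) / len(r[a])

    def mean_item(j):                       # average QoS of service j
        vals = [r[a][j] for a in r if j in r[a]]
        return sum(vals) / len(vals)

    upred = ipred = None
    if su:                                  # user-based prediction (UPCC)
        upred = mean_user(u) + sum(sim_u[a] * (r[a][i] - mean_user(a))
                                   for a in su) / sum(sim_u[a] for a in su)
    if si:                                  # item-based prediction (IPCC)
        ipred = mean_item(i) + sum(sim_i[j] * (r[u][j] - mean_item(j))
                                   for j in si) / sum(sim_i[j] for j in si)

    if upred is not None and ipred is not None:
        return lam * upred + (1 - lam) * ipred   # hybrid case
    if upred is not None:
        return upred                        # S(i) = null: user-based only
    if ipred is not None:
        return ipred                        # S(u) = null: item-based only
    return None                             # S(u) = S(i) = null
```

Returning None in the last case simply flags that no similarity-based prediction is possible for that entry.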
3.5 Confidence Weight
Similar neighbors with similarities S(u) = {1, 1, 1} are likely to provide a more accurate prediction than neighbors with similarities S(u) = {0.1, 0.1, 0.1}.
A higher confidence weight indicates higher confidence in the predicted value.
3.6 Value of λ
λ (0 ≤ λ ≤ 1) determines how much the QoS value prediction relies on the user-based method versus the item-based method.
Different datasets may have their own data distribution characteristics.
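Putting the two ideas together, a confidence can be computed as a similarity-weighted average of the neighbor similarities, and λ can be blended with the two confidences to weight the user-based versus item-based predictions; the exact combination below is an assumption sketched from the intuition on the previous slide, not a quotation of the paper's formula:

```python
def confidence(sims):
    """Confidence of a neighbor set: similarity-weighted average similarity.

    S(u) = {1, 1, 1} yields 1.0 while S(u) = {0.1, 0.1, 0.1} yields 0.1,
    matching the intuition that uniformly high similarities deserve trust.
    """
    total = sum(sims)
    if total == 0:
        return 0.0
    return sum((s / total) * s for s in sims)

def user_weight(con_u, con_i, lam):
    """Weight on the user-based prediction (item-based gets 1 minus this),
    combining lambda with the two confidences. Illustrative form only."""
    denom = lam * con_u + (1 - lam) * con_i
    return lam * con_u / denom if denom else lam
```

With λ = 0.5 but a much more confident user-based neighbor set, the user-based prediction dominates the blend.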
4. Experiments
4.1 Experimental Setup
A list of 21,197 publicly available Web services.
A total of 343,917 Java classes were generated for 18,102 Web services.
100 Web services were randomly selected for the experiments (located in more than 20 countries).
150 computer nodes from PlanetLab (distributed in more than 20 countries).
1.5 million Web service invocations.
4.1 Experimental Setup
PlanetLab is a global research network consisting of 1,016 distributed computers.
4.2 Metrics
Mean Absolute Error: MAE = (Σ_{u,i} |r(u,i) − r̂(u,i)|) / N, where r(u,i) is the expected QoS value, r̂(u,i) is the predicted QoS value, and N is the number of predicted values.
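The metric is straightforward to compute; a minimal sketch over paired lists of expected and predicted values:

```python
def mae(expected, predicted):
    """Mean Absolute Error over N predicted QoS values."""
    assert len(expected) == len(predicted) and expected, "need paired values"
    return sum(abs(e - p) for e, p in zip(expected, predicted)) / len(expected)
```

A smaller MAE means the predicted QoS values lie closer to the observed ones.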
4.3 Experimental Results
4.3 Experimental Results
Further information and the detailed Web service QoS dataset are available in
5. Conclusion and Future Work
5.1 Conclusion and Future Work
Contributions:
A user-collaborative framework for WS QoS data sharing.
A hybrid collaborative filtering algorithm for Web service QoS value prediction.
A large-scale real-world experiment.
Future work:
Investigation of more QoS properties.
Experiments with more service users on more real-world Web services.