Recommendz: A Multi-feature Recommendation System
Matthew Garden, Gregory Dudek, Center for Intelligent Machines, McGill University

Applications

Personalized recommendation regarding movies to see, books to buy, web sites to visit, cars, music, and almost any other product or service. Market segments: web-based shopping (what to buy), on-line technical support (where to find help), research and teaching assistance (e.g., books to read), and diagnosis/troubleshooting (what problem is likely).

Introduction

A recommender system provides suggestions to a user based on examples of what they like and dislike. Our system, Recommendz, is unique in exploiting information on why users like and dislike items. This extra information allows much better recommendations, and also allows the system to explain the rationale behind them. Collaborative filtering refers to making recommendations by matching users to other users, and then exploiting a transitive relationship between users and the items they rate: User A is similar to User B, and User B likes Item C, therefore User A will like Item C. An alternative methodology is content (or item) based filtering, which matches items directly without explicitly referring to other users: User A likes Item D, Item D is similar to Item C, therefore User A will like Item C.
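The user-to-user transitive relationship described above can be sketched as a minimal collaborative filter. This is an illustrative baseline, not the poster's exact algorithm: similarity between users is computed over co-rated items (cosine similarity is one common choice), and a missing rating is predicted as a similarity-weighted average of other users' ratings.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(ratings, user, item):
    """Predict user's rating of item as a similarity-weighted
    average of other users' ratings of that item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = cosine_sim(ratings[user], theirs)
        num += w * theirs[item]
        den += abs(w)
    return num / den if den else None

# Toy data: user A is similar to user B, and B has rated item Z.
ratings = {
    "A": {"X": 8, "Y": 7},
    "B": {"X": 9, "Y": 6, "Z": 8},
}
print(predict(ratings, "A", "Z"))  # only B rated Z, so the prediction is B's rating: 8.0
```

The sparsity problem mentioned below is visible even here: the similarity between A and B rests on just two co-rated items.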
In either case, the similarity relationship between users or items is crucial, yet must be inferred from very sparse data.

Approach

Users provide the same information that would appear in a verbal review, but in a structured format. For example:

"I thought that movie was pretty good. There was a lot of action and special effects, which is great. It's just too bad that the romantic subplot was underdeveloped."

Overall rating: 8
  action: quantity 8, opinion 5
  romantic subplot: quantity 2, opinion -4

This particular demonstration is targeted at movie recommendations, although our approach is domain-independent. Users provide feedback about several movies they have seen, telling us what they thought of them and which characteristics of each movie were important to them. From this information we compute a list of recommendations, each with a predicted rating on a scale of 1 to 10. To allow users to select the characteristics that are important to them, one must either have a manually selected list chosen in advance, or allow the set of choices to expand dynamically. The latter approach is more flexible but makes choosing which features to display much more complicated. The screenshot below shows the web-based interface, including new items, recommendations, and the list of other titles which can be selected and rated by the user.

Conclusions

Our work shows that the use of supplementary descriptive features substantially improves the quality of recommendations over a basic collaborative filter in the domain of movies. A key impediment to the use of descriptive features is the need to determine the set of features to be used and the subset to be presented to a user. We have discussed how this problem can be solved using a combination of algorithmic strategies. Of all ratings made through our system, 58% have used three or more features. This suggests that users find our feedback system useful, and even entertaining in itself!
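The structured review format described above pairs each named feature with a quantity (how much of it the item contains) and an opinion (how the user felt about that amount). A minimal encoding of the poster's worked example might look as follows; the `feature_preference` signal is an assumed illustration, not the system's actual scoring function.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRating:
    quantity: int  # how much of the feature the item contains
    opinion: int   # how the user felt about that amount

@dataclass
class Review:
    overall: int                                   # overall rating, 1 to 10
    features: dict = field(default_factory=dict)   # feature name -> FeatureRating

# The worked example from the poster, encoded as structured feedback.
review = Review(
    overall=8,
    features={
        "action": FeatureRating(quantity=8, opinion=5),
        "romantic subplot": FeatureRating(quantity=2, opinion=-4),
    },
)

def feature_preference(review):
    """An assumed per-feature preference signal for illustration:
    opinion weighted by how prominent the feature was."""
    return {name: fr.opinion * fr.quantity / 10.0
            for name, fr in review.features.items()}

print(feature_preference(review))  # {'action': 4.0, 'romantic subplot': -0.8}
```

Representing reviews this way makes both the "why" of a rating and the rationale behind a recommendation directly available to the algorithm.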
While we seem to outperform traditional collaborative filtering schemes, it appears we can do still better by making more use of the feature information. In ongoing work, we are examining how to exploit feature information directly, developing algorithmic variations akin to content-based filtering.

Results

Performance of our recommendation system was tested using leave-one-out cross-validation: the ability to predict known data was evaluated across all users and items. This is measured as mean error on the vertical axes below. We examined the performance of several variants of our algorithm, and compared them to standard collaborative filtering (CF). The variants were: Pure opinion and Pure quantity, which use only the strength of the opinion about a feature and only the amount of the feature, respectively; Features only, which uses all feature information (but not the overall rating); and All, which uses all available information. Note that our methods consistently outperform standard collaborative filtering. The algorithm variations below (CF+, Opinion+, Quantity+, and Features+) show the recommendation error for hybrid combinations of the algorithms. Their results are slightly better than those of the variants above.
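The leave-one-out protocol above can be sketched generically: each known (user, item) rating is hidden in turn, predicted from the remaining data, and the absolute errors are averaged. The `baseline` predictor here is a placeholder (the mean rating other users gave the item), standing in for whichever algorithm variant is being evaluated.

```python
def baseline(ratings, user, item):
    """Placeholder predictor (not the poster's algorithm): the mean
    rating that other users gave this item."""
    others = [rs[item] for u, rs in ratings.items()
              if u != user and item in rs]
    return sum(others) / len(others) if others else None

def loo_mean_error(ratings, predictor):
    """Leave-one-out cross-validation: hide each known (user, item)
    rating, predict it from the rest, and average the absolute error."""
    errors = []
    for user, items in ratings.items():
        for item, actual in items.items():
            held_out = {u: {i: r for i, r in rs.items()
                            if (u, i) != (user, item)}
                        for u, rs in ratings.items()}
            pred = predictor(held_out, user, item)
            if pred is not None:
                errors.append(abs(pred - actual))
    return sum(errors) / len(errors) if errors else None

# Toy data for illustration only.
ratings = {"A": {"X": 8, "Y": 6}, "B": {"X": 7, "Y": 6}, "C": {"X": 9}}
print(loo_mean_error(ratings, baseline))  # 0.6 on this toy data
```

Swapping `baseline` for each algorithm variant (CF, Pure opinion, Pure quantity, Features only, All, and the hybrids) yields the mean-error comparison described above.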