Using Game Reviews to Recommend Games. Michael Meidl and Steven Lytinen, DePaul University School of Computing, Chicago IL; Kevin Raison, Chatsubo Labs, Seattle WA
Recommender Systems are Everywhere
Our Task Provide a game player with recommendations of games s/he has not played (and will like) Recommendations are based on two sources of information: Corpus of game reviews (free-form text) Knowledge about which games a user already likes (user’s numerical rankings)
Review of Assassin’s Creed Unity “its complex Abstergo storyline has long since jumped the shark… the story is much darker in tone than anything else in the series…hard to get bored… the attention to detail … is nothing short of astonishing” 6 out of 10 -Mark Walton What else will Mark like?
Recommender System Techniques 1. Collaborative-based – System compares you to other users, and recommends what they’ve liked or bought – May know nothing else about the products or other items that it recommends – Examples: Amazon, Barnes & Noble, CDW, …
Collaborative Example (Candillier et al., 2007)
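To make the collaborative idea concrete, here is a minimal sketch (our illustration, not Candillier et al.'s method and not this talk's system): predict a user's score for an unplayed game as a similarity-weighted average of other users' scores. The ratings matrix is invented.

```python
# Minimal sketch of user-based collaborative filtering (illustrative only;
# not the method of this talk). Rows = users, columns = games, 0 = unrated.
import numpy as np

ratings = np.array([
    [8, 0, 7, 0],
    [7, 9, 6, 2],
    [0, 9, 0, 3],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict(user, game):
    """Average other users' scores for `game`, weighted by similarity to `user`."""
    others = [o for o in range(len(ratings)) if o != user and ratings[o, game] > 0]
    sims = np.array([cosine(ratings[user], ratings[o]) for o in others])
    scores = np.array([ratings[o, game] for o in others])
    return sims @ scores / (sims.sum() + 1e-9)

print(predict(user=0, game=1))   # predicted score for a game user 0 hasn't rated
```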
Recommender System Techniques 2. Content-based – System uses information about the items it recommends (e.g., recommend books by the same author, or same genre) – Might not use information about other customers/users
Content-based Example [Table: Movies 1–4 described by feature columns (Tom Hanks, Daisy Ridley, Drama, SciFi, Comedy) plus a “Did I like it?” column; likes are known for Movies 1–3, and the system must predict the “???” entry for Movie 4 from the features it shares with liked movies.]
Recommender System Techniques 3. Hybrid: some combination of collaborative-based and content-based
Our game recommender 1. Content-based: A game “representation” is based on the (free-form text) reviews written by a community of users 2. User profile is based on a small sample of items liked by the user
Corpus 400,000 reviews of 8,279 different games A mixture of professional reviews and user reviews
Representing games Representation of each game is constructed from a corpus of free-form text reviews of games Games represented as vectors Vector features are based on co-occurrence of word pairs: adjectives and “context words”
Vector space model Originated in information retrieval Task: judge “similarity” of documents (e.g., game reviews) Document representation: bag of words
Vector space model 1. Build a vocabulary – terms which are “important” in the collection of documents 2. Build the document representations – What terms from the vocabulary appear in the document, and how frequently relative to other documents? 3. Starting with a document, what others are similar?
Vocabulary: [story, plot, animation, interest, bore, astonish, series, complex, …] “its complex Abstergo storyline has long since jumped the shark… the complex story is much darker in tone than anything else in the series… hard to get bored… the attention to detail … is nothing short of astonishing” Vector: [1, 0, 0, 0, 1, 1, 1, 2, …]
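A minimal sketch of how such a vector can be built (our illustration; the talk does not specify tokenization, and the tiny lemma map below stands in for a real stemmer):

```python
# Bag-of-words sketch reproducing the slide's example vector.
# The hand-made lemma map is an assumption standing in for a stemmer.
import re
from collections import Counter

vocab = ["story", "plot", "animation", "interest", "bore", "astonish",
         "series", "complex"]
lemma = {"bored": "bore", "astonishing": "astonish", "stories": "story"}

review = ("its complex Abstergo storyline has long since jumped the shark... "
          "the complex story is much darker in tone than anything else in the "
          "series... hard to get bored... the attention to detail is nothing "
          "short of astonishing")

tokens = [lemma.get(t, t) for t in re.findall(r"[a-z]+", review.lower())]
counts = Counter(tokens)
vector = [counts[term] for term in vocab]
print(vector)   # [1, 0, 0, 0, 1, 1, 1, 2], as on the slide
```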
Vector space, cont. Vector values are typically “normalized” to account for a document’s length, the frequency of each term across documents, … Documents are similar if their vectors are similar: [1, 0, 0, 0, 1, 1, 1, 2] and [1, 1, 0, 1, 2, 1, 2, 2] are similar; [0, 1, 2, 1, 0, 3, 0, 0] is dissimilar to both
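The slides do not name the similarity measure; cosine similarity is the standard choice in the vector space model, and on the slide's example vectors it confirms the similar/dissimilar judgment:

```python
# Cosine similarity between document vectors (standard in the vector
# space model; assumed here, as the slides don't name the measure).
import numpy as np

def cosine_sim(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

a = [1, 0, 0, 0, 1, 1, 1, 2]
b = [1, 1, 0, 1, 2, 1, 2, 2]
c = [0, 1, 2, 1, 0, 3, 0, 0]
print(cosine_sim(a, b))   # ~0.88: similar
print(cosine_sim(a, c))   # ~0.27: dissimilar
```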
Feature space 700 adjectives were chosen as most relevant to the description of games (Zagal and Tomuro 2010) Bootstrapping approach, beginning with adjectives modifying “gameplay” “Context words”: words that appear in a window of ±2 words from an adjective Over 3,500,000 adjective-context word pairs – an unworkable feature space size
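A sketch of the pair extraction, assuming a toy adjective list in place of the 700 adjectives from Zagal and Tomuro (2010):

```python
# Extract (adjective, context word) pairs within a +-2 token window.
# ADJECTIVES is a toy stand-in for the 700-adjective list.
from collections import Counter

ADJECTIVES = {"complex", "darker"}
WINDOW = 2

def context_pairs(tokens):
    """Yield (adjective, context_word) for every word within WINDOW tokens."""
    for i, tok in enumerate(tokens):
        if tok in ADJECTIVES:
            lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
            for j in range(lo, hi):
                if j != i:
                    yield (tok, tokens[j])

tokens = "the complex story is much darker in tone".split()
print(Counter(context_pairs(tokens)))
```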
Reduction of Feature Space Using Co-clustering Simultaneously cluster two sets of related items while minimizing loss of mutual information (Dhillon, Mallela and Modha 2003) In our case, a set of adjectives X and a set of “context words” Y Input: X, Y Output: X' = {X_1, X_2, …, X_m}, a partition of X; Y' = {Y_1, Y_2, …, Y_n}, a partition of Y
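For illustration, scikit-learn's SpectralBiclustering jointly partitions the rows and columns of a co-occurrence matrix. It is a different algorithm from the information-theoretic co-clustering of Dhillon, Mallela and Modha (2003), but it shows the same input/output shape; the count matrix here is random toy data.

```python
# Jointly cluster adjectives (rows) and context words (columns).
# SpectralBiclustering is NOT the Dhillon et al. (2003) algorithm;
# it is used here only to illustrate co-clustering's input and output.
import numpy as np
from sklearn.cluster import SpectralBiclustering

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(20, 50))   # toy adjective x context-word counts

model = SpectralBiclustering(n_clusters=(3, 5), method="log", random_state=0)
model.fit(counts)

print(model.row_labels_)      # adjective -> adjective-cluster id (X')
print(model.column_labels_)   # context word -> context-cluster id (Y')
```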
Representation of Games The collection of reviews for a game was treated as one “document” Games represented as vectors Vector feature = pair of (adjective cluster) and (context word cluster) Frequencies of co-occurrence of cluster pairs were counted, and weighted in various ways
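A sketch of the vector construction under these definitions (the cluster assignments below are invented; weighting is deferred to the later slides):

```python
# Build a game vector over (adjective-cluster, context-cluster) pair features.
# Cluster assignments here are toy examples, not the paper's.
from collections import Counter

adj_cluster = {"complex": 0, "darker": 0, "astonishing": 1}
ctx_cluster = {"story": 0, "tone": 1, "detail": 1}

def game_vector(pairs, n_adj=2, n_ctx=2):
    """Count cluster-pair co-occurrences; one feature per (adj, ctx) cluster pair."""
    counts = Counter((adj_cluster[a], ctx_cluster[c])
                     for a, c in pairs
                     if a in adj_cluster and c in ctx_cluster)
    return [counts[(i, j)] for i in range(n_adj) for j in range(n_ctx)]

pairs = [("complex", "story"), ("darker", "tone"), ("astonishing", "detail")]
print(game_vector(pairs))   # [1, 1, 0, 1]
```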
Recommending games G = games already liked by a user; G' = all games the user has already played (including disliked ones); S = “seeds” – a small subset of G; N = games that the user does not know; R = games that our system recommends
Recommending games R = the k games in N with minimum distance from any member of S (so |R| = k)
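A sketch of this selection step, assuming cosine distance between game vectors (the slides do not fix the distance measure):

```python
# Pick the k candidate games whose vectors are closest to ANY seed game.
# Cosine distance is an assumption; the talk doesn't name the measure.
import numpy as np

def recommend(seeds, candidates, k):
    """seeds: |S| x d matrix, candidates: |N| x d matrix -> k candidate indices."""
    s = seeds / np.linalg.norm(seeds, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    dist = 1.0 - c @ s.T            # cosine distance from each candidate to each seed
    nearest = dist.min(axis=1)      # distance to the nearest seed
    return np.argsort(nearest)[:k]  # the k closest candidates form R

seeds = np.array([[2.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
candidates = np.array([[2.0, 0.1, 1.1], [0.0, 3.0, 0.0], [1.0, 1.0, 0.1]])
print(recommend(seeds, candidates, k=2))   # indices into `candidates`
```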
Evaluation “Live” testing was not available to us Instead, offline testing: recommend k games (|R| = k) drawn from G' − S, then find the overlap between R and G
Evaluation We conducted an n-fold cross-validation of our system’s performance Number of folds n = |G'| / |S| Partition G' into n folds Measure performance n times, once with each fold serving as the seed set S
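A minimal sketch of the fold construction, assuming G' is given as a list (the helper names are hypothetical):

```python
# Split G' into n = |G'| / |S| seed sets; each fold serves once as S.
def seed_folds(played, seed_size):
    n = len(played) // seed_size
    return [played[f * seed_size:(f + 1) * seed_size] for f in range(n)]

# One evaluation run per fold: recommend from G' - S, then score against G
# (precision is defined on the next slide; recommend_from is hypothetical).
# for S in seed_folds(G_prime, seed_size):
#     R = recommend_from(S, [g for g in G_prime if g not in S], k)
```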
Evaluation We measured performance in terms of precision: precision = |R ∩ (G − S)| / |R| Precision tends to be highest for small k and decreases as k increases
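The same formula as a one-line function, with a toy check:

```python
# precision = |R ∩ (G - S)| / |R|, exactly as defined above
def precision(R, G, S):
    return len(set(R) & (set(G) - set(S))) / len(R)

print(precision(R=["a", "b", "c"], G=["a", "c", "d"], S=["d"]))   # 2/3
```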
Evaluation We also varied: Weighting techniques for features Dimensionality of co-clustering
Feature Weighting Most common: tf-idf Document frequency = number of documents in which a cluster pair appears The term frequency of a cluster pair is multiplied by the inverse of its document frequency
Other Feature Weighting tf: “raw” co-occurrence counts tf-normc: normalize frequency across documents (“column-wise” normalization) boolean: feature value is 1 if the cluster pair appears, 0 if not
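Sketches of the four weighting schemes on a games x features count matrix (rows = games, columns = cluster-pair features). The exact normalizations in the paper may differ; the idf below is the common log form, an assumption.

```python
# Four feature-weighting schemes over a (games x features) count matrix.
import numpy as np

def tf(counts):
    """Raw co-occurrence counts."""
    return counts.astype(float)

def tf_idf(counts):
    """Down-weight cluster pairs that appear in many game "documents"."""
    df = (counts > 0).sum(axis=0)                  # document frequency per feature
    idf = np.log(len(counts) / np.maximum(df, 1))  # common log-idf form (assumed)
    return counts * idf

def tf_normc(counts):
    """Column-wise normalization: scale each feature by its total across games."""
    return counts / np.maximum(counts.sum(axis=0), 1)

def boolean(counts):
    """1 if the cluster pair appears in the document at all, else 0."""
    return (counts > 0).astype(float)
```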
Results: Feature Weighting
Results: Co-cluster dimensions
Results: Co-clustering vs. “Bag of words”
Conclusions Representing games with an approach based on adjective-context word pairs produces high-quality recommendations Precision of the first recommendation is 85-90%
Conclusions Precision is approximately 80% even for 10 recommendations The co-clustering technique dramatically reduces the feature space while maintaining high precision Dimensionality is reduced from 3,500,000 to 1,000 with 10 x 100 co-clustering