Distributed Representations of Words and Phrases and their Compositionality Presenter: Haotian Xu
Roadmap
- Overview
- The Skip-gram Model with Different Objective Functions
- Subsampling of Frequent Words
- Learning Phrases
CNN for Text Classification
Word2vec: Google’s Word Embedding Approach
What is word2vec? Word2vec turns text into a numerical form that deep networks can understand: its input is a text corpus and its output is a set of feature vectors, one per word in that corpus.
Assumption of word2vec: words that appear in similar contexts should have similar word embeddings (as measured by cosine similarity).
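As a concrete illustration (not from the original slides), a minimal training sketch assuming the gensim 4.x API; the toy corpus and all parameter values below are purely illustrative:

```python
# Minimal sketch, assuming gensim 4.x is installed; corpus and parameters are illustrative.
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["paris", "is", "the", "capital", "of", "france"],
    ["berlin", "is", "the", "capital", "of", "germany"],
]

# sg=1 selects the Skip-gram architecture; negative=5 uses negative sampling.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, negative=5)

vec_paris = model.wv["paris"]                  # learned feature vector for "paris"
sim = model.wv.similarity("paris", "berlin")   # cosine similarity between two words
print(vec_paris.shape, sim)
```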
Skip-gram Model: an efficient method for learning high-quality vector representations of words from large amounts of unstructured text data.
Skip-gram Model
Objective: maximize the average log probability
    (1/T) * Σ_{t=1..T} Σ_{-c ≤ j ≤ c, j ≠ 0} log p(w_{t+j} | w_t)
where c is the size of the training context and p(w_{t+j} | w_t) is defined by the softmax
    p(w_O | w_I) = exp(v'_{w_O}^T v_{w_I}) / Σ_{w=1..W} exp(v'_w^T v_{w_I})
with W the number of words in the vocabulary.
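A small NumPy sketch of the full-softmax probability defined above; the matrices W_in and W_out stand in for the input vectors v_w and output vectors v'_w and are randomly initialized here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10_000, 100                            # vocabulary size W and embedding dimensionality
W_in = rng.normal(scale=0.1, size=(V, d))     # "input" vectors v_w
W_out = rng.normal(scale=0.1, size=(V, d))    # "output" vectors v'_w

def softmax_prob(center_id: int, context_id: int) -> float:
    """p(w_O | w_I) under the full softmax: exp(v'_O . v_I) / sum_w exp(v'_w . v_I)."""
    v_in = W_in[center_id]
    scores = W_out @ v_in                     # one score per vocabulary word: O(V) work
    scores -= scores.max()                    # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(probs[context_id])

print(softmax_prob(42, 7))   # probability of seeing context word 7 given center word 42
```

The O(W) sum in the denominator is what makes the full softmax impractical for large vocabularies, which motivates the two approximations on the following slides.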
Skip-gram Model, Computationally Efficient Approximations: Hierarchical Softmax
The vocabulary is arranged as a binary (Huffman) tree; the probability of a word is a product of sigmoids over the inner nodes on its root-to-leaf path,
    p(w | w_I) = Π_{j=1..L(w)-1} σ( [[n(w, j+1) = ch(n(w, j))]] * v'_{n(w,j)}^T v_{w_I} )
where [[x]] is 1 if x is true and -1 otherwise, so each training example touches only about log2(W) nodes instead of all W output vectors.
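A hedged sketch of the path-product computation for a single word, assuming the Huffman tree has already been built and that each word stores its list of inner-node ids and left/right codes; the data layout here is illustrative, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hier_softmax_prob(v_in, node_vectors, path_nodes, path_codes):
    """p(w | w_I) = prod_j sigmoid(sign_j * v'_{n_j} . v_{w_I}),
    where sign_j is +1 or -1 depending on which child the Huffman code selects."""
    prob = 1.0
    for node_id, code in zip(path_nodes, path_codes):
        sign = 1.0 if code == 1 else -1.0      # branch direction at this inner node
        prob *= sigmoid(sign * np.dot(node_vectors[node_id], v_in))
    return prob

# Illustrative use: 3 inner nodes on the path to some word, 100-dim vectors.
rng = np.random.default_rng(1)
node_vectors = rng.normal(scale=0.1, size=(50, 100))   # v'_n for each inner node
v_in = rng.normal(scale=0.1, size=100)                 # v_{w_I} for the center word
print(hier_softmax_prob(v_in, node_vectors, path_nodes=[0, 3, 7], path_codes=[1, 0, 1]))
```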
Skip-gram Model, Computationally Efficient Approximations: Negative Sampling
Each (w_I, w_O) pair is trained to be distinguished from k words drawn from a noise distribution P_n(w), replacing every log p(w_O | w_I) term in the objective with
    log σ(v'_{w_O}^T v_{w_I}) + Σ_{i=1..k} E_{w_i ~ P_n(w)} [ log σ(-v'_{w_i}^T v_{w_I}) ]
k is typically 5-20 for small training sets and 2-5 for large ones; the unigram distribution raised to the 3/4 power works best as P_n(w).
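A sketch of the negative-sampling term for one (center, context) pair, drawing k negatives from the unigram distribution raised to the 3/4 power; array names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center_id, context_id, W_in, W_out, unigram_probs, k=5):
    """Negative of: log sigma(v'_O . v_I) + sum_{i=1..k} log sigma(-v'_{neg_i} . v_I)."""
    v_in = W_in[center_id]
    # Noise distribution P_n(w) proportional to unigram frequency ** 0.75.
    noise = unigram_probs ** 0.75
    noise /= noise.sum()
    negatives = rng.choice(len(noise), size=k, p=noise)

    pos = np.log(sigmoid(np.dot(W_out[context_id], v_in)))
    neg = np.log(sigmoid(-W_out[negatives] @ v_in)).sum()
    return -(pos + neg)

V, d = 1000, 50
W_in = rng.normal(scale=0.1, size=(V, d))
W_out = rng.normal(scale=0.1, size=(V, d))
unigram_probs = rng.random(V)
unigram_probs /= unigram_probs.sum()
print(neg_sampling_loss(3, 17, W_in, W_out, unigram_probs, k=5))
```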
Subsampling of Frequent Words
In very large corpora, the most frequent words (e.g. "in", "the", and "a") can easily occur hundreds of millions of times, yet they carry less information than rare words. Each word w_i in the training set is therefore discarded with probability
    P(w_i) = 1 - sqrt(t / f(w_i))
where f(w_i) is the word's frequency and t is a chosen threshold, typically around 10^-5.
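A sketch of this discard rule applied while streaming over tokens; the threshold value and toy corpus are illustrative:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def subsample(tokens, t=1e-5):
    """Drop each occurrence of word w with probability 1 - sqrt(t / f(w)),
    where f(w) is the word's relative frequency in the corpus."""
    counts = Counter(tokens)
    total = len(tokens)
    kept = []
    for w in tokens:
        f = counts[w] / total
        p_discard = max(0.0, 1.0 - np.sqrt(t / f))   # rare words (f <= t) are always kept
        if rng.random() >= p_discard:
            kept.append(w)
    return kept

tokens = ["the"] * 1000 + ["aardvark"] * 2
print(len(subsample(tokens, t=1e-3)))   # most occurrences of "the" are dropped
```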
Vector Representations of Words
Vec(“Paris”) - Vec(“France”) ≈ Vec(“Berlin”) - Vec(“Germany”)
Vector Representations of Words: Analogical Reasoning Task
Semantic analogies: “Germany” : “Berlin” :: “France” : ?
Syntactic analogies: “quick” : “quickly” :: “slow” : ?
Vector Representations of Words The word and phrase representations learned by the Skip-gram model exhibit a linear structure that makes it possible to perform precise analogical reasoning using simple vector arithmetic
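A sketch of the analogy computation: the answer is taken as the nearest neighbor (by cosine similarity) to Vec(b) - Vec(a) + Vec(a'), excluding the three query words; the vocab list and embeddings matrix are assumed to exist and are not defined here:

```python
import numpy as np

def analogy(a, b, a_prime, vocab, embeddings):
    """Solve a : b :: a_prime : ?   e.g. "Germany" : "Berlin" :: "France" : ?"""
    idx = {w: i for i, w in enumerate(vocab)}
    query = embeddings[idx[b]] - embeddings[idx[a]] + embeddings[idx[a_prime]]
    # Cosine similarity of the query against every word vector.
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    sims = embeddings @ query / norms
    for w in (a, b, a_prime):                 # never return one of the query words
        sims[idx[w]] = -np.inf
    return vocab[int(np.argmax(sims))]

# Illustrative call, assuming `vocab` (list of words) and `embeddings` (V x d array) exist:
# print(analogy("Germany", "Berlin", "France", vocab, embeddings))   # ideally "Paris"
```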
Learning Phrases
Candidate bigrams are scored with
    score(w_i, w_j) = (count(w_i w_j) - δ) / (count(w_i) × count(w_j))
and bigrams whose score exceeds a threshold are joined into single tokens. Running 2-4 passes over the training data with a decreasing threshold allows phrases longer than two words to form (a sketch follows below).
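A single-pass sketch of this phrase-merging procedure; delta and the threshold are illustrative defaults, not the paper's settings:

```python
from collections import Counter

def learn_phrases(tokens, delta=5, threshold=1e-4):
    """One pass: merge bigrams whose score exceeds the threshold into single tokens."""
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))

    def score(w1, w2):
        # (count(w1 w2) - delta) / (count(w1) * count(w2)); delta discounts rare bigrams.
        return (bigram[(w1, w2)] - delta) / (unigram[w1] * unigram[w2])

    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and score(tokens[i], tokens[i + 1]) > threshold:
            out.append(tokens[i] + "_" + tokens[i + 1])   # e.g. "new_york"
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Running learn_phrases 2-4 times lets phrases grow: "new_york" + "times" -> "new_york_times".
```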
Additive Compositionality
The Skip-gram representations exhibit another kind of linear structure that makes it possible to meaningfully combine words by element-wise addition of their vector representations, e.g. vec(“Russia”) + vec(“river”) is close to vec(“Volga River”).
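A sketch of element-wise addition followed by a cosine nearest-neighbor lookup, in the spirit of the "Russia" + "river" example above; vocab and embeddings are assumed to exist and are not defined here:

```python
import numpy as np

def compose(words, vocab, embeddings, topn=5):
    """Element-wise sum of the words' vectors, then nearest neighbors by cosine similarity."""
    idx = {w: i for i, w in enumerate(vocab)}
    query = np.sum([embeddings[idx[w]] for w in words], axis=0)
    sims = embeddings @ query / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query))
    for w in words:
        sims[idx[w]] = -np.inf                # exclude the input words themselves
    best = np.argsort(-sims)[:topn]
    return [vocab[i] for i in best]

# Illustrative: compose(["Russia", "river"], vocab, embeddings) might rank "Volga_River" highly.
```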
Comparison to Published Word Representations
Any Questions?