Distributed Representations of Words and Phrases and their Compositionality
Presenter: Haotian Xu
Roadmap
Overview
The Skip-gram Model with Different Objective Functions
Subsampling of Frequent Words
Learning Phrases
CNN for Text Classification
Word2vec: Google’s Word Embedding Approach
What is word2vec?
Word2vec turns text into a numerical form that deep nets can understand. Its input is a text corpus and its output is a set of vectors: one feature vector per word in that corpus.
Assumption of word2vec
Word2vec assumes that words which occur in similar contexts should have similar word embeddings (as measured by cosine similarity).
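A minimal sketch of the cosine-similarity measure mentioned above, using made-up 3-dimensional vectors purely for illustration (real word2vec embeddings typically have 100-300 dimensions):

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors: close to 1.0 means
    # the words occur in similar contexts.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical embeddings, for illustration only.
vec_paris  = np.array([0.9, 0.1, 0.3])
vec_berlin = np.array([0.8, 0.2, 0.4])
vec_banana = np.array([0.1, 0.9, 0.0])

print(cosine_similarity(vec_paris, vec_berlin))  # high similarity
print(cosine_similarity(vec_paris, vec_banana))  # low similarity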
Skip-gram Model
An efficient method for learning high-quality vector representations of words from large amounts of unstructured text data
Skip-gram Model
Objective: maximize the average log probability

$$\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\; j \ne 0} \log p(w_{t+j} \mid w_t)$$

where c is the size of the training context and p(w_{t+j} | w_t) is defined by the softmax:

$$p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)}$$

where v_w and v'_w are the input and output vector representations of w, and W is the vocabulary size.
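As a rough illustration (not the paper's code), here is a tiny numpy sketch of that full softmax over a toy vocabulary; W_in and W_out stand for the input and output embedding matrices:

import numpy as np

np.random.seed(0)
V, d = 5, 4                       # toy vocabulary size and embedding dimension
W_in  = np.random.randn(V, d)     # input ("center word") vectors v_w
W_out = np.random.randn(V, d)     # output ("context word") vectors v'_w

def softmax_prob(center_idx):
    # p(w_O | w_I) for every candidate output word, given center word w_I.
    scores = W_out @ W_in[center_idx]           # v'_w . v_{w_I} for all w
    exp_scores = np.exp(scores - scores.max())  # numerically stable exponent
    return exp_scores / exp_scores.sum()

print(softmax_prob(2))  # distribution over all V words, sums to 1

The denominator sums over every word in the vocabulary, which is what makes the full softmax expensive and motivates the approximations on the next slides.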
Skip-gram Model Computationally Efficient Approximations
Hierarchical Softmax
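For reference (from the paper, not preserved in the scraped slide), hierarchical softmax replaces the flat softmax with a walk down a binary Huffman tree over the vocabulary:

$$p(w \mid w_I) = \prod_{j=1}^{L(w)-1} \sigma\!\left( [\![\, n(w, j{+}1) = \mathrm{ch}(n(w, j)) \,]\!] \cdot {v'_{n(w,j)}}^{\top} v_{w_I} \right)$$

Here n(w, j) is the j-th node on the path from the root to w, L(w) is the length of that path, ch(n) is an arbitrary fixed child of n, [[x]] is 1 if x is true and -1 otherwise, and σ is the logistic sigmoid. The cost per prediction drops from O(W) to roughly O(log W).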
Skip-gram Model Computationally Efficient Approximations
Negative Sampling
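The negative sampling (NEG) objective from the paper replaces every log p(w_O | w_I) term with:

$$\log \sigma\!\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[ \log \sigma\!\left(-{v'_{w_i}}^{\top} v_{w_I}\right) \right]$$

Each observed (center, context) pair is distinguished from k noise words drawn from the noise distribution P_n(w); the paper uses the unigram distribution raised to the 3/4 power, with k in the range 5-20 for small datasets and 2-5 for large ones.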
Subsampling of Frequent Words
In very large corpora, the most frequent words can easily occur hundreds of millions of times (e.g., "in", "the", and "a"). Such words usually provide less information value than rare words.
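Concretely, the paper discards each occurrence of word w_i with probability

$$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}$$

where f(w_i) is the word's frequency and t is a chosen threshold, typically around 10^{-5}. This aggressively thins out very frequent words while leaving words rarer than t untouched.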
Vector Representations of Words
Vec("Paris") - Vec("France") ≈ Vec("Berlin") - Vec("Germany")
Vector Representations of Words
Analogical Reasoning task
Semantic analogies: "Germany" : "Berlin" :: "France" : ?
Syntactic analogies: "quick" : "quickly" :: "slow" : ?
Vector Representations of Words
The word and phrase representations learned by the Skip-gram model exhibit a linear structure that makes it possible to perform precise analogical reasoning using simple vector arithmetic
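A minimal sketch of that arithmetic, assuming a hypothetical `embeddings` dict mapping words to numpy vectors (this is not the paper's evaluation code):

import numpy as np

def nearest_word(query, embeddings, exclude=()):
    # Return the word whose vector is most cosine-similar to `query`,
    # skipping the words used to build the query.
    best_word, best_sim = None, -1.0
    for word, vec in embeddings.items():
        if word in exclude:
            continue
        sim = np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# "Germany" : "Berlin" :: "France" : ?
# query = embeddings["Berlin"] - embeddings["Germany"] + embeddings["France"]
# nearest_word(query, embeddings, exclude={"Berlin", "Germany", "France"})
# With well-trained vectors, the answer is "Paris".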
Learning Phrases
Bigrams are scored as shown below, and high-scoring pairs are treated as single tokens
Run 2-4 passes over the data to get phrases longer than two words
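The bigram score used in the paper:

$$\mathrm{score}(w_i, w_j) = \frac{\mathrm{count}(w_i w_j) - \delta}{\mathrm{count}(w_i) \times \mathrm{count}(w_j)}$$

Bigrams whose score exceeds a chosen threshold are joined into one token (e.g., "New_York"); δ is a discounting coefficient that prevents phrases being formed from very infrequent words.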
Additive Compositionality
The Skip-gram representations exhibit another kind of linear structure that makes it possible to meaningfully combine words by an element-wise addition of their vector representations
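A small sketch of that element-wise addition, again with hypothetical vectors for illustration (the Russian/Volga River pairing is the paper's own example):

import numpy as np

# Hypothetical embeddings, for illustration only. In the paper's experiments,
# vec("Russian") + vec("river") lies close to vec("Volga River").
vec_russian = np.array([0.7, 0.2, 0.1])
vec_river   = np.array([0.1, 0.6, 0.3])

composed = vec_russian + vec_river   # element-wise addition of the two vectors
# With a real model, the nearest neighbour of `composed` (e.g. via the
# nearest_word helper from the analogy sketch) would be a phrase like "Volga River".
print(composed)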
Comparison to Published Word Representations
Any Questions?