Linguistic Regularities in Sparse and Explicit Word Representations
Omer Levy and Yoav Goldberg
Bar-Ilan University, Israel
Papers in ACL 2014*
*Sampling error: +/- 100%
Neural Embeddings
Representing words as vectors is not new!
Explicit Representations (Distributional)
Questions
- Are analogies unique to neural embeddings? (compare neural embeddings with explicit representations)
- Why does vector arithmetic reveal analogies? (unravel the mystery behind neural embeddings and their “magic”)
Background
Mikolov et al. (2013a,b,c)
Neural embeddings have interesting geometries. These patterns capture “relational similarities” and can be used to solve analogies: man is to woman as king is to queen.
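To make the arithmetic concrete, here is a minimal sketch with toy vectors (the 3-d vectors and tiny vocabulary are invented for illustration; real embeddings come from a trained model such as word2vec):

```python
import numpy as np

# Toy vectors standing in for trained embeddings (invented for illustration).
vecs = {
    "man":   np.array([0.9, 0.1, 0.0]),
    "woman": np.array([0.1, 0.9, 0.0]),
    "king":  np.array([0.9, 0.1, 0.8]),
    "queen": np.array([0.1, 0.9, 0.8]),
}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# man : woman :: king : ?   ->   argmax_w cos(w, king - man + woman)
target = vecs["king"] - vecs["man"] + vecs["woman"]
candidates = [w for w in vecs if w not in {"man", "woman", "king"}]
print(max(candidates, key=lambda w: cos(vecs[w], target)))  # queen
```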
Are analogies unique to neural embeddings?
Experiment: compare embeddings to explicit representations. Learn different representations from the same corpus (the explicit side is sketched below).
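A minimal sketch of building sparse explicit vectors from a corpus (the toy corpus and window size are assumptions; the paper's explicit representation weights word-context pairs by positive PMI):

```python
import math
from collections import Counter

# Toy corpus and window size, for illustration only.
corpus = [["the", "king", "rules", "the", "land"],
          ["the", "queen", "rules", "the", "land"]]
window = 2

pair, word, ctx, total = Counter(), Counter(), Counter(), 0
for sent in corpus:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                pair[(w, sent[j])] += 1
                word[w] += 1
                ctx[sent[j]] += 1
                total += 1

def ppmi(w, c):
    # positive pointwise mutual information of a word-context pair
    if not pair[(w, c)]:
        return 0.0
    p = (pair[(w, c)] / total) / ((word[w] / total) * (ctx[c] / total))
    return max(0.0, math.log(p))

# Sparse explicit vector for "king": context word -> PPMI weight
king = {c: ppmi("king", c) for c in ctx if pair[("king", c)]}
print(king)
```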
Analogy Datasets (MSR and Google)
Embedding vs Explicit (Round 1)
Many analogies recovered by explicit, but many more by embedding.
Why does vector arithmetic reveal analogies?
royal? female?
What does each similarity term mean? Observe the joint features with explicit representations!
uncrowned    Elizabeth
majesty      Katherine
second       impregnate
…            …
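A hedged sketch of how such joint features can be read off sparse vectors (the function name and dict-of-weights interface are hypothetical):

```python
# With sparse explicit vectors, the dot product decomposes over shared
# contexts, so the features driving a similarity can be inspected directly.
def top_joint_features(vec_a, vec_b, k=3):
    """Contexts contributing most to sim(a, b); vectors are dicts
    mapping context -> weight (e.g. the PPMI vectors sketched earlier)."""
    shared = set(vec_a) & set(vec_b)
    return sorted(shared, key=lambda c: vec_a[c] * vec_b[c], reverse=True)[:k]
```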
Can we do better?
Let’s look at some mistakes…
The Additive Objective
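The slide's formula is not in the transcript; from the paper, the additive objective (3CosAdd) for an analogy a : a* :: b : b* is:

```latex
\arg\max_{b^*}\ \cos(b^*,\, b - a + a^*)
  \;=\; \arg\max_{b^*}\ \big[\cos(b^*, b) - \cos(b^*, a) + \cos(b^*, a^*)\big]
```

The equality holds for length-normalized vectors (up to a constant factor that does not affect the argmax), which is why vector arithmetic is equivalent to similarity arithmetic.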
Problem: one similarity might dominate the rest.
- Much more prevalent in explicit representations
- Might explain why explicit underperformed
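A hypothetical numeric illustration (numbers invented for exposition): suppose for some candidate b* the three terms are cos(b*, b) = 0.75, cos(b*, a) = 0.10, cos(b*, a*) = 0.05. The additive score 0.75 - 0.10 + 0.05 = 0.70 is carried almost entirely by the first term, even though b* is barely related to a*; a multiplicative combination would let the weak 0.05 term drag the score down.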
How can we do better?
Instead of adding similarities, multiply them!
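For reference, the paper's multiplicative objective (3CosMul):

```latex
\arg\max_{b^*}\ \frac{\cos(b^*, b)\,\cos(b^*, a^*)}{\cos(b^*, a) + \varepsilon}
```

where ε is a small constant (the paper uses ε = 0.001) that prevents division by zero; since cosine can be negative, similarities are first shifted to be non-negative. A minimal sketch of both objectives side by side (the dict-of-vectors interface is an assumption, not the paper's code):

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(vecs, a, a_star, b, objective="mul", eps=0.001):
    """Return the b* maximizing 3CosAdd or 3CosMul over a toy
    dict mapping word -> np.ndarray."""
    def score(w):
        # shift cosines from [-1, 1] into [0, 1] so products are well-behaved
        s_b  = (cos(vecs[w], vecs[b]) + 1) / 2
        s_a  = (cos(vecs[w], vecs[a]) + 1) / 2
        s_as = (cos(vecs[w], vecs[a_star]) + 1) / 2
        if objective == "add":
            return s_b - s_a + s_as
        return s_b * s_as / (s_a + eps)
    return max((w for w in vecs if w not in {a, a_star, b}), key=score)
```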
Embedding vs Explicit (Round 2)
Multiplication > Addition
Explicit is on-par with Embedding
Embeddings are not “magical”
- Embedding-based similarities have a more uniform distribution
- The additive objective performs better on smoother distributions
- The multiplicative objective overcomes this issue
Conclusion
- Are analogies unique to neural embeddings? No! They occur in sparse and explicit representations as well.
- Why does vector arithmetic reveal analogies? Because vector arithmetic is equivalent to similarity arithmetic.
- Can we do better? Yes! The multiplicative objective is significantly better.
More Results and Analyses (in the paper)
- Evaluation on closed-vocabulary analogy questions (SemEval 2012)
- Experiments with a third objective function (PairDirection)
- Do different representations reveal the same analogies?
- Error analysis
- A feature-level interpretation of how word similarity reveals analogies
Agreement

Dataset   Both Correct   Both Wrong   Embedding Correct   Explicit Correct
MSR       43.97%         28.06%       15.12%              12.85%
Google    57.12%         22.17%       9.59%               11.12%
Error Analysis: Default Behavior
A certain word acts as a “prototype” answer for its semantic type. Examples:
- daughter for feminine answers
- Fresno for US cities
- Illinois for US states
Their vectors are the centroid of that semantic type.
Error Analysis: Verb Inflections
In verb analogies (walked is to walking as danced is to…?), the correct lemma is often found (dance), but with the wrong inflection (dances). Probably an artifact of the window context.
The Iraqi Example
The Additive Objective
The Iraqi Example (Revisited)