1
Visualizing and Understanding Neural Models in NLP
Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky
Presentation by Rohit Gupta, Roll No
2
Motivation
Vector-based models produced by applying neural networks to natural language are very difficult to interpret
3
Dataset used
Stanford Sentiment Treebank: sentiment labels for every parse-tree constituent, from whole sentences down to individual words, for 11,855 sentences
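SST stores each sentence as a bracketed tree in which every constituent carries a 0-4 sentiment label. Below is a minimal Python sketch of reading that format; the tree string and its labels are illustrative toy values, and the parser is a simplified stand-in, not the official SST reader.

def parse_sst(s, i=0):
    """Parse '(label child child)' trees; return (label, phrase, children), next index."""
    assert s[i] == "("
    i += 1
    j = s.index(" ", i)
    label = int(s[i:j])          # sentiment label of this constituent
    i = j + 1
    children, tokens = [], []
    if s[i] != "(":              # leaf node: a single word
        j = s.index(")", i)
        tokens.append(s[i:j])
        i = j
    else:                        # internal node: parse each child subtree
        while s[i] == "(":
            child, i = parse_sst(s, i)
            children.append(child)
            tokens.extend(child[1].split())
            if s[i] == " ":
                i += 1
    assert s[i] == ")"
    return (label, " ".join(tokens), children), i + 1

tree, _ = parse_sst("(3 (2 not) (4 (3 very) (3 good)))")
print(tree[0], tree[1])          # label and phrase for the root constituent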
4
Models studied
- Standard recurrent sequence models with tanh activation functions
- LSTMs (Long Short-Term Memory)
- Bidirectional LSTMs
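As a concrete reference for the three architectures, here is a minimal PyTorch sketch. PyTorch and the layer sizes are assumptions for illustration; the paper does not prescribe a framework or these exact dimensions.

import torch.nn as nn

emb_dim, hid_dim = 60, 60   # illustrative sizes, not the paper's exact ones

vanilla_rnn = nn.RNN(emb_dim, hid_dim, nonlinearity="tanh", batch_first=True)
lstm        = nn.LSTM(emb_dim, hid_dim, batch_first=True)
bi_lstm     = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)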
5
Visualizations
- Compositionality: negation, intensification, concessive clauses
- Salience (contribution of a unit to the final composed meaning), measured three ways:
  - gradient back-propagation (first derivatives)
  - the variance of a token from the average word node
  - LSTM-style gates that measure information flow
6
Local compositionality
7
t-SNE visualization of representations for negation
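A minimal sketch of the t-SNE step, assuming you already have a matrix of learned phrase representations (e.g. hidden vectors for negated and non-negated phrases); random placeholders stand in for those vectors here.

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
reps = rng.normal(size=(40, 60))    # placeholder for learned phrase vectors
coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(reps)
print(coords.shape)                 # (40, 2): one 2-D point per phrase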
8
Concessive clause composition
9
Salience Method 1: First derivatives
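The first-derivative method scores each word by the magnitude of the gradient of the loss with respect to that word's embedding. A minimal PyTorch sketch, using a random stand-in classifier in place of a trained sentiment model:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(5 * 60, 2))   # stand-in classifier
emb = torch.randn(1, 5, 60, requires_grad=True)             # 5 words, 60-d embeddings

loss = nn.functional.cross_entropy(model(emb), torch.tensor([1]))
loss.backward()

salience = emb.grad.abs().mean(dim=-1).squeeze(0)   # average |gradient| per word
print(salience)                                     # one salience score per word

The paper plots the per-dimension absolute derivatives as a heatmap; averaging over embedding dimensions, as above, is one way to collapse them to a single score per word.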
10
Salience Method 2: Variance of word from sentence mean
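A minimal NumPy sketch of the variance method: each word is scored by the squared deviation of its embedding from the sentence's average word embedding. Random placeholders stand in for learned embeddings.

import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 60))          # placeholder: 5 words, 60-d embeddings

sentence_mean = emb.mean(axis=0)        # the "average word node"
salience = ((emb - sentence_mean) ** 2).mean(axis=1)   # per-word squared deviation
print(salience)                         # higher = word deviates more from the mean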
11
Salience Method 3: Gate models
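A minimal sketch of the gate idea: run a hand-rolled LSTM step so the gates are visible, and use the input-gate activation as a proxy for how much of each word flows into the cell state. The random weights and the mean-gate score are assumptions for illustration, not the paper's exact formulation.

import torch

torch.manual_seed(0)
d = 8                                   # toy embedding / hidden size
W = torch.randn(4 * d, 2 * d) * 0.1     # fused gate weights [i; f; o; g], untrained
b = torch.zeros(4 * d)

words = torch.randn(5, d)               # placeholder embeddings for 5 words
h = c = torch.zeros(d)
scores = []
for x in words:
    z = W @ torch.cat([x, h]) + b
    i, f, o = torch.sigmoid(z[:3 * d]).split(d)   # input, forget, output gates
    g = torch.tanh(z[3 * d:])                     # candidate cell update
    c = f * c + i * g                             # gated update of the cell state
    h = o * torch.tanh(c)
    scores.append(i.mean().item())      # how open the input gate was for this word

print(scores)                           # per-word information-flow score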