The experiment based on hierarchical attention
2018-11-20
Raymond ZHAO Wenlong
Content
- The (iCON) Design project pipeline:
- 1st step: word embedding
- Current step: deep learning such as CNN / RNN-LSTM to learn text representations (see the sketch below)
- 3rd step
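To make the current step concrete, here is a minimal sketch (PyTorch, not the project code) of a word-embedding layer feeding an RNN-LSTM that learns a text representation for classification; the vocabulary size, dimensions, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the real vocabulary and dimensions come from the project data.
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, NUM_CLASSES = 10000, 100, 64, 5

class LstmTextEncoder(nn.Module):
    """1st step: word embedding; current step: an RNN-LSTM learns the text representation."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM, padding_idx=0)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * HIDDEN_DIM, NUM_CLASSES)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)      # (batch, seq_len, EMBED_DIM)
        states, _ = self.lstm(embedded)           # (batch, seq_len, 2 * HIDDEN_DIM)
        doc_vector = states.mean(dim=1)           # simple average pooling as the text representation
        return self.classifier(doc_vector)        # (batch, NUM_CLASSES)

# Toy usage with random token ids
model = LstmTextEncoder()
fake_batch = torch.randint(1, VOCAB_SIZE, (2, 30))
print(model(fake_batch).shape)                    # torch.Size([2, 5])
```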
Text Classification
- Assign labels to text
- Classical approach: represent the document with sparse lexical features such as n-grams and use a linear or kernel model (see the baseline sketch below)
- Deep learning such as CNN / RNN-LSTM to learn text representations => Encoder-Decoder
- The attention mechanism tells the network where exactly, and how importantly, to look when it is trying to predict parts of a sequence
- Reference: the “hier-attention” paper
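The classical baseline mentioned above, sparse n-gram features with a linear model, looks roughly like the following scikit-learn sketch; the documents and labels are toy examples, not project data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled laptop-review snippets; real labels would come from the annotated data.
docs = [
    "battery lasts all day and charges fast",
    "screen is too small and very dim",
    "keyboard is comfortable, battery is fine",
    "the display flickers and the screen quality is poor",
]
labels = ["positive", "negative", "positive", "negative"]

# Sparse lexical features: unigrams and bigrams weighted by TF-IDF, fed to a linear classifier.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(docs, labels)
print(baseline.predict(["the screen is dim but the battery is great"]))
```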
Hier-attention network
- Hierarchical attention model, NAACL 2016 (a compact structural sketch follows below)
- Source code is on GitHub
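Since the slide only points at the paper and its GitHub code, here is a compact structural sketch (not the released implementation) of a hierarchical attention network: a word-level GRU plus attention builds sentence vectors, and a sentence-level GRU plus attention builds the document vector; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Additive attention with a learned context vector, reused at the word and sentence levels."""
    def __init__(self, dim, attn_dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, attn_dim)
        self.context = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, x):                                   # x: (batch, steps, dim)
        weights = torch.softmax(self.context(torch.tanh(self.proj(x))), dim=1)
        return (weights * x).sum(dim=1)                     # weighted sum: (batch, dim)

class HierAttentionNet(nn.Module):
    """Word GRU + attention -> sentence vectors; sentence GRU + attention -> document vector."""
    def __init__(self, vocab=10000, embed=100, hidden=50, classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab, embed, padding_idx=0)
        self.word_gru = nn.GRU(embed, hidden, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hidden)
        self.sent_gru = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.sent_attn = Attention(2 * hidden)
        self.out = nn.Linear(2 * hidden, classes)

    def forward(self, docs):                                # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        words = self.embedding(docs.view(b * s, w))         # encode every sentence's words
        word_states, _ = self.word_gru(words)
        sent_vecs = self.word_attn(word_states).view(b, s, -1)
        sent_states, _ = self.sent_gru(sent_vecs)           # encode the sequence of sentence vectors
        return self.out(self.sent_attn(sent_states))        # (batch, classes)

# Toy usage: 2 documents, 4 sentences each, 15 words per sentence
print(HierAttentionNet()(torch.randint(1, 10000, (2, 4, 15))).shape)   # torch.Size([2, 5])
```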
The paper’s results
- Performance on different datasets
The experiment
- A bit better on the ssize dataset, but not on the RAM dataset
TODO
- Data preprocessing
- Pre-training: the BERT model by Google => labels by the experts
- The BERT source code is on GitHub (a rough usage sketch follows below)
- Transfer user reviews (laptop) as user inputs => text generation?
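One possible shape for the BERT step is sketched below, using the Hugging Face transformers wrapper around the publicly released checkpoints rather than Google's original TensorFlow repository; the checkpoint name, label count, and example reviews are assumptions, and real use would add a fine-tuning loop on the expert labels.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical setup: a pre-trained BERT checkpoint with a 5-class review-label head.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=5)

# Laptop user reviews as inputs (toy examples).
reviews = [
    "The screen size is perfect for travel, but the keyboard feels cheap.",
    "Great battery life, and the RAM upgrade made it much faster.",
]
inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():                  # forward pass only; fine-tuning would add a training loop
    logits = model(**inputs).logits    # (batch, num_labels)
print(logits.argmax(dim=-1))           # predicted label ids (the untrained head makes these arbitrary)
```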
Thanks
You are welcome to join me.