1
The experiment based on hier-attention
Raymond ZHAO Wenlong
2
Content
In the (iCON) Design project:
- 1st step: word embedding (a minimal sketch follows below)
- Current step: deep learning like CNN/RNN-LSTM to learn text representations
- 3rd step: see the TODO slide
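To illustrate the 1st step, here is a minimal, hypothetical sketch of training word embeddings with gensim's Word2Vec. The toy corpus, the choice of tool, and all parameters are assumptions, not the iCON project's actual setup:

```python
# A minimal word-embedding sketch (hypothetical toy corpus);
# the project's actual corpus and parameters are assumptions here.
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
sentences = [
    ["the", "laptop", "screen", "is", "bright"],
    ["more", "ram", "makes", "the", "laptop", "faster"],
]

# Train 100-dimensional embeddings over a small context window.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

# Look up the learned vector for a word.
vec = model.wv["laptop"]
print(vec.shape)  # (100,)
```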
3
Text Classification
Assign labels to text.
- Traditional approach: represent a document with sparse lexical features such as n-grams, combined with a linear or kernel model.
- Deep learning approach: CNN/RNN-LSTM to learn text representations => encoder-decoder.
- The attention mechanism tells the network where exactly, and how strongly, to look when it tries to predict parts of a sequence (a minimal sketch follows below).
- Reference: the "hier-attention" paper.
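To make the attention idea concrete, here is a minimal, hypothetical sketch (not the paper's exact model): an LSTM encodes the tokens, and learned attention weights decide which time steps contribute most to the classification. All names and sizes are assumptions:

```python
# Attention over RNN outputs for text classification (hypothetical sketch).
# The attention weights mark which time steps the classifier "looks at".
import torch
import torch.nn as nn

class AttnTextClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hid_dim=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid_dim, 1)   # scores each time step
        self.out = nn.Linear(2 * hid_dim, n_classes)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))      # (batch, seq_len, 2*hid)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, seq_len, 1)
        context = (weights * h).sum(dim=1)      # weighted sum of states
        return self.out(context)

logits = AttnTextClassifier()(torch.randint(0, 5000, (8, 20)))
print(logits.shape)  # torch.Size([8, 2])
```

The softmax over time steps is what "tells where to look": large weights mark the tokens the classifier relies on most.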
4
Hierarchical attention network
Attention model by Yang et al., NAACL 2016; the source code is on GitHub (a two-level sketch follows below).
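A minimal sketch of the two-level structure described in the paper: word-level attention pools token states into sentence vectors, and sentence-level attention pools those into a document vector. The GRU encoders and additive attention follow the paper loosely; the concrete dimensions and class count here are assumptions:

```python
# Hierarchical attention network (Yang et al., NAACL 2016), sketched:
# word attention builds sentence vectors, sentence attention builds
# the document vector. Exact dimensions here are assumptions.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Additive attention pooling over a sequence of hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.ctx = nn.Linear(dim, 1, bias=False)  # learned context vector

    def forward(self, h):                         # h: (batch, steps, dim)
        scores = self.ctx(torch.tanh(self.proj(h)))
        alpha = torch.softmax(scores, dim=1)      # attention weights
        return (alpha * h).sum(dim=1)             # (batch, dim)

class HAN(nn.Module):
    def __init__(self, vocab=5000, emb=100, hid=50, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_gru = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.word_attn = AttnPool(2 * hid)
        self.sent_gru = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
        self.sent_attn = AttnPool(2 * hid)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, docs):             # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        h, _ = self.word_gru(self.emb(docs.view(b * s, w)))
        sents = self.word_attn(h).view(b, s, -1)   # sentence vectors
        h, _ = self.sent_gru(sents)
        return self.out(self.sent_attn(h))         # document logits

logits = HAN()(torch.randint(0, 5000, (4, 6, 15)))
print(logits.shape)  # torch.Size([4, 5])
```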
5
The paper's results: performance on different datasets.
6
The experiment: a bit better on the ssize dataset, but not on the RAM dataset.
7
TODO
- Data preprocessing => labels by the experts
- Pre-training: the BERT model by Google => the source is on GitHub (a minimal sketch follows below)
- Transfer user reviews (laptop) as user inputs => text generation?
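For the BERT item, one common route is the Hugging Face transformers library; this hypothetical sketch classifies a laptop review with a pre-trained BERT model. The label set and the example text are assumptions, not the project's actual data:

```python
# A minimal sketch of the planned BERT step, via Hugging Face transformers
# (one common way to use Google's released BERT; the labels and example
# review below are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., hypothetical expert labels
)

# A laptop review as user input (hypothetical example text).
inputs = tokenizer("The screen is sharp but the RAM is too small.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # predicted label probabilities
```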
8
Thanks! Welcome to join me.