The experiment based on hier-attention

Presentation transcript:

The experiment based on hier-attention 2018-11-20 Raymond ZHAO Wenlong

Content
In the (iCON) Design project:
- Word embedding in the 1st step
- Deep learning like CNN/RNN-LSTM to learn text representations in the current step (see the sketch below)
- In the 3rd step ...
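A minimal PyTorch sketch of the embedding-plus-CNN step above; the vocabulary size, dimensions, and class name are placeholders, not values from the iCON project:

```python
import torch
import torch.nn as nn

# Minimal sketch of the "word embedding -> CNN" step (placeholder sizes).
class CNNTextEncoder(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=200, n_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)        # 1st step: word embedding
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3)  # current step: learn a text representation

    def forward(self, token_ids):                                 # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids).transpose(1, 2)           # (batch, emb_dim, seq_len)
        feats = torch.relu(self.conv(emb))
        return feats.max(dim=2).values                            # max-over-time pooling -> (batch, n_filters)

enc = CNNTextEncoder()
print(enc(torch.randint(0, 20000, (4, 30))).shape)  # torch.Size([4, 100])
```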

Text Classification
Assign labels to text:
- Represent the doc with sparse lexical features like n-grams and a linear/kernel model (a small baseline sketch follows below)
- Deep learning like CNN/RNN-LSTM to learn text representations => Encoder-Decoder
- An attention mechanism tells the network where exactly, and how strongly, to look when it is trying to predict parts of a sequence
Reference: the "hier-attention" paper
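For the classic route (sparse n-gram features with a linear model), a small scikit-learn sketch with toy data, not the project's datasets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Baseline: sparse n-gram lexical features + linear classifier (toy example data).
texts = ["great battery life", "screen is too dim", "fast and light laptop"]
labels = [1, 0, 1]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LinearSVC(),                          # linear-kernel classifier
)
clf.fit(texts, labels)
print(clf.predict(["battery lasts long"]))
```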

Hier-attention network: the Hierarchical Attention Network model (NAACL 2016); the source code is on GitHub
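A rough PyTorch sketch of that model's two-level structure (word-level and sentence-level attention); the dimensions and names are illustrative, not the authors' released code:

```python
import torch
import torch.nn as nn

# Sketch of a Hierarchical Attention Network (word attention, then sentence attention).
class AttnPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1, bias=False))

    def forward(self, x):                                    # x: (batch, steps, dim)
        w = torch.softmax(self.score(x), dim=1)              # importance of each step
        return (w * x).sum(dim=1)                            # weighted vector

class HAN(nn.Module):
    def __init__(self, vocab=20000, emb=200, hid=50, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_gru = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.word_attn = AttnPool(2 * hid)
        self.sent_gru = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
        self.sent_attn = AttnPool(2 * hid)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, docs):                                 # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        words, _ = self.word_gru(self.emb(docs.view(b * s, w)))
        sents = self.word_attn(words).view(b, s, -1)         # sentence vectors
        sents, _ = self.sent_gru(sents)
        return self.out(self.sent_attn(sents))               # document vector -> class scores

logits = HAN()(torch.randint(0, 20000, (2, 6, 15)))          # 2 docs, 6 sentences, 15 words each
print(logits.shape)                                          # torch.Size([2, 5])
```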

The paper's results: performance on different datasets

The experiment: a bit better on the ssize dataset, but not on the RAM dataset

TODO
- Data preprocessing
- Pre-training: BERT model by Google => labels by the experts
- Pre-training: BERT model by Google => source is on GitHub (a possible starting point is sketched below)
- Transfer user reviews (laptop) as user inputs => text generation?
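One possible starting point for the BERT pre-training item, assuming the publicly released BERT weights are loaded through the HuggingFace transformers library (an assumption about tooling, not the project's final setup):

```python
import torch
from transformers import BertTokenizer, BertModel

# Load the released BERT weights and extract a sentence vector for a laptop review.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

review = "The laptop screen is bright and the battery lasts all day."
inputs = tokenizer(review, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_vector = outputs.last_hidden_state[:, 0]  # representation from the [CLS] token
print(cls_vector.shape)                       # torch.Size([1, 768])
```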

Thanks. You are welcome to join me.