Representation Learning with Deep Auto-Encoder


Hanock Kwak (hnkwak@bi.snu.ac.kr) and Byoung-Tak Zhang (btzhang@bi.snu.ac.kr)
Department of Computer Science and Engineering, Seoul National University

Backgrounds

Dimensionality reduction facilitates the classification, visualization, communication, and storage of high-dimensional data. An auto-encoder is a nonlinear generalization of PCA that uses an adaptive, multilayer encoder network to transform high-dimensional data into a low-dimensional code, together with a decoder network that reconstructs the data from that code. Gradient descent can be used to fine-tune the weights of these networks.

Methods

Starting with random weights in the two networks, the encoder and decoder can be trained together by minimizing the discrepancy between the original data and its reconstruction. The required gradients are obtained by using the chain rule to backpropagate error derivatives first through the decoder network and then through the encoder network. (A minimal training sketch is appended after the references.)

Experimental Results

Reconstruction results: [Figure: original inputs and their reconstructions]

Interpolation on each layer: [Figure: interpolations computed on the raw input and on hidden layers h1, h2, h3]

Loss curves for each layer: [Figure: training loss curves for h1, h2, h3]

Reconstruction of noisy inputs: Bernoulli random noise is applied to the inputs to test the robustness of the auto-encoder. (A sketch of this test is appended after the references.)

Discussion

The manifold of the digit data is flattened in the deep hidden layers, as shown by the interpolation experiments. A contractive penalty consistently helps an auto-encoder perform better, and its representations compete with or improve upon those learned by ordinary auto-encoders (a sketch of the penalty is appended after the references). The higher-level representations of the deep auto-encoder can eliminate minor noise in the input through the forward operations of the multilayer perceptron; the input becomes more abstract at deeper hidden layers.

References

Vincent, P., et al. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11, 3371-3408.
Bengio, Y., Mesnil, G., Dauphin, Y., & Rifai, S. (2013). Better mixing via deep representations. Proceedings of the 30th International Conference on Machine Learning (ICML 2013).
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
Rifai, S., et al. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. Proceedings of the 28th International Conference on Machine Learning (ICML-11).

Biointelligence Lab, Seoul National University | Seoul 151-744, Korea (http://bi.snu.ac.kr)
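Appended sketch 1: training the deep auto-encoder. The following is a minimal illustration, not code from the poster; it assumes PyTorch, 784-dimensional (MNIST-style) inputs, and illustrative layer sizes for the hidden layers h1, h2, h3.

    # Minimal sketch (not from the poster): a deep auto-encoder trained by
    # minimizing the squared discrepancy between the input and its reconstruction.
    # Layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Linear(784, 512), nn.ReLU(),   # h1
        nn.Linear(512, 128), nn.ReLU(),   # h2
        nn.Linear(128, 32),  nn.ReLU(),   # h3: low-dimensional code
    )
    decoder = nn.Sequential(
        nn.Linear(32, 128),  nn.ReLU(),
        nn.Linear(128, 512), nn.ReLU(),
        nn.Linear(512, 784), nn.Sigmoid(),  # reconstruction in [0, 1]
    )

    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(x):
        """One gradient-descent step on a batch x of shape (batch, 784)."""
        code = encoder(x)        # encode to the low-dimensional code
        recon = decoder(code)    # decode back to input space
        loss = loss_fn(recon, x) # reconstruction discrepancy
        optimizer.zero_grad()
        loss.backward()          # backprop through the decoder, then the encoder
        optimizer.step()
        return loss.item()

Calling train_step on mini-batches of flattened digit images trains both networks with ordinary gradient descent; the single backward() call propagates the reconstruction error first through the decoder and then through the encoder, as described in the Methods section.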
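Appended sketch 2: reconstruction of noisy inputs. This reuses the encoder and decoder from sketch 1; the multiplicative Bernoulli mask and the drop probability of 0.25 are assumptions for illustration, since the poster does not give the exact noise parameters.

    # Sketch (assumed details): corrupt the input with a Bernoulli mask so that
    # each pixel is randomly dropped, reconstruct with the trained auto-encoder,
    # and compare the reconstruction against the clean input.
    import torch

    def reconstruct_noisy(x, drop_prob=0.25):
        """x: clean batch of shape (batch, 784); drop_prob is an assumed noise level."""
        keep = torch.bernoulli(torch.full_like(x, 1.0 - drop_prob))  # Bernoulli mask
        x_noisy = x * keep                        # corrupted input
        with torch.no_grad():
            recon = decoder(encoder(x_noisy))     # forward pass through the deep AE
        error = ((recon - x) ** 2).mean()         # error against the clean input
        return recon, error.item()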
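Appended sketch 3: contractive penalty. The penalty mentioned in the discussion (Rifai et al., 2011) augments the reconstruction loss with the squared Frobenius norm of the Jacobian of the hidden code with respect to the input. The sketch below covers a single sigmoid encoder layer, where the Jacobian has a closed form; the layer sizes and the penalty weight lam are illustrative assumptions, and recon is assumed to be the decoder's output for x.

    # Sketch of a contractive auto-encoder loss for one sigmoid layer
    # h = sigmoid(W x + b); not code from the poster.
    import torch
    import torch.nn as nn

    W = nn.Parameter(torch.randn(128, 784) * 0.01)
    b = nn.Parameter(torch.zeros(128))

    def contractive_loss(x, recon, lam=1e-4):
        h = torch.sigmoid(x @ W.t() + b)                 # hidden code, shape (batch, 128)
        dh = h * (1 - h)                                 # sigmoid derivative
        jacobian_norm = (dh ** 2) @ (W ** 2).sum(dim=1)  # ||J_f(x)||_F^2 per example
        recon_err = ((recon - x) ** 2).sum(dim=1)        # reconstruction term
        return (recon_err + lam * jacobian_norm).mean()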