Deep Belief Networks
with Daniel L. Silver, Ph.D. and Christian Frey, BBA
Deep Learning Workshop, April 11-12, 2017

Feature detectors

What is this unit doing?

Hidden layer units become self-organised feature detectors. [Figure: the incoming weights of one hidden unit drawn over the input pixels 1-63; strong +ve weights stand out against low/zero weights.]

What does this unit detect? [Figure: one hidden unit's weights over pixels 1-63; strong +ve weights vs. low/zero weights.]

What does this unit detect? [Figure: strong +ve weights along the top row of pixels, low/zero weights everywhere else.] It will send a strong signal for a horizontal line in the top row, ignoring everywhere else.

And… what is this unit doing?

What does this unit detect? [Figure: another hidden unit's weights over pixels 1-63; strong +ve weights vs. low/zero weights.]

What does this unit detect? [Figure: strong +ve weights clustered in the top-left pixels, low/zero weights elsewhere.] Strong signal for a dark area in the top left corner.
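To make this concrete, here is a minimal NumPy sketch (not from the workshop materials) of what one hidden unit computes: a weighted sum of its pixel inputs passed through a sigmoid. The 9x7 grid (63 pixels), the weight values, and the test images are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rows, cols = 9, 7                  # assumed 63-pixel grid, matching the 1..63 indexing
w = np.zeros((rows, cols))
w[0, :] = 2.0                      # strong +ve weights on the top row, low/zero elsewhere

top_line = np.zeros((rows, cols)); top_line[0, :] = 1.0   # horizontal line in the top row
mid_line = np.zeros((rows, cols)); mid_line[4, :] = 1.0   # same line, lower down

for name, img in [("top-row line", top_line), ("mid-row line", mid_line)]:
    # The unit fires strongly (~1.0) only when the line overlaps its +ve weights.
    print(name, sigmoid(np.sum(w * img)))
```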

What features might you expect a good NN to learn, when trained with data like this?

Vertical lines

Horizontal lines

Small circles

Small circles. But what about position and scale invariance? These feature detectors are tied to specific parts of the image.

Successive layers can learn higher-level features…
Nodes of Layer 1: detect a horizontal dark line in specific positions, etc.
Nodes of Layer 2: detect the presence of a horizontal dark line, a vertical dark line, or a dark circular area, in any position, etc.

Successive layers can learn higher-level features…
Nodes of Layer 1: detect a horizontal dark line in specific positions, etc.
Nodes of Layer 2: detect the presence of a horizontal dark line, a vertical dark line, or a dark circular area, in any position, etc.
Nodes of Output Layer: combine these features to indicate that the image is a 9.

So: multiple layers make sense. Many-layer neural network architectures can learn the underlying feature layers and generalise well to new test cases…

But how can we train BP NNs so that we overcome the vanishing gradient problem?

The new way to train multi-layer NNs… Develop h1 features using an autoencoder

Use an autoencoder: either a BP autoencoder or an RBM autoencoder.
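For concreteness, here is a minimal BP-autoencoder sketch in Keras/TensorFlow (the workshop uses TensorFlow, though this sketch assumes the modern tf.keras API; the 784-pixel input, 128-unit h1 layer, and random stand-in data are illustrative):

```python
import numpy as np
import tensorflow as tf

inputs  = tf.keras.Input(shape=(784,))
h1      = tf.keras.layers.Dense(128, activation="sigmoid", name="encode")(inputs)
decoded = tf.keras.layers.Dense(784, activation="sigmoid", name="decode")(h1)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to reproduce its input: x is both input and target.
x = np.random.rand(256, 784).astype("float32")   # stand-in for real image data
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)

# After training, the "encode" layer's activations are the learned h1 features.
encoder = tf.keras.Model(inputs, h1)
h1_features = encoder.predict(x, verbose=0)
```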

Stacked Auto-Encoders: Bengio (2007), following Deep Belief Networks (2006). Stack many (sparse) auto-encoders in succession and train them using greedy layer-wise training. Drop the decode (output) layer each time.

Deep Belief Networks

The new way to train multi-layer NNs… Develop h1 features Develop h2 features

The new way to train multi-layer NNs… Develop h1 features Develop h2 features Develop h3 features

The new way to train multi-layer NNs… Develop h1 features Develop h2 features Develop h3 features Develop h4 features

The new way to train multi-layer NNs… Develop h1 features. Develop h2 features. Develop h3 features. Develop h4 features. Add a final supervised layer, trained to predict the class based on the outputs of the previous layers.
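Putting the recipe together, here is a hedged Keras/TensorFlow sketch of greedy layer-wise training (layer sizes, epochs, and the random stand-in data are illustrative assumptions, not the workshop's Rbm.py): each autoencoder is trained on the previous layer's codes, its decoder is dropped, and a supervised softmax layer is added at the end.

```python
import numpy as np
import tensorflow as tf

def train_layer(x, n_hidden):
    """Train one autoencoder on x; return its encoder and the hidden features."""
    inp = tf.keras.Input(shape=(x.shape[1],))
    h   = tf.keras.layers.Dense(n_hidden, activation="sigmoid")(inp)
    out = tf.keras.layers.Dense(x.shape[1], activation="sigmoid")(h)  # decoder, dropped afterwards
    ae  = tf.keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x, x, epochs=5, batch_size=32, verbose=0)
    encoder = tf.keras.Model(inp, h)
    return encoder, encoder.predict(x, verbose=0)

x = np.random.rand(256, 784).astype("float32")              # stand-in for image data
y = tf.keras.utils.to_categorical(np.random.randint(0, 10, 256))

codes, encoders = x, []
for size in (256, 128, 64, 32):                             # develop h1..h4 greedily
    enc, codes = train_layer(codes, size)
    encoders.append(enc)

# Final supervised layer: predict the class from the h4 features.
clf = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
clf.compile(optimizer="adam", loss="categorical_crossentropy")
clf.fit(codes, y, epochs=5, batch_size=32, verbose=0)
```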

Wide Variety of DBNs
Can be used for any problem; makes no assumptions about the inputs
Variations in architectures
Different kinds of autoencoders
Supervised learning of just the top layer(s)
Supervised learning of all layers

TUTORIAL 8: Develop and train a DBN using TensorFlow (Python code) – Rbm.py

Restricted Boltzmann Machine (RBM)
Does not allow intra-layer connections (bipartite architecture)
Learns both a recognition model and a generative model
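As a rough illustration of how an RBM is trained, here is a minimal NumPy sketch of contrastive divergence (CD-1) for a binary RBM; the layer sizes, learning rate, and random stand-in batch are illustrative assumptions. The bipartite restriction shows up as a single weight matrix W between the two layers, with no visible-visible or hidden-hidden connections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 784, 128, 0.1
W  = rng.normal(0.0, 0.01, (n_visible, n_hidden))   # no intra-layer weights
bv = np.zeros(n_visible)                            # visible biases
bh = np.zeros(n_hidden)                             # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v0 = (rng.random((32, n_visible)) < 0.5).astype(float)  # stand-in binary data batch

for step in range(10):
    # Up pass (recognition model): infer hidden units from the data.
    ph0 = sigmoid(v0 @ W + bh)
    h0  = (rng.random(ph0.shape) < ph0).astype(float)
    # Down pass (generative model): reconstruct visibles, then re-infer hiddens.
    pv1 = sigmoid(h0 @ W.T + bv)
    ph1 = sigmoid(pv1 @ W + bh)
    # CD-1 update: move toward the data statistics, away from the reconstruction's.
    W  += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    bv += lr * (v0 - pv1).mean(axis=0)
    bh += lr * (ph0 - ph1).mean(axis=0)
```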

RBM