Slide 1
Deep Learning Workshop
with Daniel L. Silver, Ph.D. and Christian Frey, BBA
April 11-12, 2017
Slide 2
Your Lecturers and Guides
Daniel L. Silver
- Grew up in the Annapolis Valley, NS
- BSc Acadia, CIM SMU, Ph.D. Western
- Software developer / technical architect from 1979 to 1992
- Machine learning and data mining since 1993
- Professor/Director, Jodrey School of Computer Science, Acadia University, since 2000
- Director of the Acadia Institute for Data Analytics since 2014
- Research area: inductive transfer and lifelong machine learning
Slide 3
Your Lecturers and Guides
Christian Frey
- Grew up in Vermont, USA
- BBA from Acadia in 2016, with a major in Computer Science
- Business/Data Analyst at AIDA for the past 10 months
- Website administrator for casenet.ca
Slide 4
Deep Learning Workshop
Agenda – Day 1
10:00 – Regression and BP ANNs (Python OR/XOR example)
10:30 – Generalization and the bias/variance tradeoff (Python BP example of train/validation graphs)
11:00 – Design, training, and evaluation of BP ANNs (example using letter classification)
11:30 – Workshop: basic BP code
12:00 – Limitations of traditional deep network architectures (example software)
<LUNCH>
01:30 – Unsupervised learning and BP autoencoders (autoencoder reconstructs letters)
02:00 – RBMs (Python RBM code reconstructs letters)
02:30 – Workshop: autoencoders and RBMs
03:00 – Intro to TensorFlow BP and the MNIST dataset (example code)
03:30 – Workshop: work through the TF and MNIST BP tutorial; review homework
Homework: Work through the Core and Contrib TF tutorials
Slide 5
Deep Learning Workshop
Agenda – Day 2
09:30 – For those students needing a head start
10:00 – Deep Belief Networks (example TF DBN with MNIST data)
10:30 – Workshop: DBNs
11:00 – Intro to Convolutional Neural Networks
11:30 – Intro to Keras and Conv2D
12:00 – Developing models for MNIST using TF Conv2D (example code provided)
12:30 – Workshop: CNNs
<LUNCH>
01:30 – Deep learning frameworks and parallel processing
02:00 – Intro to recurrent NNs and LSTM networks (example code provided)
02:30 – Workshop: RNNs
03:00 – Introduction to the assigned problem (TBA)
03:30 – Workshop: assignment
Homework: Comparison of DBN and CNN
Slide 6
Deep Learning Overview
with Daniel L. Silver, Ph.D. and Christian Frey, BBA
April 11-12, 2017
Slide 7
Deep Learning Overview
Developing artificial neural networks with many layers, versus shallow nets with just a couple of layers
- Multiple layers work together to build an improved feature space
- The lowest hidden layer learns 1st-order features (e.g. edges)
- Upper hidden layers learn higher-order features (combinations of lower-level features)
- Some models learn hidden-node representations in an unsupervised manner, discovering general features of the input space (DBNs)
- Some models learn features in a supervised manner based on architected networks (CNNs)
- The final layer of transformed features flows into the final supervised layer(s), and the whole stack is trained as one large network
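To make the layered feature-building idea concrete, below is a minimal sketch of such a network in TensorFlow's Keras API (which the workshop introduces on Day 2). The layer sizes, activations, and the 26-class letter-classification setup are illustrative assumptions, not the workshop's actual code.

```python
# A deep feedforward classifier: each hidden layer builds higher-order
# features on top of the layer below, and the whole stack is trained
# as one large supervised network with backpropagation.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),  # 1st-order features (e.g. edges)
    tf.keras.layers.Dense(128, activation="relu"),   # combinations of lower-level features
    tf.keras.layers.Dense(64, activation="relu"),    # higher-order features
    tf.keras.layers.Dense(26, activation="softmax")  # final supervised layer (26 letter classes)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show the training call; real inputs would be
# letter images flattened to 784 pixel values.
X = np.random.rand(100, 784).astype("float32")
y = np.random.randint(0, 26, size=100)
model.fit(X, y, epochs=2, batch_size=32)
```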
Slide 8
Deep Learning Architectures
A deep learning network can learn multiple levels of features from unlabeled data, simulating a human sensory modality.
Source: Unsupervised Feature Learning and Deep Learning – Andrew Ng, Stanford University
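As a sketch of how features can be learned from unlabeled data, here is a minimal autoencoder in the same Keras API; the dimensions and the single 64-unit code layer are assumptions for illustration, not code from Ng's tutorial.

```python
# An autoencoder learns features without labels by reconstructing its
# own input; the narrow code layer becomes the learned feature
# representation. Dimensions are illustrative.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
code = tf.keras.layers.Dense(64, activation="relu")(inputs)      # learned features
recon = tf.keras.layers.Dense(784, activation="sigmoid")(code)   # reconstruction

autoencoder = tf.keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")

# No labels needed: the input is its own training target.
X = np.random.rand(100, 784).astype("float32")
autoencoder.fit(X, X, epochs=2, batch_size=32)

# The code layer can then feed later supervised layers.
encoder = tf.keras.Model(inputs, code)
features = encoder.predict(X)  # shape (100, 64)
```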
Slide 9
Deep Learning Workshop
Why Deep Learning?
- Biological plausibility, e.g. the visual cortex
- Håstad's proof: problems that can be represented with a polynomial number of nodes using k layers may require an exponential number of nodes with only k-1 layers (e.g. parity)
- Highly varying functions can be represented efficiently with deep architectures, with fewer weights to update than in a less efficient shallow representation
- Sub-features created in a deep architecture can potentially be shared between multiple tasks – transfer and lifelong machine learning
Slide 10
History of Deep Learning Networks
1980 – Fukushima: the Neocognitron
1986 – Rumelhart et al.: backpropagation networks
1989 – LeCun: convolutional neural nets for image, speech, etc.
1990s – Interest subsides as other models are introduced – SVMs, graphical models, etc. – each has their turn...
2006 – Deep Belief Networks (Hinton) and stacked autoencoders (Bengio): unsupervised pre-training followed by supervised learning
Initial successes with supervised approaches that overcome the vanishing gradient problem and are more generally applicable
2013 – Schmidhuber: deep recurrent neural networks (LSTMs, GRUs)
Slide 11
Deep Learning Workshop
References:
Unsupervised Feature Learning and Deep Learning – Andrew Ng, Stanford University