1
Deep Belief Networks Psychology 209 February 22, 2013
2
Why a Deep Network? Why not just one layer of hidden units?
A single hidden layer fails to capture constraints on the problem and, for many problems, requires exponential hardware. Two examples: parity, and letters × positions.
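As a concrete illustration of the parity example (my own sketch in NumPy, not from the slides): a deep network can compute N-bit parity by chaining small XOR stages, so the number of units grows only linearly with N, whereas the shallow, single-layer solution the slide has in mind needs a number of feature detectors that grows exponentially with N.

# Sketch: N-bit parity with a deep chain of threshold units.
# Each XOR stage is a fixed 2-unit gadget, so the total cost is O(N) units.
import numpy as np

def step(x):
    return (x > 0).astype(float)

def xor_stage(a, b):
    """One small two-layer XOR gadget built from threshold units."""
    h_or = step(a + b - 0.5)            # fires if a OR b
    h_and = step(a + b - 1.5)           # fires if a AND b
    return step(h_or - h_and - 0.5)     # a XOR b

def deep_parity(bits):
    """Chain XOR stages: parity of N bits with roughly 3N threshold units."""
    p = bits[0]
    for x in bits[1:]:
        p = xor_stage(p, x)
    return p

bits = np.array([1., 0., 1., 1., 0., 1.])
print(deep_parity(bits), int(bits.sum()) % 2)   # both give the parity (0 here)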
5
But, says LeCun…
8
Stacked Auto-Encoders
To capture intermediate-level structure, one might use stacked auto-encoders. But training can be very slow as more layers are added: backprop slows exponentially in the number of layers.
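Here is a minimal sketch of the idea, assuming PyTorch (layer sizes, learning rate, and the use of mean-squared reconstruction error are my illustrative choices, not details from the slides): each layer is trained on its own to reconstruct its input, and the resulting codes become the input to the next layer.

# Greedy layerwise training of stacked auto-encoders (illustrative sketch).
import torch
import torch.nn as nn

def train_autoencoder_layer(data, n_hidden, epochs=10, lr=1e-3):
    """Train one encoder/decoder pair to reconstruct its own input."""
    n_in = data.shape[1]
    encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
    decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(encoder(data))
        loss = nn.functional.mse_loss(recon, data)   # per-layer reconstruction error
        loss.backward()
        opt.step()
    return encoder

data = torch.rand(256, 784)          # stand-in for a batch of images
encoders, h = [], data
for n_hidden in [256, 64]:
    enc = train_autoencoder_layer(h, n_hidden)
    encoders.append(enc)
    h = enc(h).detach()              # codes become the next layer's input

Because each layer is trained separately, no error signal has to travel through the whole stack; that is the motivation for greedy layerwise schemes given the slow-backprop problem noted above.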
9
The deep belief network vision (Hinton)
Consider some sense data D. We imagine our goal is to understand what generated it. We use a generative model: search for the most probable ‘cause’ C of the data, i.e. the one for which p(D|C)p(C) is greatest. How do we find C?
[Diagram: a ‘Cause’ node generating the ‘Data’]
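In symbols (my own gloss on the slide, using its p(D|C)p(C)): picking the most probable cause is maximum-a-posteriori inference, and the evidence p(D) can be dropped because it does not depend on C:

C^* \;=\; \arg\max_C \, p(C \mid D) \;=\; \arg\max_C \, \frac{p(D \mid C)\, p(C)}{p(D)} \;=\; \arg\max_C \, p(D \mid C)\, p(C)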
10
One and Two Layer Belief Networks
How should we train such networks?
14
Stacking RBMs: ‘greedy’ layerwise learning of RBMs
First learn H0 based on the input. Then learn H1 based on H0, and so on. Then ‘fine-tune’, says Hinton.
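A minimal NumPy sketch of this greedy scheme, assuming one-step contrastive divergence (CD-1) as the per-layer learning rule (sizes, learning rate, and epoch count are illustrative, not Hinton's actual settings):

# Greedy layerwise RBM training with CD-1 (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(v_data, n_hidden, epochs=10, lr=0.05):
    n_visible = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        p_h = sigmoid(v_data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # negative phase: one step of alternating Gibbs sampling
        p_v_recon = sigmoid(h @ W.T + b_v)
        p_h_recon = sigmoid(p_v_recon @ W + b_h)
        # CD-1 update: data correlations minus reconstruction correlations
        W += lr * (v_data.T @ p_h - p_v_recon.T @ p_h_recon) / len(v_data)
        b_v += lr * (v_data - p_v_recon).mean(axis=0)
        b_h += lr * (p_h - p_h_recon).mean(axis=0)
    return W, b_v, b_h

# Greedy stacking: H0 is learned from the input, H1 from H0's activities, etc.
data = (rng.random((500, 784)) < 0.1).astype(float)   # stand-in for binary images
layer_input, layers = data, []
for n_hidden in [500, 500]:
    W, b_v, b_h = train_rbm(layer_input, n_hidden)
    layers.append((W, b_v, b_h))
    layer_input = sigmoid(layer_input @ W + b_h)       # drives the next RBM

The final ‘fine tune’ step Hinton mentions (an up-down pass over the whole stack) is not shown here.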
19
Test Procedure
Generation: clamp a digit identity; do ‘alternating Gibbs sampling’ from a random starting image; send the state back down to see what it looks like.
Recognition: clamp an input pattern on the ‘retina’; feed up, performing alternating Gibbs sampling at the top levels.
Check out the movie.
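A schematic sketch of the generation procedure, with placeholder weights (W_top for the top-level associative memory, W_down_list for the top-down connections); this is my own simplification of the slide's description, not Hinton's code:

import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

def generate(label, W_top, b_top_h, b_top_v, W_down_list, b_down_list, steps=200):
    """Clamp a digit label at the top-level RBM, run alternating Gibbs sampling,
    then send the resulting state down through the lower layers."""
    n_label = label.shape[0]
    # visible layer of the top RBM = [label units | penultimate-layer units]
    v = np.concatenate([label, rng.random(W_top.shape[0] - n_label).round()])
    for _ in range(steps):                        # alternating Gibbs sampling
        h = sample(sigmoid(v @ W_top + b_top_h))
        v = sample(sigmoid(h @ W_top.T + b_top_v))
        v[:n_label] = label                       # keep the digit identity clamped
    x = v[n_label:]
    for W, b in zip(W_down_list, b_down_list):    # top-down pass toward the 'retina'
        x = sigmoid(x @ W + b)
    return x

# toy call with random weights, just to show the shapes involved
label = np.eye(10)[3]                             # one-hot code for digit '3'
W_top = 0.01 * rng.standard_normal((10 + 50, 100))
image = generate(label, W_top, np.zeros(100), np.zeros(60),
                 W_down_list=[0.01 * rng.standard_normal((50, 784))],
                 b_down_list=[np.zeros(784)])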
20
Close Calls (49) and Errors (125) out of 10,000 Test Digits
21
That’s great, says Yann LeCun…
But it doesn’t always work so well. We need to reduce the energy (increase the goodness) of the sample data (Y) and decrease the goodness of everything else (Y’). But there is too much ‘everything else’.
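In the standard energy-based formulation (my notation, not taken from the slide), the negative log-probability of a training sample Y is

-\log p(Y) \;=\; E(Y) \;+\; \log \sum_{Y'} e^{-E(Y')}

Minimizing the first term pulls down the energy of the data; minimizing the second term means pushing up the energy of every other configuration Y’, and that sum over ‘everything else’ is exactly what becomes intractable.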
22
LeCun’s view of Stacked Encoder Networks
Think of each layer as an encoder-decoder pair learning to minimize its own ‘reconstruction error’, which is roughly equivalent to ‘maximizing the probability of the training data’. Starting from this, can we make the encoder/decoder more powerful, and also more constrained, than an RBM?
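One way to unpack that rough equivalence (my gloss, assuming a fixed-variance Gaussian observation model): minimizing squared reconstruction error is the same as maximizing the log-probability the decoder assigns to the training data,

-\log p(x \mid \hat{x}) \;=\; \frac{\lVert x - \hat{x} \rVert^2}{2\sigma^2} + \text{const}, \qquad \hat{x} = \mathrm{decode}(\mathrm{encode}(x)).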
23
Two New Ideas and One Old
Force the representation to be sparse: it can’t represent too many possibilities, so most of input space is made bad automatically! Just pull down the energy of the samples and the rest will take care of itself.
Let the encoder be as smart as you want it to be: why just use one feed-forward layer on the encoder side of each layer? Why not use the full potential of a multi-layer network?
Force invariance by re-using the same weights at many positions across lower layers.
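A sketch combining the three ideas, assuming PyTorch (the architecture, sizes, and L1 sparsity weight are my illustrative choices, not LeCun's actual model): a convolutional encoder shares its weights across positions, the encoder is itself a small multi-layer network, and an L1 penalty on the code enforces sparsity alongside the reconstruction error.

import torch
import torch.nn as nn

class SparseConvAutoencoder(nn.Module):
    def __init__(self, n_features=16):
        super().__init__()
        # the encoder is itself a two-layer network, not a single linear map
        self.encoder = nn.Sequential(
            nn.Conv2d(1, n_features, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(n_features, n_features, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(n_features, 1, kernel_size=5, padding=2)

    def loss(self, x, sparsity_weight=0.1):
        code = self.encoder(x)
        recon = self.decoder(code)
        recon_err = nn.functional.mse_loss(recon, x)
        sparsity = code.abs().mean()          # L1 penalty keeps the code sparse
        return recon_err + sparsity_weight * sparsity

model = SparseConvAutoencoder()
x = torch.rand(8, 1, 28, 28)                  # stand-in for a batch of images
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = model.loss(x)
loss.backward()
opt.step()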
26
IMAGENET Large Scale Visual Recognition Challenge 2012
Tasks: Classification; Classification with Localization.
Training data: 1.2M images from 1,000 classes (e.g., English setter, Granny Smith, ladle).
Validation set: 50,000 images not in the training set.
Test set: 100,000 images not in the validation or training set.
An item is scored as correct if the correct answer is one of the network’s top 5 guesses.
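A small sketch of that top-5 scoring rule (my own illustration in NumPy, not the official evaluation code):

import numpy as np

def top5_accuracy(scores, true_labels):
    """scores: (n_items, n_classes) class scores; true_labels: (n_items,) class indices."""
    top5 = np.argsort(scores, axis=1)[:, -5:]          # the 5 highest-scoring classes per item
    hits = [label in row for row, label in zip(top5, true_labels)]
    return float(np.mean(hits))

scores = np.random.rand(100, 1000)                     # 100 items, 1,000 classes
labels = np.random.randint(0, 1000, size=100)
print('top-5 error:', 1.0 - top5_accuracy(scores, labels))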
27
The Results
SuperVision model: Our model is a large, deep convolutional neural network trained on raw RGB pixel values. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three globally-connected layers with a final 1000-way softmax. It was trained on two NVIDIA GPUs for about a week. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally-connected layers we employed hidden-unit "dropout", a recently-developed regularization method that proved to be very effective.
Dropout: for each presentation of an item during learning, force a fraction of the hidden units, chosen at random, to have activation value zero (a sketch follows below).
Classification (team, error rate): SuperVision; Runner-Up, .262
Localization (team, error rate): SuperVision; Runner-Up, .500
SuperVision team: Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton
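A minimal sketch of the dropout rule defined above (NumPy, my own illustration; the drop probability of 0.5 is the commonly used value, not necessarily what every SuperVision layer used):

import numpy as np

def dropout(activations, p=0.5, rng=None):
    """Zero each hidden unit independently with probability p (training only)."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p   # keep each unit with probability 1 - p
    return activations * mask

hidden = np.random.rand(4, 8)                   # a small batch of hidden activations
print(dropout(hidden))

At test time all hidden units are used (with activations scaled to compensate), which is not shown here.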