1
Lecture 3a: Analysis of training of neural networks
2
Agenda:
Analysis of deep networks: variance analysis
Non-linear units
Weight initialization
Local Response Normalization (LRN)
Batch Normalization
3
Understanding the difficulty of training convolutional networks
The key idea: debug training by monitoring the mean and variance of the activations $y_l$ (outputs of the non-linear units), and the mean and variance of the gradients $\frac{\partial E}{\partial y_{l-1}}$ and $\frac{\partial E}{\partial w_l}$.
Reminder: variance of $x$: $Var(x) = E_x\left[(x - \bar{x})^2\right]$.
We compute a scalar mean and variance for each layer, and then average over the images in the test set.
X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks
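As a concrete illustration of this monitoring idea, here is a minimal sketch (my own, not from the lecture): a toy MLP whose layer sizes, tanh non-linearity, naive initialization, and random "test" batches are all illustrative assumptions; it records the scalar mean and variance of each layer's activations and averages them over the batches.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [256, 128, 128, 128, 10]            # toy architecture (assumption)
weights = [rng.normal(0.0, 0.1, (n_in, n_out))    # naive init, for illustration only
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward_with_stats(x):
    """Return per-layer (mean, var) of the activations for a batch x."""
    stats = []
    h = x
    for w in weights:
        h = np.tanh(h @ w)                        # activation y_l = f(W_l h_{l-1})
        stats.append((h.mean(), h.var()))         # scalar mean/var for this layer
    return stats

# Average the per-layer statistics over several "test" batches.
batches = [rng.normal(size=(64, layer_sizes[0])) for _ in range(10)]
per_batch = np.array([forward_with_stats(b) for b in batches])   # (batches, layers, 2)
for l, (m, v) in enumerate(per_batch.mean(axis=0), start=1):
    print(f"layer {l}: mean = {m:+.4f}, var = {v:.4f}")
```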
4
Understanding the difficulty of training convolutional networks
Activation graph for an MLP: 4 x [fully connected layer + sigmoid]. Mean and standard deviation of the activations (output of the sigmoid) during learning, for the 4 hidden layers. The top hidden layer quickly saturates at 0 (slowing down all learning), but then slowly desaturates around epoch 100. The sigmoid is not symmetric around zero -> difficult to train.
5
Understanding the non-linear function behavior
Let's try an MLP with symmetric non-linear functions: tanh and soft-sign.
Tanh: $\frac{e^x - e^{-x}}{e^x + e^{-x}}$ | Soft-sign: $\frac{x}{1 + |x|}$
Figure: 98 percentiles (markers only) and standard deviation (solid lines with markers) of the activation values during learning.
6
MLP: Debugging the forward path
How can we use variance analysis to debug NN training? Let's start with the classical Multi-Layer Perceptron (MLP).
Forward propagation for an FC layer: $y_i = f\left(\sum_{j=1}^{n_{in}} w_{ij} x_j\right)$, where
$x = [x_j]$ - layer inputs, $n_{in}$ - number of inputs,
$W = [w_{ij}]$ - layer weight matrix,
$y = [y_i]$ - layer outputs (hidden nodes).
Assume that $f$ is a symmetric non-linear activation function. For the initial analysis we ignore the non-linear unit $f$ and its derivative ($f$ does not saturate and $f' \approx 1$).
7
MLP: Debugging the forward path
Assume that all $x_j$ are independent and have variance $Var(x)$, and all $w_{ij}$ are independent and have variance $Var(w)$. Then
$Var(y) = n_{in} \cdot Var(w) \cdot Var(x)$
We want to keep the output $y$ in the same dynamic range as the input $x$:
$n_{in} \cdot Var(w) = 1 \;\Rightarrow\; Var(w) = \frac{1}{n_{in}}$
Xavier rule for weight initialization with uniform rand(): $W = \text{unif\_rand}\left[-\sqrt{\frac{3}{n_{in}}},\; +\sqrt{\frac{3}{n_{in}}}\right]$
X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks
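A small numerical check (my own sketch, assuming a plain linear layer $y = xW$ with unit-variance inputs) that the fan-in Xavier rule indeed gives $Var(w) = 1/n_{in}$ and keeps $Var(y) \approx Var(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 512, 512, 10_000

limit = np.sqrt(3.0 / n_in)                     # Var of U[-a, a] is a^2/3 = 1/n_in
W = rng.uniform(-limit, limit, size=(n_in, n_out))

x = rng.normal(0.0, 1.0, size=(batch, n_in))    # unit-variance input
y = x @ W                                       # linear part of the FC layer

print(f"Var(x) = {x.var():.3f}, Var(y) = {y.var():.3f}")   # both close to 1.0
```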
8
MLP: Debugging the backward propagation
Backward propagation of gradients:
$\frac{\partial E}{\partial x} = \frac{\partial E}{\partial y} \cdot W^{T}$;  $\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial y_i} \cdot x_j$
Then
$Var\!\left(\frac{\partial E}{\partial x}\right) = n_{out} \cdot Var(w) \cdot Var\!\left(\frac{\partial E}{\partial y}\right)$
$Var\!\left(\frac{\partial E}{\partial w}\right) = Var(x) \cdot Var\!\left(\frac{\partial E}{\partial y}\right)$
We want to keep the gradients $\frac{\partial E}{\partial x}$ from vanishing and from exploding:
$n_{out} \cdot Var(w) = 1 \;\Rightarrow\; Var(w) = \frac{1}{n_{out}}$
Combining this with the formula from the forward path: $Var(w) = \frac{2}{n_{in} + n_{out}}$
Xavier rule 2 for weight initialization with uniform rand(): $W = \text{unif\_rand}\left[-\frac{\sqrt{6}}{\sqrt{n_{in} + n_{out}}},\; +\frac{\sqrt{6}}{\sqrt{n_{in} + n_{out}}}\right]$
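A sketch of the combined rule ("Xavier rule 2") as a reusable helper; the function name glorot_uniform and the layer sizes are my own choices:

```python
import numpy as np

def glorot_uniform(n_in: int, n_out: int, rng=np.random.default_rng()):
    """Uniform init with Var(w) = 2 / (n_in + n_out)."""
    limit = np.sqrt(6.0 / (n_in + n_out))       # Var of U[-a, a] is a^2/3
    return rng.uniform(-limit, limit, size=(n_in, n_out))

W = glorot_uniform(1024, 256)
print(W.var(), 2.0 / (1024 + 256))              # empirical vs. target variance
```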
9
Extension of gradient analysis for convolutional networks
Convolutional layer.
Forward propagation: $Y_j = f\left(\sum_{i=1}^{M} W_{ij} * X_i\right)$, where:
$Y_j$ - output feature map (H' x W')
$W_{ij}$ - convolutional filter (K x K)
$X_i$ - input feature map (H x W)
$W_{ij} * X_i$ - convolution of the input feature map $X_i$ with the filter $W_{ij}$
$M$ - number of input feature maps (each H x W)
Backward propagation:
$\frac{\partial E}{\partial X_i} = \sum_{j=1}^{N} \frac{\partial E}{\partial Y_j} * \widetilde{W}_{ij}$,  $\frac{\partial E}{\partial W_{ij}} = \frac{\partial E}{\partial Y_j} * X_i$
Here * denotes convolution, $\widetilde{W}_{ij}$ is the spatially flipped filter, and $N$ is the number of output feature maps.
10
Extension of gradient analysis for convolutional networks
Convolutional layer variance analysis.
Forward propagation: $Var(Y) = n_{in} \cdot Var(W) \cdot Var(X)$, where $n_{in}$ = (# input feature maps) $\cdot K^2$.
Backward propagation: $Var\!\left(\frac{\partial E}{\partial X}\right) = n_{out} \cdot Var(W) \cdot Var\!\left(\frac{\partial E}{\partial Y}\right)$, where $n_{out}$ = (# output feature maps) $\cdot K^2$.
For the weight gradients: $Var\!\left(\frac{\partial E}{\partial W}\right) \sim (H \cdot W) \cdot Var(X) \cdot Var\!\left(\frac{\partial E}{\partial Y}\right)$, since the weight gradient is accumulated over all spatial positions of the feature map.
We can compensate for the $(H \cdot W)$ factor with a per-layer learning rate.
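The same initialization rule carries over to convolutions once the fan-in and fan-out are counted per output element; a sketch assuming weights stored as [output maps, input maps, K, K]:

```python
import numpy as np

def conv_glorot_uniform(in_maps: int, out_maps: int, k: int,
                        rng=np.random.default_rng()):
    fan_in = in_maps * k * k                    # n_in  = #input maps  * K^2
    fan_out = out_maps * k * k                  # n_out = #output maps * K^2
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(out_maps, in_maps, k, k))

W = conv_glorot_uniform(in_maps=64, out_maps=128, k=3)
print(W.shape, W.var(), 2.0 / (64 * 9 + 128 * 9))
```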
11
Local Contrast Normalization
Local Contrast Normalization (LCN) can be performed on the state of every layer, including the input.
Subtractive LCN: subtracts from every value in a feature map a Gaussian-weighted average of its neighbors (acts as a high-pass filter).
Divisive LCN: divides every value in a layer by the standard deviation of its neighbors over space and over all feature maps.
Subtractive + Divisive LCN: both operations combined. A sketch follows below.
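A rough sketch of subtractive + divisive LCN on a stack of feature maps of shape [C, H, W]; the Gaussian width, the epsilon floor, and pooling the statistics across all maps by a simple mean are my assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(x, sigma=2.0, eps=1e-4):
    # Subtractive step: remove a Gaussian-weighted local average,
    # computed over space and averaged across feature maps.
    local_mean = gaussian_filter(x, sigma=(0, sigma, sigma)).mean(axis=0, keepdims=True)
    v = x - local_mean
    # Divisive step: divide by the local standard deviation of the neighbors,
    # again pooled over space and across all feature maps.
    local_var = gaussian_filter(v ** 2, sigma=(0, sigma, sigma)).mean(axis=0, keepdims=True)
    return v / np.maximum(np.sqrt(local_var), eps)

maps = np.random.default_rng(0).normal(size=(16, 32, 32))
print(local_contrast_normalize(maps).shape)
```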
12
Local Response Normalization Layer
The LRN layer "damps" responses that are too large by normalizing over a local neighborhood inside the feature map:
$y^{l}(x,y) = \dfrac{y^{l-1}(x,y)}{\left(1 + \frac{\alpha}{N} \sum_{x'=x-N/2}^{x+N/2} \; \sum_{y'=y-N/2}^{y+N/2} \left(y^{l-1}(x',y')\right)^2\right)^{\beta}}$
where $y^{l-1}(x,y)$ is the activity map prior to normalization and $N$ is the size of the region used for normalization. The constant 1 in the denominator prevents numerical issues for small values.
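A simplified numpy sketch of this within-channel normalization for a single feature map; N, alpha, and beta are illustrative defaults, not values from the lecture:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lrn_within_channel(fmap, N=5, alpha=1e-4, beta=0.75):
    # Sum of squared activations over an N x N spatial neighborhood.
    sq_sum = uniform_filter(fmap ** 2, size=N, mode="constant") * (N * N)
    scale = (1.0 + (alpha / N) * sq_sum) ** beta
    return fmap / scale

fmap = np.random.default_rng(0).normal(size=(32, 32))
print(lrn_within_channel(fmap).shape)
```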
13
Local Response Normalization Layer
Example architecture with LRN layers (listed from input to output):
Data Layer -> Convolutional layer -> ReLU -> Pooling -> LRN layer -> Convolutional layer -> ReLU -> Pooling -> LRN layer -> Inner Product -> SoftMax
14
Batch Normalization Layer
A layer which normalizes the output of a convolutional layer before the non-linear unit:
Whitening: normalize each element of the feature map over the mini-batch. All locations of the same feature map are normalized in the same way.
Adaptive scale $\gamma$ and shift $\beta$ (per feature map) are learned parameters.
S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015
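A minimal sketch of the training-mode forward pass for conv feature maps in NCHW layout: one mean/variance per feature map, shared over the batch and all spatial positions, followed by the learned per-map scale gamma and shift beta (the epsilon value is my assumption):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); one mean/var per feature map (channel).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

x = np.random.default_rng(0).normal(2.0, 3.0, size=(8, 16, 32, 32))
y = batchnorm_forward(x, gamma=np.ones(16), beta=np.zeros(16))
print(y.mean(), y.std())                        # approximately 0 and 1
```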
15
Batch Normalization Layer
Example architecture with Batch Normalization (listed from input to output):
Data Layer -> Convolutional layer -> Batch Normalization layer -> ReLU -> Pooling -> Convolutional layer -> Batch Normalization layer -> ReLU -> Pooling -> Inner Product -> SoftMax
16
Batch Normalization: training
Back-propagation for the BN layer propagates gradients through the normalization as well as through the batch mean and batch variance; a sketch is given below. Implemented in Caffe.
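A sketch of the backward pass following the chain-rule derivation in the Ioffe & Szegedy paper, written for a 2-D input of shape (m, D) with one mean/variance per column; variable names are mine, and this is not the Caffe implementation itself:

```python
import numpy as np

def batchnorm_backward(dout, x, gamma, eps=1e-5):
    m = x.shape[0]
    x_mu = x - x.mean(axis=0)
    inv_std = 1.0 / np.sqrt(x.var(axis=0) + eps)
    x_hat = x_mu * inv_std

    dgamma = np.sum(dout * x_hat, axis=0)       # dE/dgamma
    dbeta = np.sum(dout, axis=0)                # dE/dbeta

    dx_hat = dout * gamma                       # gradient w.r.t. normalized x
    dvar = np.sum(dx_hat * x_mu, axis=0) * -0.5 * inv_std ** 3
    dmean = np.sum(-dx_hat * inv_std, axis=0) + dvar * np.mean(-2.0 * x_mu, axis=0)
    dx = dx_hat * inv_std + dvar * 2.0 * x_mu / m + dmean / m
    return dx, dgamma, dbeta
```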
17
Batch Normalization: inference
During inference we don't have a mini-batch to normalize over, so we use instead a fixed mean and variance estimated over the whole training set:
$\hat{x} = \dfrac{x - E[x]}{\sqrt{Var[x] + \epsilon}}$
For testing during training we can use running estimates of $E[x]$ and $Var[x]$ accumulated over mini-batches.
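A sketch of how the running estimates can be maintained and then used at inference time; the exponential-moving-average update and the momentum value are illustrative choices:

```python
import numpy as np

class BNRunningStats:
    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def update(self, batch):                    # during training, per mini-batch
        self.mean = self.momentum * self.mean + (1 - self.momentum) * batch.mean(axis=0)
        self.var = self.momentum * self.var + (1 - self.momentum) * batch.var(axis=0)

    def normalize(self, x):                     # at inference time
        return (x - self.mean) / np.sqrt(self.var + self.eps)
```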
18
Batch Normalization: performance
Networks with batch normalization train much faster: a much higher learning rate with fast exponential decay can be used, and there is no need for LRN. Baseline: Caffe cifar_full; VGG-16: Caffe VGG_ILSVRC_16.
19
Batch Normalization: Imagenet performance
Models:
Inception (Google, ILSVRC 2014) with its original learning rate
BN-Baseline: Inception + Batch Normalization before each ReLU
BN-x5: Inception + Batch Normalization, without dropout and LRN; the initial learning rate was increased 5x
BN-x30: like BN-x5, but with the initial learning rate increased 30x relative to Inception
BN-x5-Sigmoid: like BN-x5, but with sigmoid instead of ReLU