Data Mining, Neural Network and Genetic Programming
COMP422 Week 7: Convolutional Neural Network
Yi Mei and Mengjie Zhang
Outline
Why CNN?
Automatic Feature Extraction
Feature Maps
Number of Parameters in CNN
CNN for Handwritten Digit Recognition
Traditional ANN
What is the meaning of the hidden layers/nodes?
Convolutional Neural Network
The traditional fully connected architecture is not efficient in most cases:
Too many weights, most of them redundant
Does not employ domain knowledge
Requires a huge amount of training data
Hard to interpret
Can we improve the architecture by employing domain knowledge?
In image processing, what domain knowledge can we use?
Image Classification
Input image: 32 × 32 pixels
Classification: face or non-face?
Build a neural network for this task. How do you design the input layer?
Automatic Feature Extraction
Directly use the raw pixel values, so there is no need to manually design high-level features
Use the neural network to automatically extract high-level features
This is done by convolution with a convolution mask
What is the mask for calculating the mean of this region?
Automatic Feature Extraction
Possible feature examples:
The mean pixel value in the 3 × 3 area centered at pixel (8, 5), i.e. at row 8 and column 5
The weighted-sum pixel value in the 3 × 3 area centered at pixel (r, c)
[Diagram: the 3 × 3 patch of input pixels centered at (r, c) = (8, 5) is multiplied element-wise by a 3 × 3 weight matrix whose entries are all 1/9 (the mean mask).]
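As a concrete illustration of the diagram above, here is a minimal NumPy sketch of the weighted-sum computation. The pixel values and image size are made up for illustration; only the all-1/9 mean mask and the (row 8, column 5) location come from the slide.

```python
import numpy as np

# A hypothetical grey-scale image (values made up for illustration).
image = np.random.randint(0, 256, size=(16, 16)).astype(float)

# 3 x 3 mean mask: every weight is 1/9, as in the slide's weight matrix.
mask = np.full((3, 3), 1.0 / 9.0)

# Feature value at pixel (r, c) = weighted sum of the 3 x 3 area centered there
# (0-based indexing here; the indexing convention does not change the idea).
r, c = 8, 5
patch = image[r - 1:r + 2, c - 1:c + 2]   # the 3 x 3 neighbourhood
feature_value = np.sum(patch * mask)      # element-wise product, then sum

print(feature_value)                      # equals the mean of the patch
```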
Automatic Feature Extraction
Parameters to set:
Size of the matrix (convolution mask)
The weight values
The region (center location)
How to set them?
Is there a dark area around the left-eye region? Use a 3 × 3 weight matrix with all weights 1/9, applied to the left-eye region
Is there a horizontal edge around the left-eyebrow region?
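A small sketch of masks for the two questions above. The mean mask follows the slide; the horizontal-edge mask is just one common choice (a Prewitt-style mask), not necessarily the one the lecture intends.

```python
import numpy as np

# "Dark area around the left eye?": 3 x 3 mean mask, all weights 1/9.
mean_mask = np.full((3, 3), 1.0 / 9.0)

# "Horizontal edge around the left eyebrow?": one common choice is a
# Prewitt-style mask whose response is large when the rows above and
# below the center differ strongly (i.e. at a horizontal edge).
horizontal_edge_mask = np.array([[-1.0, -1.0, -1.0],
                                 [ 0.0,  0.0,  0.0],
                                 [ 1.0,  1.0,  1.0]])

def region_feature(image, mask, r, c):
    """Weighted sum of the 3 x 3 area of `image` centered at (r, c)."""
    patch = image[r - 1:r + 2, c - 1:c + 2]
    return float(np.sum(patch * mask))
```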
Automatic Feature Extraction
Applying a convolution mask (weight matrix) to a region generates a feature value
If the mean value of the left-eye region is dark, there is a "left eye"
Applying a convolution mask to the entire image generates a feature map
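A minimal sketch of sliding one mask over the whole image to produce a feature map. The one-pixel shift and the "no padding" border handling are assumptions; the slide does not fix them.

```python
import numpy as np

def feature_map(image, mask, shift=1):
    """Slide `mask` over `image`, producing one feature value per position."""
    mh, mw = mask.shape
    ih, iw = image.shape
    out_h = (ih - mh) // shift + 1
    out_w = (iw - mw) // shift + 1
    fmap = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * shift:i * shift + mh, j * shift:j * shift + mw]
            fmap[i, j] = np.sum(patch * mask)   # plus a bias term in a real CNN
    return fmap

# Example: a 3 x 3 mean mask applied to a 16 x 16 image gives a 14 x 14 map.
image = np.random.rand(16, 16)
print(feature_map(image, np.full((3, 3), 1.0 / 9.0)).shape)   # (14, 14)
```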
Automatic Feature Extraction
A feature map can be seen as an enhanced (processed) image, obtained by applying some convolution mask to the original image
What parameters need to be set to generate a feature map?
Number of Parameters in CNN
Any parameter missing here?
[Diagram: a convolution mask applied at row r, column c of the input image.]
How many parameters are there for this feature map?
How many unique parameters are there for this feature map?
[Diagram: a 16 × 16 input image (256 input nodes), a 5 × 5 weight matrix as the receptive field, and an 8 × 8 feature map (64 hidden nodes).]
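A back-of-the-envelope count for the diagram above, assuming each hidden node sees a 5 × 5 receptive field plus a bias, and that with weight sharing all 64 nodes of the feature map reuse one 5 × 5 weight matrix and one bias (the bias being shared per map is an assumption).

```python
# 8 x 8 feature map -> 64 hidden nodes, each with a 5 x 5 receptive field.
hidden_nodes = 8 * 8                  # 64
params_per_node = 5 * 5 + 1           # 25 weights + 1 bias (bias assumed)

# Without weight sharing: every node has its own weights and bias.
total_parameters = hidden_nodes * params_per_node   # 64 * 26 = 1664

# With weight sharing (the CNN case): one shared 5 x 5 matrix + one shared bias.
unique_parameters = 5 * 5 + 1                       # 26

print(total_parameters, unique_parameters)
```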
Number of Parameters in CNN
Calculate the size of the feature map (output) from:
Input image size row_img × col_img
Filter (weight matrix) size row_filter × col_filter
Number of pixels for each shift row_shift × col_shift
Example: 16 × 16 input, 5 × 5 filter, 2 × 2 pixels per shift. What is the size of the feature map?
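A minimal sketch of this size calculation, assuming the mask is only placed fully inside the image (no padding); other border conventions, such as padding the input, give a larger map.

```python
def feature_map_size(n_img, n_filter, n_shift):
    """Mask positions along one dimension (rows or columns), mask kept inside."""
    return (n_img - n_filter) // n_shift + 1

# Example from the slide: 16 x 16 input, 5 x 5 filter, 2 x 2 pixels per shift.
rows = feature_map_size(16, 5, 2)   # (16 - 5) // 2 + 1 = 6
cols = feature_map_size(16, 5, 2)
print(rows, cols)                   # a 6 x 6 feature map under these assumptions
```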
Multiple Feature Maps
Different weight matrices and biases are used to extract different feature maps
How many parameters in total? How many unique parameters in total?
[Diagram: a 16 × 16 input image (256 input nodes) and twelve 8 × 8 feature maps (64 hidden nodes each), labelled Feature map 1 … Feature map 12.]
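Extending the single-map count to the question above, under the same assumptions (one shared 5 × 5 weight matrix and one shared bias per feature map).

```python
n_feature_maps = 12
hidden_nodes_per_map = 8 * 8          # 64
params_per_node = 5 * 5 + 1           # 25 weights + 1 bias

# Counting every connection and bias separately (no sharing):
total_parameters = n_feature_maps * hidden_nodes_per_map * params_per_node
# 12 * 64 * 26 = 19968

# Unique parameters with weight sharing: one mask + one bias per feature map.
unique_parameters = n_feature_maps * params_per_node   # 12 * 26 = 312

print(total_parameters, unique_parameters)
```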
CNN for Handwritten Digit Recognition
Proposed by LeCun et al. in 1989
[Diagram, built up over several slides: a 16 × 16 input image (256 input nodes) feeds twelve 8 × 8 feature maps (64 hidden nodes each, labelled Feature map 1 … Feature map 12) through 5 × 5 weight matrices (labels wm1 and wm8 appear in the original figure), and each feature map then feeds a 4 × 4 layer (16 hidden nodes) through further 5 × 5 weight matrices.]
CNN for Handwritten Digit Recognition
Subsampling
Instead of aggregating the input values with a weighted sum, we simply pick the maximum value (max pooling)
Usually a 2 × 2 mask
Compresses the image while keeping the key features
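A minimal sketch of 2 × 2 max-pooling subsampling, assuming non-overlapping windows and a feature map whose sides are even.

```python
import numpy as np

def max_pool_2x2(fmap):
    """2 x 2 subsampling: keep the maximum of each non-overlapping 2 x 2 block."""
    h, w = fmap.shape
    assert h % 2 == 0 and w % 2 == 0, "sides assumed even for this sketch"
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(64, dtype=float).reshape(8, 8)
print(max_pool_2x2(fmap).shape)   # (4, 4): the 8 x 8 map is compressed to 4 x 4
```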
Design of CNN
Number of hidden layers
Type of each layer: input, output, fully connected, feature map, subsampling, …
Configuration of fully connected layers: number of nodes
Configuration of feature map layers: number of feature maps (weight matrices), weight matrix size (3 × 3, 5 × 5, …), shift size (every one pixel, every two pixels, …), connections
Configuration of subsampling layers: pool size (2 × 2, …)
A concrete example configuration is sketched below.
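A hypothetical configuration written as plain Python data, only to make the list of design choices concrete. The input size, number of feature maps, mask size, shift and pool size come from this lecture's running example; the fully connected and output sizes are made-up illustrative values, and this is not a required format.

```python
# Hypothetical CNN design, expressed as plain data (illustration only).
cnn_design = {
    "layers": [
        {"type": "input",           "size": (16, 16)},
        {"type": "feature_map",     "n_maps": 12, "mask_size": (5, 5), "shift": (2, 2)},
        {"type": "subsampling",     "pool_size": (2, 2)},
        {"type": "fully_connected", "n_nodes": 30},    # made-up size
        {"type": "output",          "n_nodes": 10},    # e.g. ten digit classes
    ],
}
```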
Complete CNN Example
Weight Training in CNN
BP algorithm
Feedforward: each node forwards to its successors (not all nodes in the next layer)
Back error propagation: each node propagates error back to its predecessors (not all nodes in the previous layer)
Speed-up techniques: momentum, fan-in factor, …
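A sketch of the momentum speed-up on a single weight vector. The fan-in scaling in the comment is one common reading of a "fan-in factor" (dividing the learning rate by the number of incoming connections), an assumption rather than something stated on the slide.

```python
import numpy as np

def momentum_step(w, grad, velocity, learning_rate=0.1, momentum=0.9, fan_in=None):
    """One gradient-descent update with momentum.

    If `fan_in` is given, the learning rate is divided by it; this is one common
    interpretation of a 'fan-in factor' (an assumption, not from the slides).
    """
    lr = learning_rate / fan_in if fan_in else learning_rate
    velocity = momentum * velocity - lr * grad   # keep a fraction of the last step
    return w + velocity, velocity

# Toy usage: a 5 x 5 shared weight matrix flattened to a vector.
w = np.zeros(25)
v = np.zeros(25)
grad = np.random.randn(25)
w, v = momentum_step(w, grad, v, fan_in=25)
```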
Hinton Diagram
Visualise the weight matrix
Size of square → the value's magnitude
Color (black/white) → the value's sign (positive/negative)
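A minimal matplotlib sketch of a Hinton diagram, closely following the standard matplotlib demo: the square's size reflects the weight's magnitude, its colour the sign. White for positive and black for negative is an assumption; the slide only says the colour encodes the sign.

```python
import numpy as np
import matplotlib.pyplot as plt

def hinton(matrix, ax=None):
    """Draw a Hinton diagram: square size = magnitude, colour = sign."""
    ax = ax or plt.gca()
    ax.set_facecolor("gray")
    ax.set_aspect("equal")
    max_weight = np.abs(matrix).max()
    for (row, col), w in np.ndenumerate(matrix):
        colour = "white" if w > 0 else "black"         # sign convention assumed
        size = np.sqrt(abs(w) / max_weight)            # side length, area ~ |w|
        ax.add_patch(plt.Rectangle((col - size / 2, row - size / 2),
                                   size, size, facecolor=colour, edgecolor=colour))
    ax.autoscale_view()
    ax.invert_yaxis()

hinton(np.random.randn(5, 5))   # e.g. visualise a 5 x 5 weight matrix
plt.show()
```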
Summary
CNN for image processing
Use raw pixels and automated feature extraction
Domain knowledge about the neighbourhood of pixels: shared weights
Number of parameters in total vs. number of unique parameters