How it Works: Convolutional Neural Networks


Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng

Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller

Robot Learning Manipulation Action Plans by “Watching” Unconstrained Videos from the World Wide Web. Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos

A toy ConvNet: X’s and O’s. The input is a two-dimensional array of pixels; the CNN says whether the picture is of an X or an O.

For example: shown a picture of an X, the CNN answers X; shown an O, it answers O.

Trickier cases: the CNN should still answer correctly when the symbol is translated, scaled, rotated, or drawn with a different stroke weight.

Deciding is hard: is an imperfect, shifted drawing “equal” to the reference X?

What computers see: to a computer, an image is just a two-dimensional grid of numbers, one per pixel.

Computers are literal: compared number by number, a shifted or distorted X does not equal the stored X, so naive whole-image matching fails.

ConvNets match pieces of the image: instead of comparing whole images, they compare small features.

Features match pieces of the image. For the X, useful features are small 3×3 patches such as the diagonal strokes and the central crossing.
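
To make “a grid of numbers” concrete, here is a minimal sketch in NumPy. The 5×5 layout and the +1/−1 encoding are illustrative assumptions, not the exact arrays from the talk:

```python
import numpy as np

# A tiny X drawn on a 5x5 grid: +1 where the stroke is, -1 for background.
x_image = np.array([
    [ 1, -1, -1, -1,  1],
    [-1,  1, -1,  1, -1],
    [-1, -1,  1, -1, -1],
    [-1,  1, -1,  1, -1],
    [ 1, -1, -1, -1,  1],
])
print(x_image.shape)  # (5, 5) -- to the computer, nothing but numbers
```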

Filtering: The math behind the match. Line up the feature and the image patch. Multiply each image pixel by the corresponding feature pixel. Add them up. Divide by the total number of pixels in the feature.

Filtering, step by step: for a patch that matches the feature exactly, every pixel product is 1, because matching white pixels give 1 × 1 = 1 and matching black pixels give −1 × −1 = 1. Summing the nine products and dividing by 9:

(1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1) / 9 = 1

For an imperfect match, each mismatched pixel contributes −1 × 1 = −1. With two mismatches among the nine pixels:

(1 + 1 − 1 + 1 + 1 + 1 − 1 + 1 + 1) / 9 = .55
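
The same arithmetic as a minimal NumPy sketch; the diagonal feature and the two flipped pixels are illustrative assumptions:

```python
import numpy as np

# A hypothetical 3x3 diagonal feature, +1/-1 encoded as in the slides.
feature = np.array([[ 1, -1, -1],
                    [-1,  1, -1],
                    [-1, -1,  1]])

patch_perfect = feature.copy()     # matches the feature exactly
patch_imperfect = feature.copy()
patch_imperfect[0, 1] = 1          # flip two pixels so they disagree
patch_imperfect[1, 2] = 1

def match(feature, patch):
    """Multiply pixel by pixel, add up, divide by the pixel count."""
    return (feature * patch).sum() / feature.size

print(match(feature, patch_perfect))    # 1.0
print(match(feature, patch_imperfect))  # 0.555... (the .55 on the slide)
```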

Convolution: Trying every possible match. Slide the feature to every position on the image and compute the match score there; the result is a new image whose values show where the feature matches well.

Convolution layer: one image becomes a stack of filtered images.
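
A sketch of the sliding-window computation, reusing the match() arithmetic above (a “valid” convolution with stride 1; real toolkits add padding and stride options):

```python
import numpy as np

def convolve(image, feature):
    """Slide the feature across every position of the image,
    recording the normalized match score at each position."""
    fh, fw = feature.shape
    out_h = image.shape[0] - fh + 1
    out_w = image.shape[1] - fw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + fh, j:j + fw]
            out[i, j] = (feature * patch).sum() / feature.size
    return out

# One filtered image per feature gives the stack of filtered images:
# stack = [convolve(image, f) for f in features]
```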

Pooling: Shrinking the image stack Pick a window size (usually 2 or 3). Pick a stride (usually 2). Walk your window across your filtered images. From each window, take the maximum value.

Pooling: within each window, keep only the maximum value (max pooling).

Pooling layer A stack of images becomes a stack of smaller images.
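
A sketch of max pooling with the usual window size 2 and stride 2 (edge handling simplified; real toolkits pad or truncate partial windows):

```python
import numpy as np

def max_pool(image, size=2, stride=2):
    """Walk a window across the image; from each window, take the max."""
    out_h = (image.shape[0] - size) // stride + 1
    out_w = (image.shape[1] - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride:i * stride + size,
                           j * stride:j * stride + size]
            out[i, j] = window.max()
    return out
```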

Normalization: keep the math from breaking by tweaking each of the values just a bit. Here, the tweak is simple: change everything negative to zero.

Rectified Linear Units (ReLUs): wherever a negative value appears, swap it for a zero; positive values pass through unchanged.

ReLU layer A stack of images becomes a stack of images with no negative values.
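
The whole ReLU layer is one line in NumPy:

```python
import numpy as np

def relu(values):
    """Change everything negative to zero; leave the rest alone."""
    return np.maximum(values, 0)
```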

Layers get stacked: the output of one becomes the input of the next (Convolution → ReLU → Pooling).

Deep stacking: layers can be repeated several (or many) times (Convolution → ReLU → Pooling, again and again), as in the sketch below.
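
Continuing the sketches above, one Convolution → ReLU → Pooling block, with deep stacking as repeated application (single-channel only, for brevity):

```python
# Reuses convolve, relu, and max_pool from the sketches above.
def conv_relu_pool(image, feature):
    """One block of the stack: Convolution -> ReLU -> Pooling."""
    return max_pool(relu(convolve(image, feature)))

# Deep stacking: the output of one block is the input of the next.
# deeper = conv_relu_pool(conv_relu_pool(image, feature_a), feature_b)
```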

Fully connected layer: every value gets a vote.

Each vote depends on how strongly that value predicts X or O.

When a new input arrives, its values vote on X or O. The weighted votes are summed into a score for each category; here X gets .92 and O gets .51, so the network answers X.

Fully connected layer: a list of feature values becomes a list of votes.

Fully connected layers can also be stacked.
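
A sketch of the voting arithmetic; the values and weights here are hypothetical (in a real network the weights are learned by backpropagation):

```python
import numpy as np

def fully_connected(values, weights):
    """Every value gets a vote: each category's score is a
    weighted sum of all the input values."""
    return weights @ values

values = np.array([0.9, 0.65, 0.45, 0.87])   # flattened feature values
weights = np.array([[1.0, 0.0, 0.8, 0.9],    # row of weights for X
                    [0.2, 0.9, 0.1, 0.1]])   # row of weights for O
scores = fully_connected(values, weights)    # one score per category
```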

Putting it all together: a set of pixels becomes a set of votes. Convolution → ReLU → Convolution → ReLU → Pooling → Convolution → ReLU → Pooling → Fully connected → Fully connected → votes (X: .92, O: .51).

Learning. Q: Where do all the magic numbers come from: the features in convolutional layers and the voting weights in fully connected layers? A: Backpropagation.

Backprop: error = right answer − actual answer.

The error signal is passed backward through the network, layer by layer (fully connected, pooling, ReLU, convolution), so that every weight learns which way to move.

Gradient descent: for each feature pixel and voting weight, adjust it up and down a bit, see how the error changes, and move it in the direction that makes the error smaller. Repeated over many images, the weights slide downhill to a low-error setting.
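
A numerical sketch of this “nudge and see” procedure (real toolkits compute the same slopes analytically with the chain rule, which is far faster; error_fn here is a hypothetical function that runs the network on training data and returns the error):

```python
import numpy as np

def gradient_descent_step(weights, error_fn, step=0.01, nudge=1e-4):
    """Nudge each weight up and down a bit, see how the error
    changes, and move the weight a small step downhill."""
    new_weights = weights.copy()
    for i in range(weights.size):
        up, down = weights.copy(), weights.copy()
        up.flat[i] += nudge
        down.flat[i] -= nudge
        slope = (error_fn(up) - error_fn(down)) / (2 * nudge)
        new_weights.flat[i] = weights.flat[i] - step * slope
    return new_weights
```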

Hyperparameters (knobs):
- Convolution: number of features, size of features
- Pooling: window size, window stride
- Fully connected: number of neurons

Architecture How many of each type of layer? In what order?

Not just images: any 2D (or 3D) data works, as long as things closer together are more closely related than things far away.

- Images: rows of pixels × columns of pixels
- Sound: time steps × intensity in each frequency band
- Text: position in sentence × words in dictionary

Limitations ConvNets only capture local “spatial” patterns in data. If the data can’t be made to look like an image, ConvNets are less useful.

Example: customer data, a table with one row per customer and columns for name, age, address, email, purchases, browsing activity, …

Rule of thumb If your data is just as useful after swapping any of your columns with each other, then you can’t use Convolutional Neural Networks.

In a nutshell ConvNets are great at finding patterns and using them to classify images.

Also check out:
- Deep Learning Demystified
- Notes from the Stanford CS 231n course (Justin Johnson and Andrej Karpathy)
- The writings of Christopher Olah

Some ConvNet/DNN toolkits:
- Caffe (Berkeley Vision and Learning Center)
- CNTK (Microsoft)
- Deeplearning4j (Skymind)
- TensorFlow (Google)
- Theano (University of Montreal + broad community)
- Torch (Ronan Collobert)
- Many others

Thanks for listening! Connect with me online: Brandon Rohrer on LinkedIn, @_brohrer_ on Twitter, #HowItWorks #ConvNet, brohrer@microsoft.com