Overview of Neural Network Architecture Assignment Code

Geoff Hulten

PyTorch

Deep learning platform – easy to use
Install from: https://pytorch.org/get-started/locally/
CUDA enables GPU training – 100s of times faster
Docs:
  https://pytorch.org/docs/stable/index.html
  https://pytorch.org/docs/stable/nn.html   <- important
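
A minimal sketch of how GPU training is usually enabled; the device variable and the .to(device) calls are illustrative additions, not part of the assignment code:

    import torch

    # Use the GPU if CUDA is available, otherwise fall back to the CPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Move the model and the data tensors to the chosen device before training
    model = model.to(device)
    xTrain = xTrain.to(device)
    yTrain = yTrain.to(device)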

Define the Neural Network Parts

    import torch

    class SimpleBlinkNeuralNetwork(torch.nn.Module):
        def __init__(self, hiddenNodes = 20):
            super(SimpleBlinkNeuralNetwork, self).__init__()

            # Down sample the image to 12x12
            self.avgPooling = torch.nn.AvgPool2d(kernel_size = 2, stride = 2)

            # Fully connected layer from all the down-sampled pixels to all the hidden nodes
            self.fullyConnectedOne = torch.nn.Sequential(
                torch.nn.Linear(12*12, hiddenNodes),
                torch.nn.Sigmoid())

            # Fully connected layer from the hidden layer to a single output node
            self.outputLayer = torch.nn.Sequential(
                torch.nn.Linear(hiddenNodes, 1),
                torch.nn.Sigmoid())   # assumed: the slide cuts off here; a sigmoid keeps the output in [0, 1]
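
A quick way to sanity-check the layer definitions (an illustrative addition, not from the slides) is to instantiate the module and print it:

    net = SimpleBlinkNeuralNetwork(hiddenNodes = 20)
    print(net)   # lists avgPooling, fullyConnectedOne, and outputLayer with their layer parameters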

Run Forward Pass

    def forward(self, x):
        # Apply the layers created at initialization time in order
        out = self.avgPooling(x)
        out = out.reshape(out.size(0), -1)   # flatten the pooled pixels into one feature vector per example
        out = self.fullyConnectedOne(out)
        out = self.outputLayer(out)
        return out
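
To check the tensor shapes end to end, you can push a fake image through the network; the 24x24 input size is an assumption implied by the 2x2 pooling down to 12x12:

    dummy = torch.zeros(1, 1, 24, 24)   # (batch, channels, height, width) for one fake grayscale image
    net = SimpleBlinkNeuralNetwork(hiddenNodes = 20)
    print(net(dummy).shape)   # expected: torch.Size([1, 1])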

Load the Data into Tensors

    from PIL import Image
    import torchvision.transforms as transforms
    import torch

    # Load the images and then convert them into tensors (no normalization)
    xTrainImages = [ Image.open(path) for path in xTrainRaw ]
    xTrain = torch.stack([ transforms.ToTensor()(image) for image in xTrainImages ])

    xTestImages = [ Image.open(path) for path in xTestRaw ]
    xTest = torch.stack([ transforms.ToTensor()(image) for image in xTestImages ])

    yTrain = torch.Tensor([ [ yValue ] for yValue in yTrainRaw ])
    yTest = torch.Tensor([ [ yValue ] for yValue in yTestRaw ])
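
The slide deliberately skips normalization; if you want to try it, a sketch like the following is the usual torchvision pattern (the mean/std values are placeholders, not from the assignment):

    normalize = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean = [0.5], std = [0.5])   # placeholder statistics for 1-channel images
    ])
    xTrain = torch.stack([ normalize(image) for image in xTrainImages ])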

Create the Model and Support

    import SimpleBlinkNeuralNetwork   # the module (file) that defines the SimpleBlinkNeuralNetwork class

    # Create the model and set up:
    #   the loss function to use (Mean Square Error)
    #   the optimization method (Stochastic Gradient Descent) and the step size
    model = SimpleBlinkNeuralNetwork.SimpleBlinkNeuralNetwork(hiddenNodes = 5)
    lossFunction = torch.nn.MSELoss(reduction='sum')
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
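
The final slide suggests tuning the optimizer; as one illustrative alternative (not the assignment's setting), Adam is a common drop-in replacement for SGD:

    # Hypothetical alternative: Adam usually needs less manual step-size tuning than SGD
    optimizer = torch.optim.Adam(model.parameters(), lr = 1e-3)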

Fit the Model

    for i in range(500):
        # Do the forward pass
        yTrainPredicted = model(xTrain)

        # Compute the training set loss
        loss = lossFunction(yTrainPredicted, yTrain)
        print(i, loss.item())

        # Reset the gradients in the network to zero
        optimizer.zero_grad()

        # Backprop the errors from the loss on this iteration
        loss.backward()

        # Do a weight update step
        optimizer.step()
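
After fitting, you would typically measure performance on the held-out test set; a minimal sketch, assuming the sigmoid outputs are thresholded at 0.5 (the threshold is an assumption, not from the slides):

    with torch.no_grad():   # no gradients are needed for evaluation
        yTestPredicted = model(xTest)
        predictions = (yTestPredicted > 0.5).float()   # threshold the sigmoid outputs
        accuracy = (predictions == yTest).float().mean().item()
    print('Test accuracy:', accuracy)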

Things to look into to succeed at this assignment:
  torch.nn.Conv2d
  torch.nn.ReLU
  torch.nn.MaxPool2d
  torch.nn.BatchNorm2d
  torch.nn.Dropout
  Tuning the number of filters/nodes, the optimizer, and the training iterations
  torchvision.transforms – data augmentation (e.g. mirroring the training data)
A sketch combining these pieces follows the list.
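
A hedged sketch of how those modules might fit together; the layer sizes, the 24x24 input assumption, and the augmentation transform are illustrative choices, not the assignment's required architecture:

    import torch
    import torchvision.transforms as transforms

    class ConvBlinkNetworkSketch(torch.nn.Module):
        def __init__(self):
            super(ConvBlinkNetworkSketch, self).__init__()
            self.features = torch.nn.Sequential(
                torch.nn.Conv2d(1, 8, kernel_size = 3, padding = 1),   # 8 filters over a 1-channel image
                torch.nn.BatchNorm2d(8),
                torch.nn.ReLU(),
                torch.nn.MaxPool2d(kernel_size = 2, stride = 2))       # 24x24 -> 12x12
            self.classifier = torch.nn.Sequential(
                torch.nn.Dropout(p = 0.5),
                torch.nn.Linear(8*12*12, 1),
                torch.nn.Sigmoid())

        def forward(self, x):
            out = self.features(x)
            out = out.reshape(out.size(0), -1)   # flatten to one feature vector per example
            return self.classifier(out)

    # Data augmentation sketch: randomly mirror training images before tensor conversion
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p = 0.5),
        transforms.ToTensor()])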