1
First Steps With Deep Learning Course
2
Deep learning frameworks
Modern tools make it easy to implement neural networks.
Commonly used components: linear, convolutional, and recurrent layers, etc.
Many frameworks available: Torch (2002), Theano (2011), Caffe (2014), TensorFlow (2015), PyTorch (2016).
3
PyTorch
Fast tensor computation (like NumPy) with strong GPU support.
A deep learning research platform that provides maximum flexibility and speed.
Dynamic graphs and automatic differentiation (see the sketch below).
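To make "dynamic graphs" concrete, here is a small sketch (not from the slides): the graph is rebuilt on every forward pass, so ordinary Python control flow can change its shape from run to run.

import random
import torch
from torch.autograd import Variable

x = Variable(torch.ones(3), requires_grad=True)
y = x
for _ in range(random.randint(1, 3)):  # graph depth differs between runs
    y = y * 2
y.sum().backward()   # autograd traces exactly the ops that executed
print(x.grad)        # all 2.0, 4.0, or 8.0, depending on this run's graph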
4
Developers
5
Outline
Basic concepts
Write a model
Classification of the MNIST dataset
6
Basic Concepts
torch.Tensor - similar to numpy.array
autograd.Variable - wraps a Tensor and enables automatic differentiation
autograd.Function - operates on Variables; implements forward and backward
nn.Parameter - a Variable that is registered as a parameter of its Module
nn.Module - contains Parameters and defines functions on input Variables
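A minimal sketch (class and attribute names are illustrative, not from the slides) of how these pieces fit together: an nn.Module registers any nn.Parameter assigned to it, so model.parameters() can hand them to an optimizer.

import torch

class Scale(torch.nn.Module):
    def __init__(self, n):
        super(Scale, self).__init__()
        # assigning an nn.Parameter registers it with the Module
        self.weight = torch.nn.Parameter(torch.ones(n))

    def forward(self, x):
        return x * self.weight

model = Scale(5)
print(list(model.parameters()))   # includes the registered weight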
7
Basic Concepts
8
Basic Concepts
import torch

x = torch.rand(10, 5)
y = torch.rand(10, 5)
z = x * y
9
Basic Concepts
import torch

x = torch.ones(5, 5)
b = x.numpy()   # convert to a numpy array; shares memory with x on CPU
10
Basic Concepts
import torch
import numpy as np

a = np.ones(5)
x = torch.from_numpy(a)   # convert a numpy array to a Tensor; shares memory with a
11
Basic Concepts
import torch

x = torch.rand(100, 100)
x = x.cuda()   # move the tensor to the GPU
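x.cuda() raises an error on machines without a GPU; a common guard (a small sketch, not from the slides):

import torch

x = torch.rand(100, 100)
if torch.cuda.is_available():   # only move when a GPU is present
    x = x.cuda()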
12
Gradients
Variables wrap a Tensor and save a history of the operations applied to it.
Used for automatic differentiation.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()   # backpropagate through the recorded history
print(x.grad)
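Sanity check of the printed value: out = (1/4) Σᵢ 3(xᵢ + 2)², so ∂out/∂xᵢ = (3/2)(xᵢ + 2) = 4.5 at xᵢ = 1; x.grad is therefore a 2x2 tensor filled with 4.5.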
13
Network definition
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)
        self.relu = torch.nn.ReLU(inplace=True)

    def forward(self, x):
        h_relu = self.linear1(x)
        h_relu = self.relu(h_relu)
        y_pred = self.linear2(h_relu)
        return y_pred
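A quick usage sketch (batch size and dimensions assumed, matching the next slide): one forward pass through the freshly constructed network.

import torch

model = TwoLayerNet(1000, 100, 10)
x = torch.rand(64, 1000)     # a batch of 64 input vectors
y_pred = model(x)
print(y_pred.size())         # torch.Size([64, 10])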
14
Optimize the Network
N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in)    # random training inputs
y = torch.randn(N, D_out)   # random training targets

model = TwoLayerNet(D_in, H, D_out)
loss_fn = torch.nn.MSELoss(size_average=False)  # summed squared error
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(500):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    # Zero gradients, perform a backward pass, and update the weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
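Once trained, the model can be run without tracking gradients; a minimal inference sketch (test inputs assumed random; torch.no_grad() is available in newer PyTorch releases):

x_test = torch.randn(5, D_in)
with torch.no_grad():        # no history is recorded during inference
    y_test = model(x_test)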
15
Classifying Handwritten Digits
Input image
Conv 5x5, 1/10 (in/out channels)
MaxPool 2x2 + ReLU
Conv 5x5, 10/20
MaxPool 2x2 + ReLU
Dropout, p = 0.5
Linear, 320/50
Linear, 50/10
Softmax
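A sketch of the pictured network, modeled on the classic PyTorch MNIST example. The layer sizes are read off the diagram; a ReLU after the first Linear is assumed (the diagram omits it), and the final Softmax is implemented as log_softmax, its usual pairing with NLLLoss.

import torch
import torch.nn.functional as F

class MnistNet(torch.nn.Module):
    def __init__(self):
        super(MnistNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.drop = torch.nn.Dropout(p=0.5)
        self.fc1 = torch.nn.Linear(320, 50)   # 20 channels * 4 * 4 = 320
        self.fc2 = torch.nn.Linear(50, 10)

    def forward(self, x):                               # x: (N, 1, 28, 28)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))      # -> (N, 10, 12, 12)
        x = F.relu(F.max_pool2d(self.conv2(x), 2))      # -> (N, 20, 4, 4)
        x = self.drop(x.view(-1, 320))                  # flatten, then dropout
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)                  # per-class log-probabilities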
16
Thank you for your attention!