1
Artificial Neural Networks
Lab demonstration (2)
2
Python Modules
A module is a file containing Python definitions and statements intended for use in other Python programs. There are many Python modules that come with Python as part of the standard library. Once we import a module, we can use the things that are defined inside it.
3
To use elements of a module
Import the module Use the dot to refer to the element of the module
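For instance, a minimal sketch with the standard-library math module (the slide's own example may have used a different module):

```python
import math            # import the module

print(math.pi)         # use the dot to refer to an element of the module
print(math.sqrt(16))   # 4.0
```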
4
Example: The turtle module
Source:
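The code on this slide was shown as an image; a minimal turtle sketch along the same lines might look like this:

```python
import turtle            # the standard-library turtle graphics module

t = turtle.Turtle()      # create a turtle (a drawing pen)
for _ in range(4):       # draw a square
    t.forward(100)
    t.left(90)
turtle.done()            # keep the window open until it is closed
```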
5
What modules are available in Python?
A list of modules that are part of the standard library is available in Python documentation at: modindex.html
6
In your file “network.py”
7
The random module
Example applications in which we need to generate random numbers:
To play a game of chance where the computer needs to throw some dice, pick a number, or flip a coin
To shuffle a deck of playing cards randomly
To randomly allow a new enemy spaceship to appear and shoot at you
For encrypting your banking session on the Internet
8
The random module
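The examples on this slide were shown as an image; a few common calls from the random module (a sketch, not necessarily the slide's exact code):

```python
import random

print(random.randint(1, 6))       # throw a die: an integer from 1 to 6
print(random.random())            # a float in [0.0, 1.0)
print(random.choice(['H', 'T']))  # flip a coin

cards = list(range(1, 11))
random.shuffle(cards)             # shuffle a list in place
print(cards)
```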
9
The numpy module
Used to create multidimensional arrays.
In numpy, the dimensions of an array are called axes. The number of axes is called the rank of the array.
Example: What are the rank and axes of the following numpy array?
10
The numpy module
Used to create multidimensional arrays.
In numpy, the dimensions of an array are called axes. The number of axes is called the rank of the array.
Example: What are the rank and axes of the following numpy array?
Answer: Rank = 1 (one axis of length 3)
11
How to create a numpy array?
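The code on this slide was an image; the usual way is to pass a Python list to np.array (a sketch):

```python
import numpy as np

a = np.array([2, 4, 6])       # create an array from a Python list
print(a, a.dtype, a.shape)    # [2 4 6] int64 (3,)  (dtype may vary by platform)
```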
12
Create a multidimensional numpy array
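Again the slide's code was an image; nested lists give an array with more than one axis, for example:

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])   # nested lists -> 2 axes
print(m.shape)              # (2, 3)
print(m.ndim)               # 2
```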
13
Initializing the content of an array
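A sketch of the usual initializers (the slide's exact examples were an image):

```python
import numpy as np

print(np.zeros((2, 3)))     # 2x3 array of zeros
print(np.ones((3,)))        # 1-D array of three ones
print(np.full((2, 2), 7))   # 2x2 array filled with 7
```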
14
Lab Exercise
Create a 1-dimensional array (1 axis) containing five ones
Create a 2-dimensional array (2 axes) containing 4 x 5 zeros
Create a 3-dimensional array (3 axes) containing 4 x 3 x 2 ones
15
Lab Exercise: solution
Create a 1-dimensional array (1 axis) containing five ones
Create a 2-dimensional array (2 axes) containing 4 x 5 zeros
Create a 3-dimensional array (3 axes) containing 4 x 3 x 2 ones
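The solution code was shown as an image on the slide; one way to satisfy the exercise:

```python
import numpy as np

a = np.ones(5)           # 1-D array (1 axis) of five ones
b = np.zeros((4, 5))     # 2-D array (2 axes) of 4 x 5 zeros
c = np.ones((4, 3, 2))   # 3-D array (3 axes) of 4 x 3 x 2 ones
print(a.shape, b.shape, c.shape)
```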
16
Initializing a random array from normal distribution
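A sketch using numpy's standard-normal generator (the slide's code was an image):

```python
import numpy as np

w = np.random.randn(3, 2)   # 3x2 array of samples from N(0, 1)
print(w)
```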
17
Initializing multiple arrays from a normal distribution
4 arrays: each one is 3x2
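One way to build them (a sketch; the slide's code was an image):

```python
import numpy as np

arrays = [np.random.randn(3, 2) for _ in range(4)]   # 4 arrays, each 3x2
for a in arrays:
    print(a.shape)   # (3, 2)
```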
18
Exercise: Generate 3 arrays of random numbers
The first array is 3 x 1
The second array is 5 x 1
The third array is 2 x 1
19
Solution
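The solution shown on the slide was an image; a possible solution:

```python
import numpy as np

sizes = [3, 5, 2]
arrays = [np.random.randn(n, 1) for n in sizes]   # 3x1, 5x1, and 2x1 arrays
for a in arrays:
    print(a.shape)
```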
20
Exercise: Given a list of layers for a neural network, generate random bias vectors for each layer
Example: for this figure (from the mid-term exam), the bias vectors can be $b^1$ and $b^2$, one column vector for each layer after the input layer.
21
Solution
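The slide's solution was an image; a sketch assuming, hypothetically, layer sizes [3, 2, 1] (the exam figure itself is not reproduced here):

```python
import numpy as np

sizes = [3, 2, 1]   # hypothetical layer sizes; the exam figure may differ

# One random bias column vector per layer, skipping the input layer:
biases = [np.random.randn(n, 1) for n in sizes[1:]]
for b in biases:
    print(b.shape)   # (2, 1) then (1, 1)
```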
22
Specifying a neural network
Input: a vector with the number of neurons in each layer. The first number in the input vector is the number of input variables. Example: [3, 2, 1] describes a network with 3 inputs, one hidden layer of 2 neurons, and 1 output neuron.
23
Initializing biases
24
Initializing weights
Sizes = [3, 2, 1]
The first weight array is 2x3:
$$W^1 = \begin{pmatrix} w^1_{11} & w^1_{12} & w^1_{13} \\ w^1_{21} & w^1_{22} & w^1_{23} \end{pmatrix}$$
The second weight array is 1x2:
$$W^2 = \begin{pmatrix} w^2_{11} & w^2_{12} \end{pmatrix}$$
25
Initializing weight arrays
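A sketch of the list comprehension usually used for this (the slide's code was an image):

```python
import numpy as np

sizes = [3, 2, 1]
# One weight matrix per pair of adjacent layers; the matrix feeding layer l
# has shape (neurons in layer l, neurons in layer l-1): here 2x3 then 1x2.
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
print([w.shape for w in weights])   # [(2, 3), (1, 2)]
```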
26
Exercise: Create a Neural Network Class
Create an __init__ function for the network class
Initialize self.biases
Initialize self.weights
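A sketch of a possible solution, along the lines of the network.py code this lab is based on:

```python
import numpy as np

class Network:
    def __init__(self, sizes):
        # sizes, e.g. [3, 2, 1]: number of neurons in each layer
        self.num_layers = len(sizes)
        self.sizes = sizes
        # one bias column vector per non-input layer
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # one weight matrix per pair of adjacent layers
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

net = Network([3, 2, 1])
print([b.shape for b in net.biases])    # [(2, 1), (1, 1)]
print([w.shape for w in net.weights])   # [(2, 3), (1, 2)]
```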
27
Getting code and data git clone
28
For Python 3.4 we need to make some changes to the mnist_loader
Open the file mnist_loader.py. Change cPickle to pickle on lines 13 and 43. On line 43, change the call to: training_data, validation_data, test_data = pickle.load(f, encoding='latin1')
29
For Python 3.4 we need to make some changes to the mnist_loader
Open the file mnist_loader.py. Wrap the zip() calls with list() calls.
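The reason for this change, illustrated in isolation (not the file's actual lines):

```python
# In Python 3, zip() returns a lazy iterator rather than a list:
pairs = zip([1, 2, 3], ['a', 'b', 'c'])
print(pairs)          # <zip object ...>, not a list
print(list(pairs))    # [(1, 'a'), (2, 'b'), (3, 'c')]
# Hence each zip(...) in mnist_loader.py is wrapped as list(zip(...)).
```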
30
In file network.py, add this line:
31
In file network.py, find all occurrences of xrange and replace them with range.
32
The MNIST data set
A large number of scanned images of handwritten digits
Each image is 28 x 28 = 784 pixels
We need to create a neural network that accepts 784 input values [X1, …, X784]
33
Load data and create network
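The commands on this slide were shown as an image; assuming the mnist_loader and network modules from the cloned repository, they likely resemble the standard example (the exact layer sizes and parameters on the slide may differ):

```python
import mnist_loader
import network

# Load the MNIST data (training, validation, and test sets)
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# 784 inputs (one per pixel), one hidden layer of 30 neurons, 10 outputs
net = network.Network([784, 30, 10])

# Train with stochastic gradient descent and report accuracy on the test data
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
```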
34
Compatibility with Python 3.4
35
Output …..
36
What is this output? Recall the algorithm of Least Mean Square:
Calculates the error based on one input pattern x(n)
Updates the weights based on one input pattern x(n)
37
Backpropagation Algorithm
Start with randomly chosen weights $[w^l_{jk}]$
While the error is unsatisfactory:
    for each input pattern $x$:
        Feedforward: for each $l = 1, 2, \ldots, L$ compute $z^l_k$ and $a^l_k$
        Compute the error at the output layer: $\delta^L_k = (d^L_k - a^L_k)\,\sigma'(z^L_k)$
        Backpropagate the error: for $l = L-1, L-2, \ldots, 2$ compute $\delta^l_j = \sum_k \delta^{l+1}_k\, w^{l+1}_{kj}\, \sigma'(z^l_j)$
        Calculate the gradients: $\frac{\partial E}{\partial w^l_{kj}} = \delta^l_j\, a^{l-1}_k$ and $\frac{\partial E}{\partial b^l_j} = \delta^l_j$
    end for
end while
Updates weights based on one input pattern $x(n)$
© Mai Elshehaly
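A rough numpy sketch of the per-pattern gradient computation in this pseudocode (not the lab's network.py, and following the slide's (d - a) sign convention):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop(x, d, weights, biases):
    """Gradients of the error for a single input pattern x with target d."""
    # feedforward: store every z and activation, layer by layer
    a, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = np.dot(w, a) + b
        zs.append(z)
        a = sigmoid(z)
        activations.append(a)
    # error at the output layer: delta^L = (d - a^L) * sigma'(z^L)
    delta = (d - activations[-1]) * sigmoid_prime(zs[-1])
    grad_w = [None] * len(weights)
    grad_b = [None] * len(biases)
    grad_w[-1] = np.dot(delta, activations[-2].T)
    grad_b[-1] = delta
    # backpropagate the error through the remaining layers
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        grad_w[-l] = np.dot(delta, activations[-l - 1].T)
        grad_b[-l] = delta
    return grad_w, grad_b

# tiny usage example with a 3-2-1 network and one random input pattern
sizes = [3, 2, 1]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [np.random.randn(y, 1) for y in sizes[1:]]
gw, gb = backprop(np.random.randn(3, 1), np.array([[1.0]]), weights, biases)
print([g.shape for g in gw], [g.shape for g in gb])
```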
38
Three strategies to update the weights:
Update after the network sees every single input pattern
Update after the network sees a mini_batch of input patterns
Update after the network sees the entire batch of input patterns
The difference between the three strategies will be discussed in the lecture.
39
The mini_batch strategy: Ex.: mini_batch_size = 5
40
The mini_batch strategy:
1. Input one mini_batch to the network
2. Adjust the weights
3. Move to the next mini_batch
4. Repeat until no more mini_batches in the training data set
This is one epoch.
41
To increase the accuracy ("repetition teaches the clever", as the Arabic proverb says):
Repeat the previous process for a number of epochs
Don't input the mini batches in the same order (random.shuffle)
With each new epoch, you can see that the accuracy increases:
accuracy = correctly classified samples / total number of test samples
42
To see the effect of parameters on accuracy
Try passing different values for epochs, mini_batch_size, and eta
43
How to implement this shuffling and batching strategy?
Example: Say you have a deck of 30 cards with labels 1…30
You want to take 10 cards in each draw
You want to keep drawing until no more cards are left
You want to shuffle the cards and then repeat 8 times
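A sketch of this card experiment, covering what the next three slides build up (their code was shown as images):

```python
import random

cards = list(range(1, 31))   # a deck of 30 cards labelled 1..30
epochs = 8
draw_size = 10

for epoch in range(epochs):
    random.shuffle(cards)    # shuffle the deck at the start of each epoch
    # keep drawing 10 cards at a time until no more cards are left
    draws = [cards[k:k + draw_size] for k in range(0, len(cards), draw_size)]
    print("epoch", epoch, draws)
```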
44
Shuffling cards
45
Shuffle for 8 epochs
46
Explore by 10 cards in each epoch:
47
Exercise: Write a function sum_mini_batches(training_data, epochs, mini_batch_size) that does the following, epochs times:
Shuffles the cards in training_data
Creates a number of mini batches, each of which is of size mini_batch_size
Prints the sum of the numbers in each mini batch
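A possible solution sketch (the lab's own solution may differ):

```python
import random

def sum_mini_batches(training_data, epochs, mini_batch_size):
    for epoch in range(epochs):
        random.shuffle(training_data)   # shuffle the cards
        mini_batches = [training_data[k:k + mini_batch_size]
                        for k in range(0, len(training_data), mini_batch_size)]
        for mini_batch in mini_batches:
            print(sum(mini_batch))      # the sum of the numbers in the batch

sum_mini_batches(list(range(1, 31)), epochs=2, mini_batch_size=10)
```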
48
Lab Demo: Second Round
49
Review items
numpy's dot() function
The weights and biases of an ANN
The zip() function
Negative indices in Python
Matrix shape
The backpropagation pseudocode
50
dot() function
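A quick sketch of np.dot with the shapes we use for weights and inputs (the slide's examples were an image):

```python
import numpy as np

w = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # 2x3 weight matrix
x = np.array([[1.0], [0.0], [2.0]])   # 3x1 input column vector

print(np.dot(w, x))   # matrix-vector product -> a 2x1 array
print(np.dot(3, 4))   # with plain numbers, dot() is just multiplication: 12
```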
51
zip() function Example: initializing weights and biases
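A sketch of how zip() pairs the layer sizes when initializing weights and biases:

```python
import numpy as np

sizes = [3, 2, 1]
print(list(zip(sizes[:-1], sizes[1:])))   # [(3, 2), (2, 1)]

weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [np.random.randn(y, 1) for y in sizes[1:]]
print([w.shape for w in weights], [b.shape for b in biases])
```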
52
zip() function Example: to iterate over layers of weights and biases
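And a sketch of using zip() to walk through the matching bias vector and weight matrix of each layer:

```python
import numpy as np

sizes = [3, 2, 1]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [np.random.randn(y, 1) for y in sizes[1:]]

for b, w in zip(biases, weights):
    print(w.shape, b.shape)   # (2, 3) (2, 1) then (1, 2) (1, 1)
```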
53
Exercise
Reuse the mini_batches code that we wrote earlier to generate inputs.
Iterate over the layers of weights and biases to calculate the z values of the different layers.
Assume that the actual output equals the net input (a = z) for simplicity.
Print z at each iteration.
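A possible solution sketch, assuming hypothetical layer sizes [3, 2, 1] and random 3x1 input vectors:

```python
import random
import numpy as np

sizes = [3, 2, 1]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [np.random.randn(y, 1) for y in sizes[1:]]

# reuse the mini-batch idea to generate inputs
training_data = [np.random.randn(sizes[0], 1) for _ in range(6)]
random.shuffle(training_data)
mini_batches = [training_data[k:k + 2] for k in range(0, len(training_data), 2)]

for mini_batch in mini_batches:
    for x in mini_batch:
        a = x
        for b, w in zip(biases, weights):   # iterate over the layers
            z = np.dot(w, a) + b            # net input of this layer
            print(z)
            a = z                           # assume a = z for simplicity
```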
54
Negative indices in Python: Try the following
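For example (the slide's snippet was an image):

```python
a = [10, 20, 30, 40]
print(a[-1])     # 40 -> the last element
print(a[-2])     # 30 -> the second-to-last element
print(a[1:-1])   # [20, 30] -> slice that drops the first and last elements
```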
55
Solution