Artificial Neural Networks
The Brain
How do brains work? How do human brains differ from those of other animals? Can we base models of artificial intelligence on the structure and inner workings of the brain?
The Brain
The human brain consists of:
Approximately 10 billion neurons
…and 60 trillion connections (synapses)
The brain is a highly complex, nonlinear, parallel information-processing system
By firing many neurons simultaneously, the brain can perform certain tasks faster than the fastest computers in existence today
The Brain
An individual neuron has a very simple structure:
The cell body is called the soma
The small connective fibers are called dendrites
The single long fiber is called the axon
An army of such elements constitutes tremendous processing power
Artificial Neural Networks
An artificial neural network consists of a number of very simple processors called neurons
Neurons are connected by weighted links
The links pass signals from one neuron to another; each neuron fires only when its combined input reaches a predefined threshold
Artificial Neural Networks
An individual neuron (McCulloch & Pitts, 1943):
Computes the weighted sum of the input signals
Compares the result with a threshold value, θ
If the net input is less than the threshold, the neuron output is –1 (or 0)
Otherwise, the neuron becomes activated and its output is +1
Artificial Neural Networks
$X = x_1 w_1 + x_2 w_2 + \cdots + x_n w_n$
$Y = \begin{cases} +1, & X \ge \theta \\ -1, & X < \theta \end{cases}$
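As a minimal sketch (not from the slides), this neuron model can be expressed in a few lines of Python; the function name and the example weights and threshold are illustrative:

```python
def mcculloch_pitts_neuron(inputs, weights, theta):
    """Weighted sum of inputs compared against a threshold theta.
    Returns +1 if the neuron activates, -1 otherwise."""
    x = sum(x_i * w_i for x_i, w_i in zip(inputs, weights))
    return 1 if x >= theta else -1

# Example: two inputs with equal weights and an assumed threshold of 0.5
print(mcculloch_pitts_neuron([1, 0], [0.4, 0.4], theta=0.5))  # -1 (not activated)
print(mcculloch_pitts_neuron([1, 1], [0.4, 0.4], theta=0.5))  # +1 (activated)
```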
Activation Functions
Individual neurons adhere to an activation function, which determines whether they propagate their signal (i.e. activate) or not. For example, the sign function:
$Y^{sign} = \begin{cases} +1, & X \ge 0 \\ -1, & X < 0 \end{cases}$
Activation Functions
[Plots of the common activation functions: step, sign, and sigmoid]
The step and sign activation functions are often called hard limit functions
We use such functions in decision-making neural networks that support classification and other pattern recognition tasks
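For reference, a small Python sketch of the activation functions named above (the function names are illustrative; the step function here returns 0/1 and the sign function ±1):

```python
import math

def step(x):
    """Hard limiter: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def sign(x):
    """Hard limiter: +1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Smooth, S-shaped function mapping x into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```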
Perceptrons
Can an individual neuron learn?
In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a single-node neural network
Rosenblatt’s perceptron model consists of a single neuron with adjustable synaptic weights, followed by a hard limiter
Perceptrons
$X = x_1 w_1 + x_2 w_2$
$Y = Y^{step} = \begin{cases} 1, & X \ge \theta \\ 0, & X < \theta \end{cases}$
Perceptrons
A perceptron:
Classifies inputs x_1, x_2, ..., x_n into one of two distinct classes, A_1 and A_2
Forms a linearly separable decision boundary defined by:
$x_1 w_1 + x_2 w_2 + \cdots + x_n w_n - \theta = 0$
Perceptrons
A perceptron with three inputs x_1, x_2, and x_3 classifies its inputs into two distinct sets, A_1 and A_2
Perceptrons
How does a perceptron learn?
A perceptron starts with initial (often random) weights, typically in the range [-0.5, 0.5]
Apply an established training dataset
Calculate the error as expected output minus actual output: e = Y_expected – Y_actual
Adjust the weights to reduce the error
Perceptrons
How do we adjust a perceptron’s weights to produce Y_expected?
If e is positive, we need to increase Y_actual (and vice versa)
Use this formula: $w_i(p+1) = w_i(p) + \Delta w_i$, where $\Delta w_i = \alpha \times x_i \times e$, and
α is the learning rate (between 0 and 1)
e is the calculated error
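A minimal sketch of this update rule in Python, assuming the step activation with a fixed threshold θ (all names are illustrative):

```python
def train_step(inputs, weights, theta, y_expected, alpha):
    """Apply one perceptron training step; return updated weights and the error."""
    x = sum(x_i * w_i for x_i, w_i in zip(inputs, weights))
    y_actual = 1 if x >= theta else 0           # step activation
    e = y_expected - y_actual                   # error
    weights = [w_i + alpha * x_i * e for x_i, w_i in zip(inputs, weights)]
    return weights, e
```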
Perceptron Example – AND
Train a perceptron to recognize logical AND
Use threshold Θ = 0.2 and learning rate α = 0.1
Perceptron Example – AND
Repeat until convergence, i.e. until the final weights do not change and there is no error
Use threshold Θ = 0.2 and learning rate α = 0.1
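Putting the pieces together, here is a sketch of the complete training loop for logical AND in Python, using the slides' Θ = 0.2 and α = 0.1 (the random initialization and code structure are assumptions):

```python
import random

# Truth table for logical AND: ((x1, x2), expected output)
training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

theta, alpha = 0.2, 0.1                       # threshold and learning rate from the slides
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]

converged = False
while not converged:
    converged = True                          # assume convergence until an error appears
    for inputs, y_expected in training_data:
        x = sum(x_i * w_i for x_i, w_i in zip(inputs, weights))
        y_actual = 1 if x >= theta else 0     # step activation
        e = y_expected - y_actual
        if e != 0:
            converged = False
            weights = [w_i + alpha * x_i * e for x_i, w_i in zip(inputs, weights)]

print("Learned weights:", weights)
```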
Perceptron Example – AND
Two-dimensional plot of the logical AND operation: a single perceptron can be trained to recognize any linearly separable function
Can we train a perceptron to recognize logical OR?
How about logical exclusive-OR (i.e. XOR)?
Perceptron – OR and XOR
Two-dimensional plots of logical OR and XOR: OR is linearly separable, but XOR is not; no single straight line can separate its two classes, so a single perceptron cannot learn XOR
Perceptron Coding Exercise
Write code to:
Calculate the error at each step
Modify the weights if necessary, i.e. if the error is non-zero
Loop until all error values are zero for a full epoch
Modify your code to learn to recognize the logical OR operation
Then try to recognize the XOR operation...
Multilayer Neural Networks
Multilayer neural networks consist of:
An input layer of source neurons
One or more hidden layers of computational neurons
An output layer of more computational neurons
Input signals are propagated in a layer-by-layer feedforward manner
Multilayer Neural Networks
[Diagram: a multilayer feedforward network, with input signals entering on one side and output signals leaving on the other]
Multilayer Neural Networks
$X_{input} = x_1$ (input-layer neurons simply pass the signal through)
$X_H = x_1 w_{11} + x_2 w_{21} + \cdots + x_i w_{i1} + \cdots + x_n w_{n1}$ (net input to a hidden neuron)
$X_{output} = y_{H1} w_{11} + y_{H2} w_{21} + \cdots + y_{Hj} w_{j1} + \cdots + y_{Hm} w_{m1}$ (net input to an output neuron)
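A sketch of this layer-by-layer forward pass in Python, assuming sigmoid neurons with thresholds handled as in the back-propagation example later in the deck (names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights, thresholds):
    """Compute the outputs of one layer of sigmoid neurons.
    weights[i][j] connects input i to neuron j; each neuron subtracts its threshold."""
    outputs = []
    for j, theta in enumerate(thresholds):
        net = sum(inputs[i] * weights[i][j] for i in range(len(inputs))) - theta
        outputs.append(sigmoid(net))
    return outputs

def forward_pass(x, hidden_w, hidden_theta, output_w, output_theta):
    """Propagate input signals layer by layer through the network."""
    y_hidden = layer_output(x, hidden_w, hidden_theta)
    return layer_output(y_hidden, output_w, output_theta)
```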
Multilayer Neural Networks
[Diagram: a three-layer network with inputs x_1 and x_2, hidden neurons 3 and 4, output neuron 5, and weights such as w_14]
Multilayer Neural Networks
Commercial-quality neural networks often incorporate 4 or more layers; each layer consists of about 10 to 1000 individual neurons
Experimental and research-based neural networks often use 5 or 6 (or more) layers; overall, millions of individual neurons may be used
Back-Propagation NNs
A back-propagation neural network is a multilayer neural network that propagates error backwards through the network as it learns
Weights are modified based on the calculated error
Training is complete when the error falls below a specified threshold, e.g. less than 0.001
Use the sigmoid activation function, and apply each threshold θ by connecting a fixed input of –1 weighted by θ
Initially: w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0, w35 = –1.2, w45 = 1.1, θ3 = 0.8, θ4 = –0.1 and θ5 = 0.3
Step 2: Activation
Activate the back-propagation neural network by applying inputs x_1(p), x_2(p), …, x_n(p) and desired outputs y_{d,1}(p), y_{d,2}(p), …, y_{d,n}(p).
(a) Calculate the actual outputs of the neurons in the hidden layer:
$y_j(p) = \text{sigmoid}\Big[\sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j\Big]$
where n is the number of inputs of neuron j in the hidden layer, and sigmoid is the sigmoid activation function.
Step 2: Activation (continued)
(b) Calculate the actual outputs of the neurons in the output layer:
$y_k(p) = \text{sigmoid}\Big[\sum_{j=1}^{m} y_j(p)\, w_{jk}(p) - \theta_k\Big]$
where m is the number of inputs of neuron k in the output layer.
We consider a training set where inputs x_1 and x_2 are equal to 1 and the desired output y_{d,5} is 0. The actual outputs of neurons 3 and 4 in the hidden layer are calculated as
$y_3 = \text{sigmoid}(x_1 w_{13} + x_2 w_{23} - \theta_3) = 1/\big(1 + e^{-(1 \cdot 0.5 + 1 \cdot 0.4 - 1 \cdot 0.8)}\big) = 0.5250$
$y_4 = \text{sigmoid}(x_1 w_{14} + x_2 w_{24} - \theta_4) = 1/\big(1 + e^{-(1 \cdot 0.9 + 1 \cdot 1.0 + 1 \cdot 0.1)}\big) = 0.8808$
Now the actual output of neuron 5 in the output layer is determined as
$y_5 = \text{sigmoid}(y_3 w_{35} + y_4 w_{45} - \theta_5) = 1/\big(1 + e^{-(-0.5250 \cdot 1.2 + 0.8808 \cdot 1.1 - 1 \cdot 0.3)}\big) = 0.5097$
Thus, the following error is obtained:
$e = y_{d,5} - y_5 = 0 - 0.5097 = -0.5097$
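These values can be verified with a few lines of Python using the initial weights given above:

```python
import math

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

y3 = sigmoid(1 * 0.5 + 1 * 0.4 - 0.8)       # 0.5250
y4 = sigmoid(1 * 0.9 + 1 * 1.0 + 0.1)       # 0.8808 (note theta_4 = -0.1)
y5 = sigmoid(y3 * -1.2 + y4 * 1.1 - 0.3)    # 0.5097
e = 0 - y5                                   # -0.5097
print(round(y3, 4), round(y4, 4), round(y5, 4), round(e, 4))
```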
Step 3: Weight training
Update the weights in the back-propagation network, propagating backward the errors associated with the output neurons.
(a) Calculate the error gradient for the neurons in the output layer:
$\delta_k(p) = y_k(p) \cdot [1 - y_k(p)] \cdot e_k(p)$
where $e_k(p) = y_{d,k}(p) - y_k(p)$
Calculate the weight corrections:
$\Delta w_{jk}(p) = \alpha \cdot y_j(p) \cdot \delta_k(p)$
Update the weights at the output neurons:
$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$
Step 3: Weight training (continued)
(b) Calculate the error gradient for the neurons in the hidden layer:
$\delta_j(p) = y_j(p) \cdot [1 - y_j(p)] \cdot \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p)$
Calculate the weight corrections:
$\Delta w_{ij}(p) = \alpha \cdot x_i(p) \cdot \delta_j(p)$
Update the weights at the hidden neurons:
$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$
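One weight-training pass for the example 2-2-1 network might look like this Python sketch (illustrative; threshold updates are omitted here for brevity and appear in the full example at the end of the deck):

```python
def backprop_step(x, y_hidden, y_out, y_desired, w_hidden, w_out, alpha):
    """One Step-3 pass for a 2-2-1 network: output delta, hidden deltas, weight updates.
    w_hidden[i][j] connects input i to hidden neuron j; w_out[j] connects hidden j to the output."""
    e = y_desired - y_out
    delta_out = y_out * (1 - y_out) * e                        # output-layer gradient
    for j in range(2):
        delta_j = y_hidden[j] * (1 - y_hidden[j]) * delta_out * w_out[j]
        for i in range(2):
            w_hidden[i][j] += alpha * x[i] * delta_j           # hidden-layer update
        w_out[j] += alpha * y_hidden[j] * delta_out            # output-layer update
    return e
```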
The next step is weight training. To update the weights and threshold levels in our network, we propagate the error, e, from the output layer backward to the input layer.
First, we calculate the error gradient for neuron 5 in the output layer:
$\delta_5 = y_5 (1 - y_5)\, e = 0.5097 \cdot (1 - 0.5097) \cdot (-0.5097) = -0.1274$
Then we determine the weight corrections, assuming that the learning rate parameter, α, is equal to 0.1:
$\Delta w_{35} = \alpha \cdot y_3 \cdot \delta_5 = 0.1 \cdot 0.5250 \cdot (-0.1274) = -0.0067$
$\Delta w_{45} = \alpha \cdot y_4 \cdot \delta_5 = 0.1 \cdot 0.8808 \cdot (-0.1274) = -0.0112$
$\Delta \theta_5 = \alpha \cdot (-1) \cdot \delta_5 = 0.1 \cdot (-1) \cdot (-0.1274) = 0.0127$
Next we calculate the error gradients for neurons 3 and 4 in the hidden layer:
$\delta_3 = y_3 (1 - y_3) \cdot \delta_5 \cdot w_{35} = 0.5250 \cdot (1 - 0.5250) \cdot (-0.1274) \cdot (-1.2) = 0.0381$
$\delta_4 = y_4 (1 - y_4) \cdot \delta_5 \cdot w_{45} = 0.8808 \cdot (1 - 0.8808) \cdot (-0.1274) \cdot 1.1 = -0.0147$
We then determine the weight corrections:
$\Delta w_{13} = \alpha \cdot x_1 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$
$\Delta w_{23} = \alpha \cdot x_2 \cdot \delta_3 = 0.1 \cdot 1 \cdot 0.0381 = 0.0038$
$\Delta \theta_3 = \alpha \cdot (-1) \cdot \delta_3 = 0.1 \cdot (-1) \cdot 0.0381 = -0.0038$
$\Delta w_{14} = \alpha \cdot x_1 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$
$\Delta w_{24} = \alpha \cdot x_2 \cdot \delta_4 = 0.1 \cdot 1 \cdot (-0.0147) = -0.0015$
$\Delta \theta_4 = \alpha \cdot (-1) \cdot \delta_4 = 0.1 \cdot (-1) \cdot (-0.0147) = 0.0015$
At last, we update all weights and thresholds:
$w_{13} = 0.5 + 0.0038 = 0.5038$
$w_{14} = 0.9 - 0.0015 = 0.8985$
$w_{23} = 0.4 + 0.0038 = 0.4038$
$w_{24} = 1.0 - 0.0015 = 0.9985$
$w_{35} = -1.2 - 0.0067 = -1.2067$
$w_{45} = 1.1 - 0.0112 = 1.0888$
$\theta_3 = 0.8 - 0.0038 = 0.7962$
$\theta_4 = -0.1 + 0.0015 = -0.0985$
$\theta_5 = 0.3 + 0.0127 = 0.3127$
The training process is repeated until the sum of squared errors is less than 0.001.
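For completeness, here is a sketch of the whole training procedure for this network on the XOR problem in Python; the training set, α = 0.1, and the stopping criterion follow the slides, while the initialization and code structure are assumptions:

```python
import math
import random

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

# XOR training set: ((x1, x2), desired output)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

alpha = 0.1
# w_h[i][j]: weight from input i to hidden neuron j; w_o[j]: hidden j to the output
w_h = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(2)]
th_h = [random.uniform(-0.5, 0.5) for _ in range(2)]
th_o = random.uniform(-0.5, 0.5)

for epoch in range(500_000):                 # cap the epochs; stop once SSE is small
    sse = 0.0
    for (x1, x2), y_d in data:
        # Step 2: activation (forward pass)
        y_h = [sigmoid(x1 * w_h[0][j] + x2 * w_h[1][j] - th_h[j]) for j in range(2)]
        y_o = sigmoid(y_h[0] * w_o[0] + y_h[1] * w_o[1] - th_o)
        e = y_d - y_o
        sse += e ** 2
        # Step 3: weight training (backward pass)
        d_o = y_o * (1 - y_o) * e
        for j in range(2):
            d_h = y_h[j] * (1 - y_h[j]) * d_o * w_o[j]
            w_h[0][j] += alpha * x1 * d_h
            w_h[1][j] += alpha * x2 * d_h
            th_h[j] += alpha * -1 * d_h
            w_o[j] += alpha * y_h[j] * d_o
        th_o += alpha * -1 * d_o
    if sse < 0.001:
        break

print("epochs:", epoch + 1, "final SSE:", round(sse, 6))
```

Note that with plain gradient descent and small random initial weights, training on XOR can take many thousands of epochs and occasionally stalls in a local minimum; rerunning with different random weights usually helps.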