1
Artificial neural networks
Šarūnas Stanskis
2
Some history 1943: Warren McCulloch and Walter Pitts introduced models of neurological networks, recreated threshold switches based on neurons, and showed that even simple networks of this kind can compute nearly any logic or arithmetic function. 1949: Donald O. Hebb formulated the classical Hebbian rule, which in its generalized form is the basis of nearly all neural learning procedures. 1951: For his dissertation, Marvin Minsky developed the neurocomputer Snark, which was already capable of adjusting its weights automatically …
3
Neural networks Biological neural network
Artificial neural network
4
Biological and artificial neurons
Biological neuron Artificial neuron
5
Neuron model
Inputs with corresponding weights: (x1, w1), (x2, w2), …, (xn, wn)
Summation and transfer function: y = f(∑ xi·wi), summed for i = 1 … n
Output: y
Decision boundary
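The weighted-sum-plus-transfer-function model above can be sketched in a few lines of Python. The step transfer function and the example inputs and weights are illustrative assumptions, not values from the slide:

```python
def neuron(inputs, weights, f):
    # Weighted sum of inputs, passed through the transfer function f.
    s = sum(x * w for x, w in zip(inputs, weights))
    return f(s)

# Illustrative step (threshold) transfer function: fires when the sum is >= 0.
step = lambda s: 1 if s >= 0 else 0

y = neuron([1, 0], [0.5, -0.5], step)   # sum = 0.5, so the neuron fires
```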
6
Perceptron Perceptron model
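A minimal sketch of the classic perceptron learning rule, assuming a threshold activation and the standard update w ← w + η(t − y)x. The learning rate, epoch count, and the logical-AND example are illustrative choices, not taken from the slides:

```python
def predict(x, w, b):
    # Threshold activation: 1 if the weighted sum exceeds 0, else 0.
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    # samples: list of (input_vector, target) pairs with targets 0 or 1.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = t - predict(x, w, b)            # perceptron error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, which is linearly separable, so the rule converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```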
7
Activation Function examples
Linear, threshold, logistic sigmoid, tangent (hyperbolic) sigmoid
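The four functions named above might be written as follows. This is a sketch in common textbook form; exact slopes and offsets vary between formulations. The non-linear ones matter because a network of purely linear units collapses to a single linear map:

```python
import math

def linear(s):
    return s                            # identity: passes the sum through

def threshold(s):
    return 1.0 if s >= 0 else 0.0       # hard step around zero

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))   # smooth squash into (0, 1)

def tanh_sigmoid(s):
    return math.tanh(s)                 # smooth squash into (-1, 1)
```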
8
Feedforward ANN Feedforward ANN includes:
Single layer perceptron
Multilayer perceptron
Radial basis function network
Multilayer feedforward ANN example
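A forward pass through a multilayer feedforward network can be sketched as below. The 2-2-1 shape and the hand-picked weights are hypothetical, and the logistic sigmoid is an assumed choice of activation:

```python
import math

def forward(x, layers):
    # Propagate input x through a list of layers; each layer is a
    # (weight_matrix, bias_vector) pair with logistic sigmoid units.
    a = x
    for W, b in layers:
        a = [1.0 / (1.0 + math.exp(-(sum(w * ai for w, ai in zip(row, a)) + bi)))
             for row, bi in zip(W, b)]
    return a

# Hypothetical 2-2-1 network (weights chosen by hand, not learned).
layers = [([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),   # hidden layer: 2 neurons
          ([[1.0, 1.0]], [-1.0])]                     # output layer: 1 neuron
y = forward([0.5, 0.2], layers)
```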
9
Feedback ANN Recurrent ANN example
10
ANN learning The purpose of ANN learning is to minimize the error between the network's response and the desired result. ANN learning algorithms can be divided into three categories: Supervised Learning with a Teacher; Supervised Learning with Reinforcement; Unsupervised Learning.
11
Supervised Learning with a Teacher
The network is provided with a set of inputs and the appropriate outputs for those inputs.
12
Supervised Learning with Reinforcement
The network is provided with an evaluation of its output given the input, and alters its weights to try to increase the reinforcement it receives.
13
Unsupervised Learning
The network receives no external feedback but has an internal criterion that it tries to fulfil given the inputs that it faces.
14
Backpropagation Backward propagation of errors
15
Backpropagation First, apply the inputs to the network and work out the output. Next, work out the errors for the neurons in the output layer. Change the output-layer weights. Calculate the errors for the hidden-layer neurons using the following layer's neuron errors. Change the hidden-layer weights.
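The steps above can be sketched for a tiny 2-1 output over a 2-neuron hidden layer with logistic sigmoid units. The network shape, learning rate, and starting weights are illustrative assumptions; biases are omitted for brevity:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def backprop_step(x, target, W1, W2, lr=0.5):
    # Step 1: apply the input and work out the outputs layer by layer.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    # Step 2: error (delta) for the output neuron: f'(net) * (target - output).
    d_out = y * (1.0 - y) * (target - y)
    # Step 4, computed before the update so the old weights are used:
    # hidden-layer deltas from the following layer's delta.
    d_hid = [hi * (1.0 - hi) * w * d_out for hi, w in zip(h, W2)]
    # Step 3: change the output-layer weights.
    W2 = [w + lr * d_out * hi for w, hi in zip(W2, h)]
    # Step 5: change the hidden-layer weights.
    W1 = [[w + lr * d * xi for w, xi in zip(row, x)]
          for row, d in zip(W1, d_hid)]
    return W1, W2, y

# Hypothetical example: repeated steps push the output toward target 1.0.
W1 = [[0.5, -0.5], [0.3, 0.8]]
W2 = [0.2, -0.4]
for _ in range(100):
    W1, W2, y = backprop_step([1.0, 0.0], 1.0, W1, W2)
```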
16
Problems related to ANN learning
Finding a local minimum instead of the global minimum. Overfitting of the neural network to the training data.
17
ANN application ANN applications can be divided into two main tasks:
Classification Regression
18
ANN application More detailed examples: Image and signal analysis
Character recognition Sales forecasting Industrial process control Customer research Data validation Risk management Target marketing
19
Questions What are the two ANN architecture categories?
Name at least two activation functions. Why are they needed? What are the ANN learning categories? What problems can arise during ANN learning? Name some ANN applications.
20
Thank you for your attention