ANNs (Artificial Neural Networks)
THE PERCEPTRON
Perceptron. X is a (column) vector of inputs and W is a (column) vector of weights. The output is g(X) = w_1x_1 + w_2x_2 + … + w_nx_n + w_0, where w_0 is the bias or threshold weight.
Perceptron usage:
1. training/learning
– local vs. global minima
– supervised vs. unsupervised
2. feedforward (or testing or usage or application)
– Indicate class i if the output g(X) > 0; not class i otherwise.
The bias or threshold can be represented as simply another weight (w_0) with a constant input of 1 (x_0 = 1).
Perceptron. With the bias folded in, g(X) = w_0x_0 + w_1x_1 + … + w_nx_n = W·X. This is the dot product of two vectors, W and X.
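A minimal sketch of this computation in NumPy (the example weights and inputs are illustrative, not from the slides):

```python
import numpy as np

def perceptron_output(W, X):
    """Perceptron forward pass: the dot product W.X.

    W[0] is the bias weight w_0; X has the constant input x_0 = 1
    prepended so the bias is handled like any other weight.
    """
    return np.dot(W, X)

W = np.array([-0.5, 1.0, 1.0])   # w_0 (bias), w_1, w_2
X = np.array([1.0, 0.0, 1.0])    # x_0 = 1, x_1, x_2
g = perceptron_output(W, X)
print("class i" if g > 0 else "not class i")   # g = 0.5 > 0, so class i
```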
Activation functions:
1. linear
2. threshold
3. sigmoid
4. tanh
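The slide gives only the names, so the definitions below are a sketch using the usual conventions for these four functions:

```python
import numpy as np

def linear(x):
    return x                            # identity: output equals net input

def threshold(x):
    return np.where(x > 0, 1.0, 0.0)    # hard step at zero

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # logistic: squashes into (0, 1)

def tanh(x):
    return np.tanh(x)                   # squashes into (-1, 1)
```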
Relationship between sigmoid and tanh: tanh(x) = 2σ(2x) − 1, so tanh is just the logistic sigmoid rescaled and shifted to the range (−1, 1).
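A quick numeric check of this identity (not from the slides):

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
x = np.linspace(-5.0, 5.0, 101)
# tanh(x) and 2*sigmoid(2x) - 1 agree to machine precision.
print(np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0))   # True
```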
Perceptron training/learning:
1. Initialize the weights (including the threshold weight, w_0) to random numbers in [−0.5, +0.5] (uniformly distributed).
2. Select a training input vector, X, and its desired output, d.
3. Calculate the actual output, y.
4. Learn: w_i ← w_i + η(d − y)x_i, where η in [0, 1] is the gain or learning rate. Only perform this step when the output is incorrect. (A runnable sketch follows below.)
“The proof of the perceptron learning theorem (Rosenblatt 1962) demonstrated that a perceptron could learn anything it could represent.” [3] So what can it represent?
– Any problem that is linearly separable.
– Are all problems linearly separable?
Consider a binary function of n binary inputs.
– “A neuron with n binary inputs can have 2^n different input patterns, consisting of ones and zeros. Because each input pattern can produce two different binary outputs, 1 and 0, there are 2^(2^n) different functions of n variables.” [3]
– How many of these are separable? Not many! For n = 6, 2^(2^6) ≈ 1.8×10^19, but only 5,028,134 are linearly separable.
– AND and OR are linearly separable, but XOR is not (see the sketch below)!
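The claim is easy to check empirically. A sketch reusing the training loop above: the perceptron reaches zero errors on AND and OR but never on XOR (the truth tables are standard; the function name is illustrative):

```python
import numpy as np

def perceptron_converges(X, d, epochs=200, eta=0.1, seed=0):
    """Return True if perceptron training reaches zero errors on (X, d)."""
    W = np.random.default_rng(seed).uniform(-0.5, 0.5, size=X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, target in zip(X, d):
            y = 1.0 if np.dot(W, x) > 0 else 0.0
            if y != target:
                W += eta * (target - y) * x
                errors += 1
        if errors == 0:
            return True
    return False

X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
print("AND:", perceptron_converges(X, np.array([0., 0., 0., 1.])))  # True
print("OR: ", perceptron_converges(X, np.array([0., 1., 1., 1.])))  # True
print("XOR:", perceptron_converges(X, np.array([0., 1., 1., 0.])))  # False
```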
How about adding additional layers? “Multilayer networks provide no increase in computational power over a single-layer network unless there is a nonlinear function between layers.” [3]
– That is because matrix multiplication is associative: (XW_1)W_2 = X(W_1W_2).
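A quick numeric illustration (the shapes are arbitrary): without a nonlinearity between them, two stacked linear layers collapse into a single linear map.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))     # a batch of 4 inputs with 3 features
W1 = rng.standard_normal((3, 5))    # "hidden layer" weights
W2 = rng.standard_normal((5, 2))    # "output layer" weights

# Associativity: applying W1 then W2 equals applying the product W1 @ W2.
print(np.allclose((X @ W1) @ W2, X @ (W1 @ W2)))   # True
```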
“It is natural to ask if every decision can be implemented by such a three-layer network … The answer, due ultimately to Kolmogorov …, is “yes” – any continuous function from input to output can be implemented in a three-layer net, given sufficient number of hidden units, proper nonlinearities, and weights.” [1]
MULTILAYER NETWORKS
Note the threshold nodes in this multilayer network, from “Usefulness of artificial neural networks to predict follow-up dietary protein intake in hemodialysis patients,” http://www.nature.com/ki/journal/v66/n1/fig_tab/4494599f1.html
Backpropagation (learning in a multilayer network)
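The derivation itself is not in this transcript; the following is a minimal sketch of backpropagation for a network with one hidden layer of sigmoid units, trained on XOR. The network size, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# XOR: the problem a single perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 3 sigmoid units, one sigmoid output unit.
W1 = rng.uniform(-0.5, 0.5, (2, 3)); b1 = np.zeros(3)
W2 = rng.uniform(-0.5, 0.5, (3, 1)); b2 = np.zeros(1)
eta = 0.5

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)                   # hidden activations
    y = sigmoid(h @ W2 + b2)                   # network outputs

    # Backward pass: deltas use the sigmoid derivative y(1 - y).
    delta_out = (y - d) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates, propagating the error backward.
    W2 -= eta * h.T @ delta_out; b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid; b1 -= eta * delta_hid.sum(axis=0)

print(np.round(y, 2))   # typically approaches [[0], [1], [1], [0]]
```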
Sigmoid/logistic function (and its derivative): σ(x) = 1/(1 + e^(−x)), whose derivative has the convenient form σ′(x) = σ(x)(1 − σ(x)).
References
1. R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, John Wiley and Sons, 2001.
2. S.K. Rogers and M. Kabrisky, An Introduction to Biological and Artificial Neural Networks for Pattern Recognition, SPIE Optical Engineering Press, 1991.
3. P.D. Wasserman, Neural Computing, Van Nostrand Reinhold, 1989.