Presentation on theme: "Introduction to Neural Networks" (Freek Stulp). Presentation transcript:

1 Introduction to Neural Networks Freek Stulp

2 Overview Biological Background; Artificial Neuron; Classes of Neural Networks: 1. Perceptrons, 2. Multi-Layered Feed-Forward Networks, 3. Recurrent Networks; Conclusion

3 Biological Background A neuron consists of: cell body, dendrites, axon, synapses. Neural activation travels through the dendrites and axon. Synapses have different strengths.

4 Artificial Neuron Input links (dendrites) deliver the activations a_j; the unit (cell body) computes the weighted sum in_i = Σ_j a_j W_ji and the activation a_i = g(in_i); output links (axon) pass a_i on to other units.
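The artificial neuron above can be sketched in a few lines. This is a minimal illustration, not part of the original slides; the sigmoid used as the default g is an assumption, since the slide leaves the activation function unspecified.

```python
import math

# Unit i sums its weighted inputs, in_i = sum_j a_j * W_ji,
# then applies an activation function g to get a_i = g(in_i).
def neuron_output(activations, weights, g=None):
    if g is None:
        g = lambda x: 1.0 / (1.0 + math.exp(-x))  # logistic sigmoid (illustrative choice)
    in_i = sum(a_j * w_ji for a_j, w_ji in zip(activations, weights))
    return g(in_i)
```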

5 Class I: Perceptron a = g(in), with in = Σ_j W_j a_j, i.e. a = g(-W_0 + W_1 a_1 + W_2 a_2). The activation is a hard threshold: g(in) = 0 if in < 0, g(in) = 1 if in > 0.
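As a sketch, the two-input perceptron on this slide looks like this (the function name and the example weights below are illustrative, not from the slides):

```python
# a = g(-W0 + W1*a1 + W2*a2), with a hard threshold g;
# W0 plays the role of a bias weight.
def perceptron(a1, a2, w0, w1, w2):
    in_ = -w0 + w1 * a1 + w2 * a2
    return 1 if in_ >= 0 else 0
```

With the weights W0 = 0.5, W1 = W2 = 1, this unit computes boolean OR.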

6 Learning in Perceptrons Perceptrons can learn mappings from inputs I to outputs O by changing the weights W. Training set D: inputs I_0, I_1, ..., I_n and targets T_0, T_1, ..., T_n. Example: boolean OR.

d   I     T
0   0 0   0
1   0 1   1
2   1 0   1
3   1 1   1

The output O of the network is not necessarily equal to T!
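The boolean OR training set from this slide can be written as a list of (input I_d, target T_d) pairs, a representation the later update-rule sketch can consume (the variable name D follows the slide):

```python
# Training set D for boolean OR: each entry is (inputs I_d, target T_d).
D = [((0, 0), 0),
     ((0, 1), 1),
     ((1, 0), 1),
     ((1, 1), 1)]
```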

7 Learning in Perceptrons The error is often defined as E(W) = 1/2 Σ_{d∈D} (t_d - o_d)². Go towards the minimum error! Update rules: w_i ← w_i + Δw_i, with Δw_i = -η ∂E/∂w_i, and ∂E/∂w_i = ∂/∂w_i [1/2 Σ_{d∈D} (t_d - o_d)²] = -Σ_{d∈D} (t_d - o_d) x_id. This is called gradient descent.
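One step of the gradient-descent rule above can be sketched for a linear unit o_d = Σ_i w_i x_id (the linear unit, the function name, and the learning rate η = 0.1 are illustrative assumptions, not from the slides):

```python
# One batch gradient-descent step:
#   delta_w_i = -eta * dE/dw_i = eta * sum_d (t_d - o_d) * x_id
def gradient_descent_step(weights, data, eta=0.1):
    deltas = [0.0] * len(weights)
    for x, t in data:  # data: list of (input vector x_d, target t_d)
        o = sum(w * xi for w, xi in zip(weights, x))  # linear unit output o_d
        for i, xi in enumerate(x):
            deltas[i] += eta * (t - o) * xi
    return [w + dw for w, dw in zip(weights, deltas)]
```

Repeated steps move W towards the minimum of E(W), as the slide describes.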

8 Class II: Multi-layer Feed-forward Networks Feed-forward: each unit's output links connect only to units in the next layer (Input → Hidden → Output). Multiple layers: one or more hidden layers. Complex non-linear functions can be represented.

9 Learning in MLFF Networks For the output layer, weight updating is similar to perceptrons. Problem: what are the errors in the hidden layer? Backpropagation algorithm: for each hidden layer (working from output to input), determine for each unit how much it contributed to the errors in the previous layer (the one nearer the output), and adapt its weights according to this contribution. This is also gradient descent.
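The backpropagation idea above can be sketched for a single hidden layer of sigmoid units. This is an illustrative implementation under assumptions the slide does not fix (sigmoid activations, one output unit, learning rate η = 0.5, and all function names):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One backpropagation step on a single training example (x, t):
# forward pass, then propagate the output error back to the hidden layer.
def backprop_step(x, t, W_hidden, W_out, eta=0.5):
    # Forward pass: hidden activations h_j, then output o.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    o = sigmoid(sum(w * hj for w, hj in zip(W_out, h)))
    # Output error term (gradient descent on 1/2 (t - o)^2, as for perceptrons).
    delta_o = (t - o) * o * (1 - o)
    # Hidden error terms: each unit's contribution to the output error,
    # weighted by the connection it feeds forward through.
    delta_h = [W_out[j] * delta_o * h[j] * (1 - h[j]) for j in range(len(h))]
    # Weight updates for both layers.
    W_out = [w + eta * delta_o * h[j] for j, w in enumerate(W_out)]
    W_hidden = [[w + eta * delta_h[j] * xi for w, xi in zip(row, x)]
                for j, row in enumerate(W_hidden)]
    return W_hidden, W_out
```

Each step performs gradient descent on the same squared-error function used for perceptrons, now over all layers at once.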

10 Class III: Recurrent Networks No restrictions on connections: links may run between any units (input, hidden, output), including backwards. Behaviour is more difficult to predict/understand.

11 Conclusion Inspiration comes from biology, though artificial brains are still very far away. Perceptrons are too simple for most problems. MLFF networks are good function approximators; many of your articles use these networks! Recurrent networks are complex but useful too.

12 Literature Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig. Machine Learning, Tom M. Mitchell.

