CSE 573 Introduction to Artificial Intelligence: Neural Networks
Henry Kautz, Autumn 2005
Perceptron
A sigmoid unit: a "soft" threshold applied to a constant term plus a weighted sum of the inputs.
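In standard textbook notation (the slide presented this as a labeled diagram; the symbols below are conventional, not the slide's own):

\[
o = \sigma\!\left(w_0 + \sum_i w_i x_i\right),
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}
\]

Here w_0 is the constant (bias) term and the sigmoid \sigma provides the "soft" threshold.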
Training a Neuron
Idea: adjust the weights to reduce the sum of squared errors over the training set.
Error = difference between the actual and the intended output.
Algorithm: gradient descent
- Calculate the derivative (slope) of the error function.
- Take a small step in the "downward" direction.
- The step size is the "training rate" (learning rate); see the sketch below.
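A minimal sketch of this procedure on a one-variable squared-error function (the target value, starting weight, and training rate are illustrative, not from the course):

```python
# Gradient descent on a toy squared-error function E(w) = (w - 3)^2.
# The target 3.0, starting weight, and training rate are illustrative only.

def error(w):
    return (w - 3.0) ** 2

def error_slope(w):
    # Derivative (slope) of the error function with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0      # initial weight
eta = 0.1    # "training rate" (step size)

for step in range(50):
    w -= eta * error_slope(w)   # small step in the "downward" direction

print(w, error(w))              # w approaches 3 and the error approaches 0
```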
Gradient of the Error Function
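The equations on these slides did not survive extraction. A standard reconstruction for a single sigmoid unit with squared error, in the notation used above (not necessarily the slide's), is:

\[
E = \tfrac{1}{2}\sum_{d}\bigl(t_d - o_d\bigr)^2,
\qquad
o_d = \sigma\!\left(w_0 + \sum_i w_i x_{i,d}\right)
\]

\[
\frac{\partial E}{\partial w_i}
= -\sum_{d}\bigl(t_d - o_d\bigr)\,o_d\bigl(1 - o_d\bigr)\,x_{i,d}
\]

where t_d is the intended output and o_d the actual output on training example d.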
Single Unit Training Rule
In short: adjust weights on inputs that were “on” in proportion to the error and the size of the output
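A sketch of that rule for one sigmoid unit, assuming squared error and the gradient above; the names and values (eta, weights, bias) are illustrative rather than taken from the course materials:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(weights, bias, x, target, eta=0.5):
    """One gradient-descent update for a single sigmoid unit.

    Inputs that were "off" (x_i == 0) leave their weights unchanged;
    the change is proportional to the error (target - o) and to the
    slope of the output, o * (1 - o).
    """
    o = sigmoid(bias + weights @ x)          # actual output
    delta = (target - o) * o * (1.0 - o)     # error times output slope
    weights = weights + eta * delta * x      # adjust weights on active inputs
    bias = bias + eta * delta                # the constant term is always "on"
    return weights, bias

# Illustrative example: nudge a unit toward outputting 1 for the input (1, 0, 1).
w, b = np.zeros(3), 0.0
for _ in range(200):
    w, b = train_step(w, b, np.array([1.0, 0.0, 1.0]), target=1.0)
print(sigmoid(b + w @ np.array([1.0, 0.0, 1.0])))   # close to 1
```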
Beyond Perceptrons
- Single units can learn any linear function.
- A single layer of units can learn any set of linear inequalities.
- Adding additional layers of "hidden" units between input and output allows any function to be learned!
- Hidden units are trained by propagating errors back through the network; see the sketch below.
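To make the hidden-layer idea concrete, here is a small self-contained backpropagation sketch that learns XOR, a function no single unit can represent. The network size, learning rate, and initialization are illustrative choices, not the course's demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    # Append a column of ones so the last weight row acts as the constant term.
    return np.hstack([a, np.ones((a.shape[0], 1))])

# XOR training set: not linearly separable, so hidden units are required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative architecture: 2 inputs, 4 hidden units, 1 output.
W1 = rng.normal(scale=1.0, size=(3, 4))   # input (+bias) -> hidden
W2 = rng.normal(scale=1.0, size=(5, 1))   # hidden (+bias) -> output
eta = 0.5

for epoch in range(10000):
    # Forward pass
    H = sigmoid(add_bias(X) @ W1)          # hidden activations
    O = sigmoid(add_bias(H) @ W2)          # network output

    # Backward pass: propagate errors from the output back to the hidden layer
    delta_out = (T - O) * O * (1 - O)
    delta_hid = (delta_out @ W2[:-1].T) * H * (1 - H)

    # Gradient-descent weight updates
    W2 += eta * add_bias(H).T @ delta_out
    W1 += eta * add_bias(X).T @ delta_hid

print(np.round(O.ravel(), 2))   # should approach 0, 1, 1, 0
```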
Character Recognition Demo
Beyond Backprop…
Backpropagation is the most common algorithm for supervised learning with feed-forward neural networks.
Many other learning rules, for these networks and for other settings, have been studied.
Hebbian Learning
- An alternative to backprop, used for unsupervised learning.
- Increase the weight between connected neurons whenever both fire simultaneously.
- Neurologically plausible (Hebb, 1949).
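A minimal sketch of the verbal rule above (the learning rate and the activity patterns are illustrative; the slides state only the qualitative rule):

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.01):
    """Hebbian learning: strengthen the weight between two neurons
    whenever the presynaptic and postsynaptic neurons fire together.

    W[i, j] connects presynaptic neuron j to postsynaptic neuron i,
    so the update is the outer product of the two activity vectors.
    """
    return W + eta * np.outer(post, pre)

# Illustrative example: repeatedly pairing the same input and output pattern
# grows the weights that link their co-active neurons.
W = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # presynaptic firing pattern
post = np.array([0.0, 1.0])       # postsynaptic firing pattern
for _ in range(10):
    W = hebbian_update(W, pre, post)
print(W)
```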
Self-organizing maps (SOMs)
- Unsupervised learning for clustering inputs.
- "Winner take all" network: one cell per cluster.
- Learning rule: update the weights near the "winning" neuron to make it closer to the input.
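A sketch of the winner-take-all update, reduced to its simplest form (one prototype weight vector per cluster and no neighborhood function, which a full SOM would have); the cluster count, data, and learning rate are illustrative:

```python
import numpy as np

def winner_take_all_step(prototypes, x, eta=0.1):
    """Find the prototype closest to the input ("winner take all")
    and move its weights closer to the input vector."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[winner] += eta * (x - prototypes[winner])
    return winner

# Illustrative data: two clusters of 2-D points and two prototype cells.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])
prototypes = rng.uniform(0.0, 1.0, (2, 2))

for _ in range(20):
    for x in rng.permutation(data):
        winner_take_all_step(prototypes, x)

print(prototypes)   # one prototype ends up near (0, 0), the other near (1, 1)
```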
Recurrent Neural Networks
- Include time-delay feedback loops.
- Can handle tasks over temporal data, such as sequence prediction.
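A minimal sketch of the feedback-loop idea: an Elman-style recurrent step in which the previous hidden state is fed back in alongside the current input. The sizes and random weights below are illustrative only, and no training is shown:

```python
import numpy as np

def rnn_step(x, h_prev, W_in, W_rec, b):
    """One recurrent step: the new hidden state depends on the current
    input and, through a time-delay feedback loop, on the previous
    hidden state."""
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

# Illustrative dimensions and random weights.
rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
b = np.zeros(n_hid)

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # a short sequence of 5 input vectors
    h = rnn_step(x, h, W_in, W_rec, b)

print(h)   # the final hidden state summarizes the sequence seen so far
```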