CSE 473: Introduction to Artificial Intelligence
Neural Networks
Henry Kautz, Autumn 2003
Perceptron (sigmoid unit)
[Figure: the unit forms a weighted sum of its inputs plus a constant term, then applies a "soft" threshold (the sigmoid).]
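In symbols (a standard sigmoid-unit formulation consistent with the labels above, with w_0 the constant term attached to a fixed input x_0 = 1):

o = \sigma\Big(\sum_{i=0}^{n} w_i x_i\Big), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}

The sigmoid acts as a "soft" threshold: it approaches 0 or 1 for strongly negative or positive sums, but changes smoothly in between.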
Training a Neuron
Idea: adjust the weights to reduce the sum of squared errors over the training set.
- Error = difference between actual and intended output
Algorithm: gradient descent
- Calculate the derivative (slope) of the error function
- Take a small step in the "downward" direction
- Step size is the "training rate"
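In the usual notation, with t_d the intended (target) output and o_d the actual output on training example d, the quantity being minimized is:

E(\mathbf{w}) = \frac{1}{2} \sum_d (t_d - o_d)^2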
Gradient of the Error Function
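For a sigmoid unit, the standard gradient (consistent with the training rule on the next slide) follows from the chain rule together with \sigma'(z) = \sigma(z)(1 - \sigma(z)):

\frac{\partial E}{\partial w_i} = -\sum_d (t_d - o_d)\, o_d (1 - o_d)\, x_{i,d}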
Single Unit Training Rule
In short: adjust weights on inputs that were “on” in proportion to the error and the size of the output
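Written out, this is the delta rule for a single sigmoid unit, with \eta the training rate:

w_i \leftarrow w_i + \eta\, (t - o)\, o (1 - o)\, x_i

A minimal sketch of this update in Python; the helper name train_unit and the AND example are illustrative, not from the slides:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_unit(X, t, rate=0.5, epochs=5000):
    """Gradient descent on squared error for one sigmoid unit."""
    X = np.hstack([np.ones((len(X), 1)), X])  # fixed input x0 = 1 for the constant term
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = sigmoid(X @ w)                 # actual outputs on all examples
        delta = (t - o) * o * (1 - o)      # error scaled by the sigmoid's slope
        w += rate * X.T @ delta            # small step in the "downward" direction
    return w

# Learn logical AND, a linearly separable function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])
w = train_unit(X, t)
print(np.round(sigmoid(np.hstack([np.ones((4, 1)), X]) @ w)))  # approx. [0. 0. 0. 1.]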
Beyond Perceptrons
- A single unit can learn any linear function.
- A single layer of units can learn any set of linear inequalities.
- Adding additional layers of "hidden" units between input and output allows essentially any function to be learned!
- Hidden units are trained by propagating errors back through the network, as sketched below.
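A minimal backpropagation sketch in the same squared-error setup, trained on XOR, which no single unit can represent; the layer sizes, random seed, and epoch count are illustrative choices, not from the slides:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

rate = 0.5
for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden-unit activations
    o = sigmoid(h @ W2 + b2)          # network output
    # Backward pass: propagate output errors back to the hidden layer
    d_out = (t - o) * o * (1 - o)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 += rate * h.T @ d_out; b2 += rate * d_out.sum(axis=0)
    W1 += rate * X.T @ d_hid; b1 += rate * d_hid.sum(axis=0)

print(np.round(o.ravel()))  # typically [0. 1. 1. 0.] after training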
Character Recognition Demo