1 Neural Networks MUMT 611 Philippe Zaborowski April 2005
2 Table of Contents: Background, Examples, Types of Neural Networks, Applet
3 What are neural nets? A software model that tries to simulate the learning process, "inspired" by brain cells called neurons. Unlike the human brain, a neural net has a fixed structure: learning only adjusts the connection weights.
4 The Neuron
5 The Artificial Neuron
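To make the diagram concrete, here is a minimal sketch (not from the presentation) of an artificial neuron: inputs are multiplied by weights, summed, and passed through an activation function. The function names and the step threshold are illustrative choices.

```python
def step(x):
    """Hard-limiter activation: fire (1) if the summed input is non-negative."""
    return 1 if x >= 0.0 else 0

def neuron_output(inputs, weights, activation=step):
    """Weighted sum of the inputs, then the activation function."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return activation(total)

# Example: two inputs with weights 0.35 and 0.81 (the weights used later in the slides)
print(neuron_output([0, 1], [0.35, 0.81]))  # -> 1
```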
6 Neuron Layers
7 Learning Process. Supervised: input pattern => target pattern, e.g. 0001 => 001, 0010 => 010. Unsupervised: no target output; the network self-organizes.
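As a small illustration of the supervised case (an assumed representation, not part of the slides), the input/target patterns above can be held as pairs that a training loop walks over:

```python
# Supervised learning: each input pattern is paired with a target pattern.
training_set = [
    ([0, 0, 0, 1], [0, 0, 1]),
    ([0, 0, 1, 0], [0, 1, 0]),
]

for inputs, target in training_set:
    # A supervised learner compares its output against `target`
    # and adjusts its weights to reduce the difference.
    print(inputs, "=>", target)
```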
8 Example: Forward propagation. Input pattern => target pattern: 01 => 0, 11 => 1.
9 Example: Forward propagation, first pattern (01 => 0)
Input 1 of output neuron: 0 * 0.35 = 0
Input 2 of output neuron: 1 * 0.81 = 0.81
Add the inputs: 0 + 0.81 = 0.81 (= output)
Compute the error: 0 - 0.81 = -0.81
Value for changing weight 1: 0.25 * 0 * (-0.81) = 0
Value for changing weight 2: 0.25 * 1 * (-0.81) = -0.2025
Change weight 1: 0.35 + 0 = 0.35 (not changed)
Change weight 2: 0.81 + (-0.2025) = 0.6075
10 Example: Forward propagation, second pattern (11 => 1)
Input 1 of output neuron: 1 * 0.35 = 0.35
Input 2 of output neuron: 1 * 0.6075 = 0.6075
Add the inputs: 0.35 + 0.6075 = 0.9575 (= output)
Compute the error: 1 - 0.9575 = 0.0425
Value for changing weight 1: 0.25 * 1 * 0.0425 = 0.010625
Value for changing weight 2: 0.25 * 1 * 0.0425 = 0.010625
Change weight 1: 0.35 + 0.010625 = 0.360625
Change weight 2: 0.6075 + 0.010625 = 0.618125
Finally we compute the net error over both patterns: (-0.81)² + (0.0425)² = 0.65790625
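The two steps above can be reproduced with a short script. This is a sketch of the single-neuron delta-rule update the slides walk through, assuming the learning rate 0.25 and initial weights 0.35 and 0.81 given on the slides; it is not code from the original presentation.

```python
# Single output neuron trained with the delta rule, reproducing the
# numbers on the two slides above.
learning_rate = 0.25
weights = [0.35, 0.81]
patterns = [([0, 1], 0), ([1, 1], 1)]  # input pattern => target

net_error = 0.0
for inputs, target in patterns:
    output = sum(i * w for i, w in zip(inputs, weights))  # weighted sum, linear output
    error = target - output                               # e.g. 0 - 0.81 = -0.81
    net_error += error ** 2
    # delta rule: w_i <- w_i + rate * input_i * error
    weights = [w + learning_rate * i * error for w, i in zip(weights, inputs)]
    print(f"output={output:.4f} error={error:.4f} weights={weights}")

print(f"net error = {net_error:.8f}")  # 0.6561 + 0.00180625 = 0.65790625
```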
11 Applications: image processing, pattern classification, speech analysis, optimization problems, robot steering.
12 Perceptron (Rosenblatt 1958). Type: feedforward; Layers: 1 input, 1 output; Input: binary; Activation: hard limiter; Learning method: supervised; Learning algorithm: Hebb; Use: simple logical operations, pattern classification.
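A minimal sketch of the perceptron described above: a single layer of weights, binary inputs, hard-limiter activation, supervised weight updates. The perceptron learning rule, the logical-AND training set and the learning rate used here are illustrative assumptions, not taken from the presentation.

```python
def hard_limiter(x):
    return 1 if x >= 0.0 else 0

def train_perceptron(samples, n_inputs, rate=0.1, epochs=20):
    """Perceptron learning rule: nudge weights by (target - output) * input."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = hard_limiter(sum(i * w for i, w in zip(inputs, weights)) + bias)
            error = target - output
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Simple logical operation (AND), one of the uses listed above.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_samples, n_inputs=2)
for inputs, target in and_samples:
    print(inputs, hard_limiter(sum(i * wi for i, wi in zip(inputs, w)) + b), target)
```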
13 Multi-Layer Perceptron (Minsky 1969). Type: feedforward; Layers: 1 input, 1 or more hidden, 1 output; Input: binary; Activation: hard limiter / sigmoid; Learning method: supervised; Learning algorithm: backpropagation; Use: complex logical operations, pattern classification.
14 Backpropagation (Hinton 1986). Type: feedforward; Layers: 1 input, 1 or more hidden, 1 output; Input: binary; Activation: sigmoid; Learning method: supervised; Learning algorithm: backpropagation; Use: complex logical operations, pattern classification, speech analysis.
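A sketch of a backpropagation network matching the spec above: one hidden layer, sigmoid activations, supervised training by backpropagation. The XOR task (a "complex" logical operation a single-layer perceptron cannot learn), hidden-layer size, learning rate and seed are assumptions made for the example.

```python
import math
import random

random.seed(1)
N_IN, N_HID = 2, 3
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_hid = [random.uniform(-1, 1) for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]
b_out = random.uniform(-1, 1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    """Forward pass: input -> hidden layer -> single output unit."""
    h = [sigmoid(sum(w * xi for w, xi in zip(w_hid[i], x)) + b_hid[i])
         for i in range(N_HID)]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)) + b_out)
    return h, y

samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
rate = 0.5

for _ in range(10000):
    for x, target in samples:
        h, y = forward(x)
        # output delta: derivative of the squared error through the sigmoid
        d_out = (target - y) * y * (1 - y)
        # hidden deltas: error propagated back through the output weights
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(N_HID)]
        # weight updates
        for i in range(N_HID):
            w_out[i] += rate * d_out * h[i]
            for j in range(N_IN):
                w_hid[i][j] += rate * d_hid[i] * x[j]
            b_hid[i] += rate * d_hid[i]
        b_out += rate * d_out

for x, target in samples:
    _, y = forward(x)
    print(x, round(y, 3), target)   # outputs should approach the XOR targets
```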
15 Hopfield (Hopfield 1982). Type: feedback; Layers: 1 matrix; Input: binary; Activation: hard limiter / signum; Learning method: unsupervised; Learning algorithm: delta learning rule, simulated annealing; Use: pattern association, optimization problems.
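A sketch of Hopfield-style pattern association: bipolar patterns are stored in a symmetric weight matrix and recalled from a noisy cue by repeated signum updates. The Hebbian outer-product storage rule used here is one common choice; the slide lists the delta rule and simulated annealing, which this sketch does not implement. The stored patterns are made up for the example.

```python
def store(patterns):
    """Hebbian outer-product storage with a zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def sign(x):
    return 1 if x >= 0 else -1

def recall(w, state, steps=5):
    """Asynchronous signum updates until (hopefully) a stored pattern is reached."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            s[i] = sign(sum(w[i][j] * s[j] for j in range(n)))
    return s

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
w = store(patterns)
noisy = [1, -1, 1, -1, 1, 1]   # first pattern with the last bit flipped
print(recall(w, noisy))        # converges back to the first stored pattern
```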
16 Kohonen (Kohonen 1982). Type: feedforward; Layers: 1 input, 1 map layer; Input: binary or real; Activation: sigmoid; Learning method: unsupervised; Learning algorithm: self-organization; Use: pattern classification, optimization problems, simulation.
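A sketch of Kohonen self-organization: map units compete for each real-valued input, and the best-matching unit and its neighbours are pulled toward it. Map size, learning rate, neighbourhood width and the random input data are all assumptions made for the example.

```python
import math
import random

random.seed(0)
n_units = 5  # one-dimensional map layer of 5 units, each with a 2-D weight vector
weights = [[random.random(), random.random()] for _ in range(n_units)]

def best_matching_unit(x):
    """Index of the map unit whose weight vector is closest to the input."""
    return min(range(n_units),
               key=lambda i: sum((x[d] - weights[i][d]) ** 2 for d in range(2)))

rate, sigma = 0.5, 1.0
inputs = [[random.random(), random.random()] for _ in range(200)]

for x in inputs:
    bmu = best_matching_unit(x)
    for i in range(n_units):
        # neighbourhood function: units close to the winner on the map move more
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        for d in range(2):
            weights[i][d] += rate * h * (x[d] - weights[i][d])
    # shrink learning rate and neighbourhood over time (simple schedule)
    rate *= 0.99
    sigma *= 0.99

print([[round(w, 2) for w in unit] for unit in weights])
```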