Slide 1: Neural Networks
MUMT 611
Philippe Zaborowski, April 2005
Slide 2: Table of Contents
Background
Examples
Types of Neural Networks
Applet
Slide 3: What are neural nets?
A software model that tries to simulate the learning process
"Inspired" by brain cells called neurons
Unlike the human brain, a neural net's structure is fixed once defined; learning changes only the connection weights
Slide 4: The Neuron
Slide 5: The Artificial Neuron
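The artificial neuron on this slide can be sketched in a few lines: it forms a weighted sum of its inputs and passes the result through an activation function. A sigmoid is used here; the particular weights and inputs are made-up values for illustration.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a sigmoid activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashes net into (0, 1)

# Example: three inputs, three (arbitrary) weights; net = 0.5 + 0.2 = 0.7
y = neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.2])
```

Other activation functions (a hard limiter, for instance, as in the perceptron later in this deck) plug into the same structure by replacing the sigmoid line.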
Slide 6: Neuron Layers
Slide 7: Learning Process
Supervised: each input pattern has a target pattern
  0001 => 001
  0010 => 010
Unsupervised: no target output; the network self-organizes
Slide 8: Example: Forward Propagation
Input pattern => target pattern
  01 => 0
  11 => 1
Slide 9: Example: Forward Propagation
Input 1 of output neuron: 0 * 0.35 = 0
Input 2 of output neuron: 1 * 0.81 = 0.81
Sum of inputs: 0 + 0.81 = 0.81 (= output)
Error: 0 - 0.81 = -0.81
Change for weight 1: 0.25 * 0 * (-0.81) = 0
Change for weight 2: 0.25 * 1 * (-0.81) = -0.2025
New weight 1: 0.35 + 0 = 0.35 (unchanged)
New weight 2: 0.81 + (-0.2025) = 0.6075
Slide 10: Example: Forward Propagation
Input 1 of output neuron: 1 * 0.35 = 0.35
Input 2 of output neuron: 1 * 0.6075 = 0.6075
Sum of inputs: 0.35 + 0.6075 = 0.9575 (= output)
Error: 1 - 0.9575 = 0.0425
Change for weight 1: 0.25 * 1 * 0.0425 = 0.010625
Change for weight 2: 0.25 * 1 * 0.0425 = 0.010625
New weight 1: 0.35 + 0.010625 = 0.360625
New weight 2: 0.6075 + 0.010625 = 0.618125
Finally we compute the net error over both presentations: (-0.81)² + (0.0425)² = 0.65790625
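The two training steps on slides 9 and 10 can be reproduced directly in code. This sketch applies the same rule the slides use (new weight = old weight + 0.25 * input * error) to a single linear output neuron:

```python
# Reproduces the slide's worked example: one linear output neuron,
# weights updated by the delta rule with learning rate 0.25.
def train_step(weights, inputs, target, lr=0.25):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    new_weights = [w + lr * x * error for w, x in zip(weights, inputs)]
    return new_weights, output, error

weights = [0.35, 0.81]
weights, out1, err1 = train_step(weights, [0, 1], 0)  # slide 9: output 0.81, error -0.81
weights, out2, err2 = train_step(weights, [1, 1], 1)  # slide 10: output 0.9575, error 0.0425
net_error = err1 ** 2 + err2 ** 2                     # slide 10: 0.65790625
```

After the two steps, `weights` holds the slide's final values, 0.360625 and 0.618125.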
Slide 11: Applications
Image processing
Pattern classification
Speech analysis
Optimization problems
Robot steering
Slide 12: Perceptron (Rosenblatt, 1958)
Type: feedforward
Layers: 1 input, 1 output
Input: binary
Activation: hard limiter
Learning method: supervised
Learning algorithm: Hebb rule
Uses: simple logical operations, pattern classification
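A minimal sketch of such a perceptron is shown below, trained on one of the simple logical operations the slide mentions (AND), with a hard-limiter activation. The learning rate and epoch count are arbitrary illustrative choices:

```python
# A Rosenblatt-style perceptron learning logical AND.
# Hard-limiter activation; weights nudged whenever the output is wrong.
def step(x):
    return 1 if x >= 0 else 0  # hard limiter

def train_perceptron(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_samples)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on weights that classify all four patterns correctly; XOR, famously, would not converge, which motivates the multi-layer networks on the next slides.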
Slide 13: Multi-Layer Perceptron (Minsky, 1969)
Type: feedforward
Layers: 1 input, 1 or more hidden, 1 output
Input: binary
Activation: hard limiter / sigmoid
Learning method: supervised
Learning algorithm: backpropagation
Uses: complex logical operations, pattern classification
Slide 14: Backpropagation (Hinton, 1986)
Type: feedforward
Layers: 1 input, 1 or more hidden, 1 output
Input: binary
Activation: sigmoid
Learning method: supervised
Learning algorithm: backpropagation
Uses: complex logical operations, pattern classification, speech analysis
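One backpropagation step for a small 2-2-1 sigmoid network might look like the sketch below. The weights are arbitrary example values, and the update follows the standard output-delta and hidden-delta rules; a single step with a modest learning rate should shrink the squared error:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_h, w_o):
    """2 inputs -> 2 sigmoid hidden units -> 1 sigmoid output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)))
    return h, y

def backprop_step(x, target, w_h, w_o, lr=0.5):
    h, y = forward(x, w_h, w_o)
    # Output delta: error times the sigmoid derivative y * (1 - y)
    delta_o = (target - y) * y * (1 - y)
    # Hidden deltas: delta_o propagated back through the output weights
    delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    w_o = [w_o[j] + lr * delta_o * h[j] for j in range(len(w_o))]
    w_h = [[w_h[j][i] + lr * delta_h[j] * x[i] for i in range(len(x))]
           for j in range(len(w_h))]
    return w_h, w_o

x, target = [1.0, 0.0], 1.0
w_h = [[0.4, -0.2], [0.3, 0.7]]   # hypothetical starting weights
w_o = [0.6, -0.5]
_, y_before = forward(x, w_h, w_o)
w_h, w_o = backprop_step(x, target, w_h, w_o)
_, y_after = forward(x, w_h, w_o)
```

Repeating `backprop_step` over a training set is what lets the multi-layer networks on slides 13 and 14 learn the complex operations (like XOR) that a single perceptron cannot.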
Slide 15: Hopfield (Hopfield, 1982)
Type: feedback
Layers: 1 matrix
Input: binary
Activation: hard limiter / signum
Learning method: unsupervised
Learning algorithm: delta learning rule, simulated annealing
Uses: pattern association, optimization problems
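The pattern-association use can be illustrated with a toy Hopfield-style network: store one bipolar pattern with the Hebbian outer-product rule, then recall it from a corrupted copy using the signum update. (This is a simplified sketch; it does not implement the delta-rule or simulated-annealing variants the slide lists.)

```python
# Toy Hopfield network: store bipolar (+1/-1) patterns with the
# Hebbian outer-product rule, then recall by repeated signum updates.
def train_hopfield(patterns, n):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:            # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):            # asynchronous signum update
            net = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if net >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train_hopfield([stored], 6)
noisy = [1, -1, -1, -1, 1, -1]        # one bit flipped
recovered = recall(w, noisy)          # settles back to the stored pattern
```

The feedback dynamics drive the state into the nearest stored attractor, which is exactly the pattern-association behaviour the slide describes.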
Slide 16: Kohonen (Kohonen, 1982)
Type: feedforward
Layers: 1 input, 1 map layer
Input: binary or real
Activation: sigmoid
Learning method: unsupervised
Learning algorithm: self-organization
Uses: pattern classification, optimization problems, simulation
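Kohonen-style self-organization can be sketched with a small one-dimensional map: for each input, the best-matching node and its map neighbours move toward that input, with no target output involved. The node count, learning rate, and neighbourhood radius below are arbitrary illustrative choices:

```python
import random

random.seed(0)  # deterministic initial node weights for this sketch

def closest(nodes, x):
    """Index of the best-matching node (smallest squared distance)."""
    return min(range(len(nodes)),
               key=lambda i: sum((w - xi) ** 2 for w, xi in zip(nodes[i], x)))

def train_som(data, n_nodes=4, epochs=50, lr=0.3, radius=1):
    nodes = [[random.random(), random.random()] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            win = closest(nodes, x)
            for i in range(len(nodes)):
                if abs(i - win) <= radius:  # neighbourhood on the map line
                    nodes[i] = [w + lr * (xi - w) for w, xi in zip(nodes[i], x)]
    return nodes

# Two well-separated clusters of 2-D points (made-up data)
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.85, 0.9]]
nodes = train_som(data)
```

After training, different map nodes win for the two clusters, which is how a self-organized map performs the unsupervised pattern classification the slide mentions.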