CS206 Evolutionary Robotics

Slide 1 — Artificial Neural Networks
[Figure: two small networks, each with an input layer (values 0.6 and 0.9) connected by summing (+) links to an output layer; the first network's output is 0.6, the second's is 1.5 (the sum 0.6 + 0.9).]
Slide 2 — Synaptic Weights
[Figure: the networks from the previous slide, plus a weighted network: inputs 0.6 and 0.9 connect to two output neurons through synaptic weights -0.3, 0.3, 0.8, and 0.9, producing outputs 0.09 and 1.29.]
Slide 3 — Activation functions: keeping neuron values within limits
[Figure: inputs 0.6 and 0.9 feeding two output neurons through weights -0.3, 0.3, 0.8, and 0.9.]
σ(0.6*-0.3 + 0.9*0.3) = σ(0.09) = 0.09
σ(0.6*0.8 + 0.9*0.9) = σ(1.29) = 1.0
σ(x) = ?
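The slide leaves σ(x) open, but its two worked examples (σ(0.09) = 0.09, σ(1.29) = 1.0) are consistent with a function that clamps the weighted sum into [0, 1]. A minimal sketch, assuming that interpretation:

```python
def sigma(x):
    # Clamp into [0, 1] -- an assumed activation consistent with the
    # slide's examples: sigma(0.09) = 0.09 and sigma(1.29) = 1.0.
    return max(0.0, min(1.0, x))

inputs = [0.6, 0.9]
out1 = sigma(inputs[0] * -0.3 + inputs[1] * 0.3)  # sigma(0.09)
out2 = sigma(inputs[0] * 0.8 + inputs[1] * 0.9)   # sigma(1.29)
```

The clamp leaves small sums untouched (0.09 passes through) but caps large ones (1.29 becomes 1.0), which is exactly the "keeping values within limits" role the slide describes.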
Slide 4 — Neural networks as functions
Truth Table (and):
  input1  input2  output
  0       0       0
  0       1       0
  1       0       0
  1       1       1
Activation: σ(x) = 1 if x > _, 0 if x <= _ (threshold left blank on the slide).
Worked lines (weights left blank on the slide):
  σ(0*_ + 0*_) = σ(_) = 0
  σ(0*_ + 1*_) = σ(_) = 0
  σ(1*_ + 0*_) = σ(_) = 0
  σ(1*_ + 1*_) = σ(_) = 1
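The blanks can be filled in many ways; one choice that works is weights of 0.5 each with a threshold of 0.75 (these numbers are assumptions, not from the slide):

```python
def step(x, theta):
    # Threshold activation from the slide: 1 if x > theta, else 0.
    return 1 if x > theta else 0

# Assumed weights and threshold: neither input alone (0.5) clears 0.75,
# but both together (1.0) do -- which is exactly AND.
w1, w2, theta = 0.5, 0.5, 0.75
and_outputs = [step(i1 * w1 + i2 * w2, theta) for i1 in (0, 1) for i2 in (0, 1)]
# and_outputs matches the truth table's output column: [0, 0, 0, 1]
```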
Slide 5 — Neural networks as functions
Truth Table (or):
  input1  input2  output
  0       0       0
  0       1       1
  1       0       1
  1       1       1
Activation: σ(x) = 1 if x > _, 0 if x <= _ (threshold left blank on the slide).
Worked lines (weights left blank on the slide):
  σ(0*_ + 0*_) = σ(_) = 0
  σ(0*_ + 1*_) = σ(_) = 1
  σ(1*_ + 0*_) = σ(_) = 1
  σ(1*_ + 1*_) = σ(_) = 1
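A single threshold neuron also computes OR once the threshold is low enough that either input alone clears it (weights and threshold are again illustrative assumptions):

```python
def step(x, theta):
    # Threshold activation from the slide: 1 if x > theta, else 0.
    return 1 if x > theta else 0

# Assumed numbers: each input alone contributes 0.5, which exceeds 0.25,
# so any active input fires the output -- which is exactly OR.
w1, w2, theta = 0.5, 0.5, 0.25
or_outputs = [step(i1 * w1 + i2 * w2, theta) for i1 in (0, 1) for i2 in (0, 1)]
# or_outputs matches the truth table's output column: [0, 1, 1, 1]
```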
Slide 6 — Neural networks as functions
Truth Table (xor):
  input1  input2  output
  0       0       0
  0       1       1
  1       0       1
  1       1       0
Activation: σ(x) = 1 if x > _, 0 if x <= _ (threshold left blank on the slide).
Worked lines (weights left blank on the slide):
  σ(0*_ + 0*_) = σ(_) = 0
  σ(0*_ + 1*_) = σ(_) = 1
  σ(1*_ + 0*_) = σ(_) = 1
  σ(1*_ + 1*_) = σ(_) = 0
Slide 7 — Hidden neurons allow for nonlinear transformations
[Figure: a network with input1 and input2 feeding a hidden layer and then an output neuron, shown for input (0, 1); the weights and thresholds are left blank to fill in. The xor truth table from slide 6 is repeated.]
Slide 8 — Hidden neurons allow for nonlinear transformations
[Figure: the same hidden-layer network, now shown for input (1, 1), again with blanks to fill in; the xor truth table is repeated.]
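One classic way to fill the blanks (an assumption; the slides leave the weights open) is a hidden unit that computes OR, a second that computes AND, and an output that fires for OR-but-not-AND:

```python
def step(x, theta):
    # Threshold activation: 1 if x > theta, else 0.
    return 1 if x > theta else 0

def xor_net(i1, i2):
    h_or  = step(1.0 * i1 + 1.0 * i2, 0.5)      # fires if either input is on
    h_and = step(1.0 * i1 + 1.0 * i2, 1.5)      # fires only if both are on
    return step(1.0 * h_or - 1.0 * h_and, 0.5)  # OR and not AND

xor_outputs = [xor_net(i1, i2) for i1 in (0, 1) for i2 in (0, 1)]
# xor_outputs matches the xor column: [0, 1, 1, 0]
```

No single threshold unit can produce this column, which is why the hidden layer is needed: the two hidden units carve the input space into regions that the output unit can then separate linearly.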
Slide 9 — Backpropagation: Learning synaptic weights through training
[Figure: a two-input, two-output network presented with input (0, 1). The truth table gives target outputs (1, 0) for this input; the network produces (1, 1), so the error is (0, 1).]
Slide 10 — Backpropagation: Learning synaptic weights through training
[Figure: after training, the same network maps input (0, 1) to outputs (1.0, 0.0), matching the target (1, 0); the error is now (0, 0).]
Backpropagation makes a localized change to the synaptic weights, so there is a low chance of increasing the error for another input pattern: "using tweezers on the neural network."
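The "tweezers" idea can be sketched with the delta rule for a single sigmoid output (a simplification: the slide's network has two outputs, and full backpropagation also pushes the gradient through hidden layers; all numbers here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, inputs, target, lr=0.5):
    # Forward pass, then nudge each weight in proportion to its
    # contribution to the error (the delta rule).
    out = sigmoid(sum(w * i for w, i in zip(weights, inputs)))
    delta = (target - out) * out * (1.0 - out)  # gradient through the sigmoid
    return [w + lr * delta * i for w, i in zip(weights, inputs)], out

# Teach the pattern (0, 1) -> 1 starting from small arbitrary weights.
weights = [0.1, -0.2]
for _ in range(1000):
    weights, out = train_step(weights, [0.0, 1.0], 1.0)
```

Note the locality: because input1 is 0, its weight receives no update at all; only the weight actually responsible for the error is adjusted.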
Slide 11 — Overfitting: Failing to learn the proper relationship between input and output
[Figure: a network mapping k symptom inputs to a yes/no disease diagnosis, trained on one patient per row of the table.]
Training data:
             symptom 1 ... symptom k   Disease?
  patient 1     1    0    1            yes
  patient 2     0    0    1            no
  ...
  patient n     1    0    0            no
Slide 12 — Recurrent connections: Adding memory to a neural network
[Figure: the robot's network at time step t (sensors (0, 1), motors (1, 1)) and at time step t+1 (sensors (1, 0), motor1 = ?), with a recurrent connection carrying motor2's value from time step t into the next update.]
The value of motor 1 is a function of the current sensor values and the value of motor 2 from the previous time step.
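A minimal recurrent update can be sketched as follows (the weights and thresholds are invented for illustration; the slide does not give them):

```python
def step(x, theta=0.5):
    # Threshold activation: 1 if x > theta, else 0.
    return 1 if x > theta else 0

def update(sensor1, sensor2, prev_motor2):
    # motor1 depends on the current sensors AND on motor2 from the
    # previous time step; motor2 depends on the sensors alone.
    motor1 = step(0.3 * sensor1 + 0.3 * sensor2 + 0.6 * prev_motor2)
    motor2 = step(0.6 * sensor1 + 0.6 * sensor2)
    return motor1, motor2

# The same sensor reading yields a different motor1 depending on history:
m1_without_memory, _ = update(1, 0, prev_motor2=0)  # motor1 stays off
m1_with_memory, _    = update(1, 0, prev_motor2=1)  # motor1 turns on
```

This is the memory the slide describes: the recurrent link lets the network respond differently to identical sensor input depending on what it did one step earlier.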
Slide 13 — Recurrent connections: Adding memory to a neural network
[Figure: the two-input, two-output network from slide 9, shown again with input (0, 1), target (1, 0), and error (0, 1).]
Slide 14 — Backpropagation: Does not work for recurrent connections
[Figure: the same network with input (0, 1), target (1, 0), and error (0, 1).]
With recurrent connections, a weight update becomes a global change to the synaptic weights, so there is a good chance of increasing the error for another input pattern: "using a sledge hammer on the neural network."
Slide 15 — Supervised Learning: The correct output for each input pattern is known.
  Input1 ... Inputn   Output1 ... Outputm
  1      ... 0        1       ... 1
  0      ... 1        0       ... 1
  1      ... 1        0       ... 0
Slide 16 — Unsupervised Learning: A reward signal is returned for a series of input patterns.
  Sensor1 ... Sensorn   Motor1 ... Motorm
  1       ... 0         ...
  0       ... 1         ...
  1       ... 1         ...
  Reward — Distance: 6.3 meters