Computing the Sensitivity of a Layered Perceptron
R.J. Marks II
August 31, 2002

Formula*

[Figure: a layered perceptron. Rows of neurons are numbered 0 (input) through L (output); row l has N_l neurons. Neuron i in row l receives weight w_ij(l) from neuron j in row l-1; u_i(l) is its activation and a_i(l) = f(u_i(l)) its state. W(l) = [w_ij(l)] is the weight matrix, and a(0) is the initial condition.]

*J.N. Hwang, J.J. Choi, S. Oh & R.J. Marks II, IEEE Transactions on Neural Networks, vol. 2, no. 1, 1991.

The Sigmoid
If f(u) = 1/(1 + e^{-u}), then f'(u) = f(u)(1 - f(u)).

Kronecker Matrix Products (element-wise)
Vectors: for x = [x_m] and b = [b_m], the vector y = x*b = [x_m b_m].
Matrices: for X = [x_1, x_2, ..., x_J], the matrix Y = X*b = [x_1*b, x_2*b, ..., x_J*b].

Vector Reciprocals
c = 1/b has elements [c_m] = [1/b_m].
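As a concrete illustration of these conventions, here is a minimal NumPy sketch (the names f, fprime, x, b, X are mine, not the slide's; the column-wise product X*b falls out of broadcasting):

    import numpy as np

    def f(u):
        # Sigmoid f(u) = 1 / (1 + exp(-u)), applied element-wise.
        return 1.0 / (1.0 + np.exp(-u))

    def fprime(u):
        # Sigmoid derivative via the identity f'(u) = f(u) (1 - f(u)).
        s = f(u)
        return s * (1.0 - s)

    x = np.array([1.0, 2.0, 3.0])
    b = np.array([0.5, 0.25, 0.125])

    y = x * b                            # vector * vector: [x_m b_m]
    c = 1.0 / b                          # vector reciprocal: [1 / b_m]

    X = np.column_stack([x, 2.0 * x])    # columns x_1, x_2
    Y = X * b[:, None]                   # matrix * vector: each column times b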

1. Sensitivity: Initialization

Neural network parameters:
- Number of layers, L.
- Number of neurons in each row: N_0, N_1, ..., N_l, ..., N_L. In addition, each row, except the output, has a bias node.
- Neural network weights {W(l); 1 <= l <= L}: L matrices, where W(l) = [w_ij(l)] maps the N_{l-1} + 1 states of row l-1 (bias included) to row l; with the bias slot carried along it is an (N_l + 1) x (N_{l-1} + 1) matrix.

Input operating point vector, with a_0(0) = bias:
a(0) = [a_0(0), a_1(0), a_2(0), ..., a_m(0), ..., a_{N_0}(0)]^T
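A minimal setup sketch in NumPy, assuming illustrative layer sizes and random placeholder weights. One deliberate deviation to flag: rather than carrying bias slots inside W(l) as the slide does, each W(l) below is N_l x (N_{l-1} + 1) and the constant 1 is prepended to the state vector explicitly; the two bookkeepings are equivalent.

    import numpy as np

    rng = np.random.default_rng(0)

    L = 2              # number of weight layers; rows run 0 (input) .. L (output)
    N = [3, 4, 2]      # N_0, N_1, ..., N_L neurons per row (bias nodes excluded)

    # W[l] maps row l-1 (with its bias slot prepended) to row l.
    W = [None] + [rng.standard_normal((N[l], N[l - 1] + 1))
                  for l in range(1, L + 1)]

    x0 = rng.standard_normal(N[0])        # input operating point
    a0 = np.concatenate(([1.0], x0))      # a_0(0) = 1 is the bias component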

2. Sensitivity: The Forward Step

For the given operating point, compute the activations and states of all neurons. All operations in the pseudocode are matrix-vector operations.

For l = 0 : L-1
    u(l+1) = W(l+1) a(l)
    a(l+1) = f( u(l+1) )   ; f exposes each element of the vector to the sigmoid nonlinearity
End
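A runnable sketch of this loop, repeating the setup from the previous block (the bias slot is re-prepended after each sigmoid except at the output row, per the bookkeeping chosen above):

    import numpy as np

    f = lambda u: 1.0 / (1.0 + np.exp(-u))

    rng = np.random.default_rng(0)
    L, N = 2, [3, 4, 2]
    W = [None] + [rng.standard_normal((N[l], N[l - 1] + 1))
                  for l in range(1, L + 1)]

    a = [np.concatenate(([1.0], rng.standard_normal(N[0])))]  # a(0), bias first
    u = [None]                                                # u(0) is unused
    for l in range(L):                    # "For l = 0 : L-1"
        u.append(W[l + 1] @ a[l])         # u(l+1) = W(l+1) a(l)
        s = f(u[l + 1])                   # sigmoid applied element-wise
        # Rows below the output keep a bias slot; the output row does not.
        a.append(s if l + 1 == L else np.concatenate(([1.0], s)))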

3. Sensitivity: The Backward Step

Generate sensitivities from the output to the input, starting with δ(L) = I = identity matrix. In general, δ(l) = [δ_ij(l)].

For l = L-1 : -1 : 0
    δ(l) = δ(l+1) [ W(l+1) * f'( u(l+1) ) ]
End

If f is a sigmoid, f'(u(l+1)) = a(l+1) * (ones - a(l+1)), so:

For l = L-1 : -1 : 0
    δ(l) = δ(l+1) { W(l+1) * [ a(l+1) * (ones - a(l+1)) ] }
End
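Below is a self-contained sketch of the whole procedure, ending with this backward recursion. Two details are my bookkeeping rather than the slide's: the forward pass is wrapped in a helper so it can be re-run, and the column of each row-scaled W(l+1) corresponding to the bias slot is dropped, since the bias is constant and carries no sensitivity. A finite-difference spot check of one entry of δ(0) closes the sketch.

    import numpy as np

    f = lambda u: 1.0 / (1.0 + np.exp(-u))

    def forward(W, x, L):
        # Forward step; a(l) carries a leading bias 1 for every row l < L.
        a = [np.concatenate(([1.0], x))]
        for l in range(L):
            s = f(W[l + 1] @ a[l])
            a.append(s if l + 1 == L else np.concatenate(([1.0], s)))
        return a

    rng = np.random.default_rng(0)
    L, N = 2, [3, 4, 2]
    W = [None] + [rng.standard_normal((N[l], N[l - 1] + 1))
                  for l in range(1, L + 1)]
    x = rng.standard_normal(N[0])
    a = forward(W, x, L)

    # Backward step: delta(l) = delta(l+1) [ W(l+1) * f'(u(l+1)) ], using the
    # sigmoid identity f'(u(l+1)) = a(l+1) (1 - a(l+1)) on the non-bias states.
    delta = [None] * (L + 1)
    delta[L] = np.eye(N[L])                           # delta(L) = I
    for l in range(L - 1, -1, -1):
        s = a[l + 1][1:] if l + 1 < L else a[l + 1]   # states minus the bias slot
        J = W[l + 1] * (s * (1.0 - s))[:, None]       # W(l+1) * f'(u(l+1))
        delta[l] = delta[l + 1] @ J[:, 1:]            # drop the constant-bias column

    # Spot check: delta[0][k, i] should approximate da_k(L)/da_i(0).
    k, i, eps = 0, 1, 1e-6
    xp = x.copy(); xp[i] += eps
    numeric = (forward(W, xp, L)[L][k] - a[L][k]) / eps
    print(delta[0][k, i], numeric)   # the two values should agree closely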

The Final Sensitivity Matrix

The ki-th element of the matrix δ(0) is the partial-derivative sensitivity

    δ_ki(0) = ∂a_k(L) / ∂a_i(0),

the change in output k per unit change in input i at the operating point.

The Final Sensitivity Matrices

The percentage sensitivity is

    S_ki = ∂ ln a_k(L) / ∂ ln a_i(0) = [a_i(0) / a_k(L)] δ_ki(0).

A matrix of these values is

    S = δ(0) * { [1/a(L)] [a(0)]^T },

since the outer product [1/a(L)] [a(0)]^T has ki-th element a_i(0)/a_k(L). The alternative ordering, δ(0) * { [a(0)] [1/a(L)]^T }, does not conform: it is N_0 x N_L while δ(0) is N_L x N_0.
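A short sketch of the resolved formula, with placeholder arrays standing in for δ(0), a(0) (bias slot excluded), and a(L) from the earlier sketches:

    import numpy as np

    rng = np.random.default_rng(1)
    N0, NL = 3, 2
    delta0 = rng.standard_normal((NL, N0))   # stands in for delta(0)
    a0 = rng.uniform(0.1, 1.0, N0)           # stands in for a(0), bias excluded
    aL = rng.uniform(0.1, 1.0, NL)           # stands in for a(L)

    # The outer product [1/a(L)] [a(0)]^T has ki-th element a_i(0)/a_k(L), so
    # S[k, i] is the percentage sensitivity d ln a_k(L) / d ln a_i(0).
    S = delta0 * np.outer(1.0 / aL, a0)
    print(S.shape)   # (N_L, N_0); the transposed variant would not even conform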