3. Feedforward Nets (1) Architecture

(1) Architecture (Ref. Paul Werbos, "Backpropagation Through Time," Proc. IEEE, Oct. 1990)

[Figure: nodes are ordered 1 through N+n. Nodes 1..m hold the inputs X_1..X_m, nodes m+1..N are the ordered hidden nodes, and nodes N+1..N+n are the ordered output nodes producing Y_1..Y_n. Each node may receive a connection from any earlier node in the ordering.]
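To make the ordering concrete, here is a minimal sketch of the forward pass in Python. It is an illustration under my own assumptions (a dense lower-triangular weight matrix w, tanh nodes, no bias terms), not code from the slides:

```python
import numpy as np

def forward(w, x_in, f=np.tanh):
    """Forward pass through an ordered feedforward net.

    Nodes are numbered 0..total-1 (the slides use 1..N+n):
    the first m nodes hold the inputs X_1..X_m, then come the
    hidden nodes, then the n output nodes. w[i, j] with j < i
    is the weight from node j to node i, so any earlier node
    may feed any later one -- the MLP is a special case.
    """
    m = len(x_in)
    total = w.shape[0]
    x, net = np.zeros(total), np.zeros(total)
    x[:m] = x_in                       # input nodes are clamped
    for i in range(m, total):          # visit nodes in their fixed order
        net[i] = w[i, :i] @ x[:i]      # fan-in only from earlier nodes
        x[i] = f(net[i])
    return x, net
```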

(2) Examples (pseudo-layer form): the following diagrams show only the extra connections that are not present in a layered architecture, such as direct input-to-output connections.

(3) Learning Rule

Let F_net_i = dE/dnet_i denote the ordered derivative of the error E with respect to the net input of node i (the error signal δ_i).

For output nodes i:  F_net_i = f'(net_i) · (x_i − d_i),  where d_i is the desired output.
For hidden nodes i:  F_net_i = f'(net_i) · Σ_{k>i} w_ki · F_net_k,  summing over all later nodes k fed by node i.
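A matching sketch of the error back-propagation, under the same assumptions as the forward pass above (squared error E = ½ Σ (x_i − d_i)², learning rate eta, and my own function names). In the general net an output node may also feed later nodes, so its error signal combines both terms:

```python
import numpy as np

def backward(w, x, net, d, m, n,
             f_prime=lambda u: 1.0 - np.tanh(u) ** 2):
    """Compute the error signals F_net_i (= delta_i) in reverse node order."""
    total = w.shape[0]
    delta = np.zeros(total)
    for i in range(total - 1, m - 1, -1):
        err = w[i + 1:, i] @ delta[i + 1:]   # back-propagated from later nodes
        if i >= total - n:                   # output node: add its own error term
            err += x[i] - d[i - (total - n)]
        delta[i] = f_prime(net[i]) * err
    return delta

def update(w, x, delta, eta=0.1):
    """Gradient-descent step: dw_ij = -eta * F_net_i * x_j for every j < i."""
    for i in range(w.shape[0]):
        w[i, :i] -= eta * delta[i] * x[:i]
    return w
```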

(4) Example

[Figure: a small ordered net with four nodes. Node 1 holds the input X, nodes 2 and 3 are hidden, and node 4 produces the output Y_1; the node activations are x_1..x_4, and the error signals δ_2, δ_3, δ_4 propagate back through weights such as w_12.]
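Assuming the forward/backward/update sketches above, the four-node example could be exercised like this (the random weights and the target value 0.5 are made-up numbers, purely illustrative):

```python
import numpy as np

# Node 1 is the input, nodes 2-3 are hidden, node 4 is the output
# (m = 1, n = 1). A lower-triangular matrix allows every connection
# from an earlier node to a later one, including input-to-output.
rng = np.random.default_rng(0)
w = np.tril(rng.normal(scale=0.5, size=(4, 4)), k=-1)

x, net = forward(w, x_in=np.array([1.0]))
delta = backward(w, x, net, d=np.array([0.5]), m=1, n=1)
w = update(w, x, delta)
print("Y1 =", x[3], " error signals delta_2..delta_4 =", delta[1:])
```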

Section 3.2, 3.3 Summary:

- Using matrices and vectors, a more compact description of the backpropagation equations is possible, which makes BackProp easier to understand.
- The MLP can be generalized to the feedforward (FF) net, which allows new connections: input to output, hidden to hidden, and output to output. The result is a more complex architecture with more connections and more flexibility; this added flexibility can improve accuracy at the cost of complexity.
- Since the general FF net includes the MLP as a special case, it can always match or exceed the MLP's learning (training) accuracy. It has also been reported to stabilize learning and to be more robust to local minima (AI Expert Magazine, July 1991).
- The learning rule for the general FF net is identical in form to MLP backpropagation: the error signals are back-propagated through the weights and the nonlinear nodes. Although we did not derive it here, it follows directly from the MLP BP derivation.

Keywords: feedforward net, ordered nodes, learning rule, chain rule, feedback.
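As an illustration of the summary's first point, here is one backprop step for an ordinary one-hidden-layer MLP written entirely in matrix-vector form; the symbols W1, W2, delta_out, delta_hid follow common textbook convention rather than anything defined on these slides:

```python
import numpy as np

def mlp_bp_step(W1, W2, x, d, eta=0.1):
    """One matrix-form gradient step for a one-hidden-layer tanh MLP."""
    h = np.tanh(W1 @ x)                            # hidden activations
    y = np.tanh(W2 @ h)                            # network outputs
    delta_out = (1 - y ** 2) * (y - d)             # output error signals
    delta_hid = (1 - h ** 2) * (W2.T @ delta_out)  # errors pushed back through W2
    W2 -= eta * np.outer(delta_out, h)             # each update is one outer product
    W1 -= eta * np.outer(delta_hid, x)
    return W1, W2, y
```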

Student Questions:

1. A general feedforward net, or an MLP with more complexity, is more flexible, so it can approximate a more complex function. But it may generalize poorly by overfitting to the training data. There should be a way to compromise between model complexity and test-data accuracy.
2. The FF net has no feedback in its architecture. But isn't weight learning another kind of feedback process?
3. The FF net is more flexible than an MLP. How much more so?
4. How can the weight scale affect the output result?
5. Can the input variables have different dimensions?

6. The FF net has connections between the hidden nodes, too. Since the weight change for the earlier nodes is the result of accumulating the back-propagated error signals, can we say that the FF net will learn in fewer iterations than an MLP?
7. What are the advantages and disadvantages of the FF net and the MLP?
8. Which is more popular in use? Or does it depend on the specific application?