Lecture 3: Introduction to Neural Networks and Fuzzy Logic
Dr.-Ing. Erwin Sitompul, President University

Slide 3/2 (Single Layer Perceptrons): Derivation of a Learning Rule for Perceptrons
Adaline (Adaptive Linear Element), Widrow [1962].
[Figure: a single linear neuron with inputs x1, ..., xm, weights wk1, ..., wkm, and a summing junction producing the output yk.]
Goal: adjust the weights wkj so that the neuron output yk = Σj wkj xj matches the desired output dk as closely as possible.

Slide 3/3 (Single Layer Perceptrons): Least Mean Squares (LMS)
The following cost function (error function) should be minimized (the standard sum-of-squared-errors form):
E(w) = ½ Σi (di − yi)², with yi = Σj wj xij,
where i is the index of the data set (the i-th data pair) and j is the index of the input (the j-th input).
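A minimal MATLAB sketch of evaluating this cost for a linear neuron; the data values, weights, and variable names are illustrative choices of mine, not from the slide.

% Sum-of-squared-errors (LMS) cost for a linear neuron y = x*w.
% Rows of x are the data pairs (index i), columns are the inputs (index j).
x = [2 3; 1 1];              % example training inputs (assumed values)
d = [5; 2];                  % desired outputs
w = [1; 1.5];                % current weight vector

y = x * w;                   % neuron outputs for all data pairs
E = 0.5 * sum((d - y).^2);   % cost to be minimized
fprintf('E = %.4f\n', E);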

Slide 3/4 (Single Layer Perceptrons): Adaline Learning Rule
With the linear output yi = Σj wj xij, the gradient of the cost is ∂E/∂wj = −Σi (di − yi) xij.
As already obtained before, gradient descent gives the Weight Modification Rule Δwj = −η ∂E/∂wj.
Defining the error ei = di − yi, we can write Δwj = η Σi ei xij.
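A sketch of one weight-modification step under this rule; the learning rate and data below are placeholders of my own choosing.

% One Adaline (delta-rule) update: dw_j = eta * sum_i e_i * x_ij.
eta = 0.01;                  % learning rate (assumed value)
x   = [2 3; 1 1];            % training inputs, one row per data pair
d   = [5; 2];                % desired outputs
w   = [1; 1.5];              % current weights

e  = d - x * w;              % errors e_i = d_i - y_i
dw = eta * (x' * e);         % weight modification, summed over the data set
w  = w + dw;                 % updated weights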

Slide 3/5 (Single Layer Perceptrons): Adaline Learning Modes
Batch Learning Mode: the weights are updated once per pass through the whole data set, using the error accumulated over all data pairs.
Incremental Learning Mode: the weights are updated immediately after each individual data pair is presented.
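The two modes differ only in when the update is applied. A rough MATLAB sketch of both loop structures, with data, learning rate, and epoch count chosen by me for illustration:

% Same toy data as before; two ways to organize Adaline training.
x = [2 3; 1 1];  d = [5; 2];  eta = 0.01;  nEpochs = 100;

% Batch mode: one accumulated update per pass over the whole data set.
w = [1; 1.5];
for epoch = 1:nEpochs
    e = d - x * w;                        % errors of all data pairs
    w = w + eta * (x' * e);               % single update per epoch
end

% Incremental (online) mode: update after each individual data pair.
w = [1; 1.5];
for epoch = 1:nEpochs
    for i = 1:size(x, 1)
        e_i = d(i) - x(i, :) * w;         % error of the i-th pair
        w   = w + eta * e_i * x(i, :)';   % immediate update
    end
end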

Slide 3/6 (Single Layer Perceptrons): Tangent Sigmoid Activation Function
[Figure: the same single neuron with inputs x1, ..., xm and weights wk1, ..., wkm, now with a tangent sigmoid (tanh-shaped) activation function whose output lies in (−1, 1).]
Goal: as before, minimize the squared error between the neuron output and the desired output, now with the nonlinear activation applied to the weighted sum.

Slide 3/7 (Single Layer Perceptrons): Logarithmic Sigmoid Activation Function
[Figure: the same single neuron, now with a logarithmic sigmoid activation function whose output lies in (0, 1).]
Goal: again, minimize the squared error between the neuron output and the desired output.
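For reference, common MATLAB definitions of the two sigmoids. They are shown here with unit slope; if the course uses a slope parameter a, it would multiply the net input.

% Tangent sigmoid: output in (-1, 1); logarithmic sigmoid: output in (0, 1).
tansig_f = @(net) tanh(net);                 % tangent sigmoid
logsig_f = @(net) 1 ./ (1 + exp(-net));      % logarithmic sigmoid

net = -3:0.5:3;                              % sample net inputs
disp([net' tansig_f(net)' logsig_f(net)']);  % compare the two shapes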

Slide 3/8 (Single Layer Perceptrons): Derivation of Learning Rules
For an arbitrary (differentiable) activation function f, the same gradient-descent derivation applies, with the neuron output now yi = f(neti), where neti = Σj wj xij.

Slide 3/9 (Single Layer Perceptrons): Derivation of Learning Rules
The resulting weight update contains the derivative of the activation function:
Δwj = η Σi (di − yi) f′(neti) xij.
The factor f′(neti) depends on the activation function used.

Slide 3/10 (Single Layer Perceptrons): Derivation of Learning Rules
Linear function: f(net) = a·net, so f′(net) = a.
Tangent sigmoid function: f′(net) is proportional to 1 − f(net)².
Logarithmic sigmoid function: f′(net) is proportional to f(net)(1 − f(net)).

Slide 3/11 (Single Layer Perceptrons): Derivation of Learning Rules (continued)
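Putting slides 3/8 to 3/11 together, a sketch of the generalized delta rule in which the activation function and its derivative are passed as function handles; the data set, initial weights, and learning rate are my own illustrative values.

% Generalized delta rule: dw_j = eta * sum_i (d_i - y_i) * f'(net_i) * x_ij.
f      = @(net) 1 ./ (1 + exp(-net));          % logarithmic sigmoid (example choice)
fprime = @(net) f(net) .* (1 - f(net));        % its derivative

x = [0.2 0.3; 0.1 0.1];  d = [0.9; 0.2];       % assumed small data set
w = [0.5; -0.5];  eta = 0.5;

for epoch = 1:200
    net = x * w;                               % net inputs of all data pairs
    y   = f(net);                              % neuron outputs
    e   = d - y;                               % errors
    w   = w + eta * (x' * (e .* fprime(net))); % derivative factor enters here
end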

Slide 3/12 (Single Layer Perceptrons): Homework 3
[Figure: a neuron with inputs x1, x2, weights w11, w12, and a summing junction producing the output y1.]
Given a neuron with a linear activation function (a = 0.5), write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] matches the output y1 as well as possible.
Odd-numbered Student ID, Case 1: [x1; x2] = [2; 3], [y1] = [5].
Even-numbered Student ID, Case 2: [x1; x2] = [[2 1]; [3 1]], [y1] = [5 2].
Use the initial values w11 = 1 and w12 = 1.5 and the specified learning rate η. Determine the required number of iterations.
Note: Submit the m-file in hardcopy and softcopy.
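A minimal sketch of the kind of m-file requested, using the Case 1 data, the linear activation y = a·net with a = 0.5, and a learning rate and stopping tolerance that I assume for illustration (use the values given in class).

% Homework 3 sketch: train w11, w12 so that a*(w11*x1 + w12*x2) matches y1.
a   = 0.5;                             % slope of the linear activation (from the slide)
x   = [2; 3];                          % Case 1 input [x1; x2]
d   = 5;                               % desired output y1
w   = [1; 1.5];                        % initial weights [w11; w12]
eta = 0.01;                            % learning rate (assumed value)

for iter = 1:10000
    y = a * (w' * x);                  % neuron output
    e = d - y;                         % error
    w = w + eta * e * a * x;           % delta rule with f'(net) = a
    if abs(e) < 1e-6, break; end       % stopping criterion (my own choice)
end
fprintf('%d iterations: w11 = %.4f, w12 = %.4f\n', iter, w(1), w(2));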

Slide 3/13 (Single Layer Perceptrons): Homework 3A
[Figure: the same neuron with inputs x1, x2, weights w11, w12, output y1, and an unspecified activation function.]
[x1] = [ ], [x2] = [ ], [y1] = [ ] (the data values are given on the slide).
Given a neuron with a certain activation function, write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] matches the output y1 as well as possible.
Even Student ID: linear function. Odd Student ID: logarithmic sigmoid function.
Use the initial values w11 = 0.5 and w12 = −0.5 and the specified learning rate η. Determine the required number of iterations.
Note: Submit the m-file in hardcopy and softcopy.

Slide 3/14 (Multi Layer Perceptrons): MLP Architecture
[Figure: a feedforward network with inputs x1, x2, x3 entering the input layer, one or more hidden layers, and outputs y1, y2 leaving the output layer; the weights between successive layers are denoted wji, wkj, and wlk.]
An MLP possesses sigmoid activation functions in its neurons to enable the modeling of nonlinearity.
It contains one or more "hidden layers".
It is trained using the "Backpropagation" algorithm.
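A rough forward-pass sketch of such a network. The layer sizes, random weights, and input values are mine; biases are omitted to keep the sketch short.

% Forward pass of a 3-input MLP with two hidden layers and two outputs.
f = @(net) 1 ./ (1 + exp(-net));    % sigmoid activation in every neuron

x    = [1; 0.5; -1];                % inputs x1..x3 (assumed values)
W_ji = randn(4, 3);                 % input layer -> hidden layer 1 (4 neurons)
W_kj = randn(3, 4);                 % hidden 1    -> hidden layer 2 (3 neurons)
W_lk = randn(2, 3);                 % hidden 2    -> output layer   (2 neurons)

y_j = f(W_ji * x);                  % first hidden layer outputs
y_k = f(W_kj * y_j);                % second hidden layer outputs
y_l = f(W_lk * y_k);                % network outputs y1, y2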

Slide 3/15 (Multi Layer Perceptrons): MLP Design Considerations
What activation functions should be used?
How many inputs does the network need?
How many hidden layers does the network need?
How many hidden neurons per hidden layer?
How many outputs should the network have?
There is no standard methodology for determining these values. Even though some heuristic guidelines exist, the final values are determined by a trial-and-error procedure.

Slide 3/16 (Multi Layer Perceptrons): Advantages of MLP
[Figure: the same multilayer network with inputs x1, x2, x3 and weights wji, wkj, wlk.]
An MLP with one hidden layer is a universal approximator: it can approximate any function to within any preset accuracy, provided that the weights and biases are appropriately assigned through the use of an adequate learning algorithm.
An MLP can be applied directly to the identification and control of dynamic systems with a nonlinear relationship between input and output.
An MLP delivers the best compromise between the number of parameters, structural complexity, and computational cost.

Slide 3/17 (Multi Layer Perceptrons): Learning Algorithm of MLP
[Figure: signal flow through a neuron f(.); the function signal propagates forward, while the error signal propagates backward.]
Computations at each neuron j:
the neuron output yj (forward propagation), and
the vector of error gradients ∂E/∂wji (backward propagation).
This is the "Backpropagation Learning Algorithm".

Slide 3/18 (Multi Layer Perceptrons): Backpropagation Learning Algorithm
[Figure: output neuron j with incoming signal yi(n), weight wji(n), net input netj(n), activation f(.), output yj(n), desired output dj(n), and error ej(n).]
If node j is an output node, its error signal and local gradient are computed directly (standard backpropagation form):
ej(n) = dj(n) − yj(n),
δj(n) = ej(n) f′(netj(n)),
and the weights are updated by Δwji(n) = η δj(n) yi(n).
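In MATLAB terms, the output-node computation might look like the sketch below; the numeric values and learning rate are placeholders, and the logistic sigmoid is just an example activation.

% Local gradient and weight update for an output neuron j.
f      = @(net) 1 ./ (1 + exp(-net));
fprime = @(net) f(net) .* (1 - f(net));

y_i  = [0.2; 0.7; 0.1];                 % outputs of the preceding layer (assumed)
w_ji = [0.5 -0.3 0.8];                  % weights into neuron j
eta  = 0.1;  d_j = 1.0;                 % learning rate and desired output (assumed)

net_j   = w_ji * y_i;                   % net input of neuron j
y_j     = f(net_j);                     % neuron output
e_j     = d_j - y_j;                    % error signal
delta_j = e_j * fprime(net_j);          % local gradient of an output node
w_ji    = w_ji + eta * delta_j * y_i';  % weight update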

Slide 3/19 (Multi Layer Perceptrons): Backpropagation Learning Algorithm
[Figure: hidden neuron j feeding an output neuron k; signals yi(n), wji(n), netj(n), yj(n), wkj(n), netk(n), yk(n), dk(n), ek(n).]
If node j is a hidden node, its local gradient is obtained by propagating the local gradients of the next layer backward (standard backpropagation form):
δj(n) = f′(netj(n)) Σk δk(n) wkj(n),
with the same weight update Δwji(n) = η δj(n) yi(n).
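The corresponding hidden-node computation, where the error signal is replaced by the back-propagated sum over the next layer; again all numbers are placeholders.

% Local gradient for a hidden neuron j, given the deltas of the next layer k.
f      = @(net) 1 ./ (1 + exp(-net));
fprime = @(net) f(net) .* (1 - f(net));

net_j   = 0.4;                                 % net input of hidden neuron j (assumed)
delta_k = [0.05; -0.02];                       % local gradients of the next layer
w_kj    = [0.7; -0.4];                         % weights from neuron j into layer k

delta_j = fprime(net_j) * (w_kj' * delta_k);   % back-propagated local gradient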

Slide 3/20 (Multi Layer Perceptrons): MLP Training
Forward pass: fix the weights wji(n) and compute the neuron outputs yj(n), layer by layer from left (input, i) to right (output, k).
Backward pass: calculate the local gradients δj(n) and update the weights to wji(n+1), layer by layer from right (output, k) back to left (input, i).
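Putting the two passes together, a compact sketch of backpropagation training on a tiny 2-2-1 network learning XOR. The architecture, data, learning rate, and epoch count are my own illustration, and a given random initialization may occasionally need to be rerun.

% Backpropagation training of a 2-2-1 MLP on the XOR problem.
f      = @(net) 1 ./ (1 + exp(-net));        % logistic sigmoid
fprime = @(y) y .* (1 - y);                  % derivative written in terms of the output

X = [0 0; 0 1; 1 0; 1 1];   D = [0; 1; 1; 0];
W1 = randn(2, 3);   W2 = randn(1, 3);        % weights, last column acts as bias
eta = 0.5;

for epoch = 1:20000
    for n = 1:size(X, 1)
        % Forward pass: fix the weights, compute the outputs.
        x1 = [X(n, :)'; 1];                  % input plus bias term
        y1 = f(W1 * x1);                     % hidden layer outputs
        x2 = [y1; 1];
        y2 = f(W2 * x2);                     % network output

        % Backward pass: compute the local gradients, update the weights.
        e      = D(n) - y2;                               % output error
        delta2 = e .* fprime(y2);                         % output-node gradient
        delta1 = fprime(y1) .* (W2(:, 1:2)' * delta2);    % hidden-node gradients
        W2 = W2 + eta * delta2 * x2';
        W1 = W1 + eta * delta1 * x1';
    end
end
disp(f(W2 * [f(W1 * [X ones(4,1)]'); ones(1,4)])');       % outputs after training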