Modelleerimine ja Juhtimine Tehisnärvivõrgudega / Identification and Control with Artificial Neural Networks (2nd part)
Eduard Petlenkov, Automaatikainstituut, 2015

Supervised learning

Supervised learning is a training algorithm based on known input-output data:
- X is an input vector;
- Yp is a vector of reference outputs corresponding to the input vector X;
- Y is a vector of the NN's outputs corresponding to the input vector X;
- NN is the mathematical function of the network, Y = NN(X).
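To make the notation concrete, here is a minimal MATLAB sketch, assuming net is a network object that has already been created (for example with newff, as in the MATLAB example at the end of these slides):

X  = [1; -2]       % an input vector
Yp = 0.7           % reference (desired) output corresponding to X
Y  = sim(net, X)   % the network's output for X, i.e. Y = NN(X)
e  = Yp - Y        % training aims to drive this error towards zero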

Training (2)

The network training procedure consists of the following three steps:
1. Computation of the NN's outputs corresponding to the current NN parameters;
2. Computation of the NN's error;
3. Updating the NN's parameters using the selected training algorithm in order to minimize the error.
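As a sketch, the three steps form the following loop; nn_forward and nn_update are hypothetical placeholders for the computations detailed on the next slides:

for epoch = 1:100
    Y = nn_forward(net, X);      % 1. outputs for the current parameters
    E = Yp - Y;                  % 2. the network's error
    net = nn_update(net, X, E);  % 3. update the parameters to reduce the error
end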

Gradient descent error backpropagation (1)

Error function: $e(t) = y_p(t) - y(t)$, where
- $y(t)$ is the NN's output;
- $x(t)$ are the inputs used for training;
- $NN$ is the NN's function, $y(t) = NN(x(t))$;
- $y_p(t)$ are the etalons (references) for the outputs, i.e. the desired outputs.

Function to be minimized: $E = \frac{1}{2}\sum_t e(t)^2$
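In MATLAB, with Yp and Y as row vectors of the desired and the actual outputs over the training data, the minimized function is one line:

E = 0.5*sum((Yp - Y).^2)   % sum of squared errors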

Gradient descent error backpropagation (2)

[Slide equations: the gradient descent update rule.] Each parameter $w$ of the network is adjusted in the direction opposite to the gradient of the error function:

$w(t+1) = w(t) - \eta \frac{\partial E}{\partial w(t)}$,

where $\eta > 0$ is the learning rate.

Global minimum problem

[Slide figure: the error surface with local and global minima.] Because gradient descent follows the local slope of the error function, it can converge to a local minimum instead of the global one; the training result therefore depends on the initial values of the weights.

Training of a double-layer perceptron using gradient descent error backpropagation (1)

- $i$ is the number of the input; $j$ is the number of the neuron in the hidden layer; $k$ is the number of the output neuron;
- $x_i(t)$: the NN's inputs at time instance $t$;
- $b^1_j(t)$: values of the hidden layer biases at time instance $t$;
- $w^1_{ji}(t)$: values of the hidden layer weighting coefficients at time instance $t$;
- $f^1$: activation function of the hidden layer neurons;
- $z_j(t) = f^1\big(\sum_i w^1_{ji}(t)\, x_i(t) + b^1_j(t)\big)$: outputs of the hidden layer neurons at time instance $t$;
- $w^2_{kj}(t)$: values of the output layer weighting coefficients at time instance $t$;
- $b^2_k(t)$: values of the output layer biases at time instance $t$;
- $f^2$: activation function of the output layer neurons;
- $y_k(t) = f^2\big(\sum_j w^2_{kj}(t)\, z_j(t) + b^2_k(t)\big)$: outputs of the network at time instance $t$.

Training of a double-layer perceptron using gradient descent error backpropagation (2)

Error: $e_k(t) = y_{p,k}(t) - y_k(t)$

Gradients:
$\frac{\partial E}{\partial w^2_{kj}} = -\delta^2_k(t)\, z_j(t)$;  $\frac{\partial E}{\partial b^2_k} = -\delta^2_k(t)$;
$\frac{\partial E}{\partial w^1_{ji}} = -\delta^1_j(t)\, x_i(t)$;  $\frac{\partial E}{\partial b^1_j} = -\delta^1_j(t)$.

Signals distributing information about the error from the output layer to the previous layers (which is why the method is called error backpropagation):
$\delta^2_k(t) = f^{2\prime}\, e_k(t)$;  $\delta^1_j(t) = f^{1\prime} \sum_k w^2_{kj}(t)\, \delta^2_k(t)$,
where $f^{1\prime}$ and $f^{2\prime}$ are the derivatives of the corresponding layers' activation functions.

Training of a double-layer perceptron using gradient descent error backpropagation (3)

Updated weighting coefficients:
$w^2_{kj}(t+1) = w^2_{kj}(t) + \eta\, \delta^2_k(t)\, z_j(t)$;  $b^2_k(t+1) = b^2_k(t) + \eta\, \delta^2_k(t)$;
$w^1_{ji}(t+1) = w^1_{ji}(t) + \eta\, \delta^1_j(t)\, x_i(t)$;  $b^1_j(t+1) = b^1_j(t) + \eta\, \delta^1_j(t)$,
where $\eta$ is the learning rate.
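The following self-contained MATLAB sketch carries out one complete training iteration of a double-layer perceptron using the formulas above. It assumes a tansig (tanh) hidden layer and a purelin (linear) output layer; all sizes and values are illustrative:

eta = 0.1;                           % learning rate
x   = [0.5; -1.2];  yp = 0.7;        % one training pair (2 inputs, 1 output)
W1  = randn(3,2);  b1 = randn(3,1);  % hidden layer: 3 neurons
W2  = randn(1,3);  b2 = randn(1,1);  % output layer: 1 neuron
z  = tanh(W1*x + b1);                % hidden layer outputs z_j(t)
y  = W2*z + b2;                      % network output y_k(t), linear output layer
e  = yp - y;                         % error e_k(t)
d2 = e;                              % output layer delta (purelin derivative = 1)
d1 = (W2.'*d2).*(1 - z.^2);          % hidden layer delta (tansig derivative = 1 - z.^2)
W2 = W2 + eta*d2*z.';  b2 = b2 + eta*d2;  % update the output layer
W1 = W1 + eta*d1*x.';  b1 = b1 + eta*d1;  % update the hidden layer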

Gradient Descent (GD) method vs Levenberg-Marquardt (LM) method

- $g = \frac{\partial E}{\partial w}$: gradient;
- $H$: Hessian (in practice LM uses the Gauss-Newton approximation $H \approx J^T J$, where $J$ is the Jacobian of the error vector).

GD: $\Delta w = -\eta\, g$
LM: $\Delta w = -(H + \mu I)^{-1} g$
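A hedged MATLAB sketch of the two update rules, assuming the error vector e and its Jacobian J with respect to the parameter vector have already been computed (the sizes below are placeholders; in the toolbox, traingd and trainlm implement these updates internally):

eta = 0.1;  mu = 0.01;             % learning rate and LM damping factor
J = randn(20,5);  e = randn(20,1); % placeholder Jacobian and error vector
g = J.'*e;                         % gradient of E = 0.5*(e.'*e)
dw_gd = -eta*g;                    % gradient descent step
H = J.'*J;                         % Gauss-Newton approximation of the Hessian
dw_lm = -(H + mu*eye(size(H)))\g;  % Levenberg-Marquardt step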

Example in MATLAB (1)

P1=(rand(1,100)-0.5)*20   % random inputs: 100 values in the range [-10, 10]
P2=(rand(1,100)-0.5)*20
P=[P1;P2]                 % forming the inputs of the neural network
T=0.3*P1+0.9*P2           % computing the corresponding outputs
net=newff([-10 10;-10 10],[5 1],{'purelin' 'purelin'},'traingd')   % generating the network:
%   [-10 10;-10 10]       first input [min max]; second input [min max]
%   [5 1]                 numbers of neurons in each layer (the number of
%                         neurons in the last layer = the number of outputs)
%   {'purelin' 'purelin'} activation functions of the first and the second layer neurons
%   'traingd'             learning algorithm
net.trainParam.show=1;    % training parameters
net.trainParam.epochs=100;

Example in MATLAB (2)

net=train(net,P,T)    % training of the network
out=sim(net,[3;-2])   % simulating the network: first input = 3, second input = -2
W1=net.IW{1,1}   % weighting coefficient matrix of the first layer
W2=net.LW{2,1}   % weighting coefficient matrix of the second layer
B1=net.b{1}      % bias vector of the first layer neurons
B2=net.b{2}      % bias vector of the second layer neurons
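Since the target function is linear (T = 0.3*P1 + 0.9*P2) and both layers use the linear activation function purelin, a well-trained network should return out close to 0.3*3 + 0.9*(-2) = -0.9 for this input; likewise, W2*W1 should approximate [0.3 0.9] and W2*B1+B2 should approximate 0.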