Lecture 2: Introduction to Neural Networks and Fuzzy Logic
Dr.-Ing. Erwin Sitompul, President University

Biological and Artificial Neuron
[Figure: a biological neuron shown side by side with an artificial neuron; the weights and the bias of the artificial neuron are the quantities that need to be determined.]

Application of Neural Networks
Function approximation and prediction
Pattern recognition
Signal processing
Modeling and control
Machine learning

Building a Neural Network
Select the structure: design the way the neurons are interconnected.
Select the weights: decide the strengths with which the neurons are interconnected. Weights are selected to obtain a "good match" between the network output and the desired output of a training set. A training set is a set of inputs and their desired outputs. The weight selection is carried out by means of a learning algorithm.

Learning Process: Training and Validation
Stage 1: Network Training. Training data (input and output sets with adequate coverage) is presented to the artificial neural network, and the learning process produces knowledge in the form of a set of optimized synaptic weights and biases.
Stage 2: Network Validation. Unseen data, drawn from the same range as the training data, is fed to the trained artificial neural network, which produces output predictions (implementation phase).

Learning Process
Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. In most cases, due to the complex optimization surface, the optimized weights and biases are obtained only after a number of learning iterations.
[Figure: starting from the initial parameters [w,b]_0 at iteration 0, the ANN maps the input x to the output y(0); the parameters are updated at every iteration, [w,b]_1, [w,b]_2, ..., until at iteration n the output y(n) ≈ d, the desired output.]

Learning Rules
Error-correction learning: the Delta Rule or Widrow-Hoff Rule.
Memory-based learning: the Nearest Neighbor Rule.
Hebbian learning: synchronous activation increases the synaptic strength, asynchronous activation decreases the synaptic strength (a minimal update sketch follows below).
Competitive learning.
Boltzmann learning.
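The lecture lists Hebbian learning only by the statement above; as an illustration, a minimal sketch of the classical Hebbian update (a weight grows when pre- and post-synaptic activity coincide) is given below. The learning rate and the example vectors are assumptions, not taken from the slides.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Plain Hebbian rule: dw_j = eta * y * x_j.
    A weight grows when its pre-synaptic input x_j and the
    post-synaptic output y are active at the same time."""
    return w + eta * y * x

# Illustrative values (assumed, not from the lecture)
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])   # pre-synaptic activity
y = 1.0                          # post-synaptic activity
print(hebbian_update(w, x, y))   # -> [0.1 0.  0.1]
```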

Error-Correction Learning
[Figure: neuron model for error-correction learning. The inputs x_1, x_2, ..., x_m are multiplied by the synaptic weights w_k1(n), w_k2(n), ..., w_km(n), summed together with the bias b_k(n), and passed through the activation function f(.) to give the output y_k(n). The output is compared with the desired output d_k(n) to form the error signal e_k(n), which drives the learning rule.]
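The learning-rule expression indicated on this slide did not survive extraction; in its standard error-correction form, consistent with the Delta Rule given two slides later, it reads:

```latex
\[
e_k(n) = d_k(n) - y_k(n), \qquad
\Delta w_{kj}(n) = \eta\, e_k(n)\, x_j(n), \qquad
w_{kj}(n+1) = w_{kj}(n) + \Delta w_{kj}(n).
\]
```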

Delta Rule (Widrow-Hoff Rule)
Minimization of a cost function (or performance index).

Delta Rule (Widrow-Hoff Rule), also known as the Least Mean Squares Rule
Initialize: n = 0, w_kj(0) = 0.
Compute the output: y_k(n) = Σ_j [w_kj(n) x_j(n)].
Update the weights: w_kj(n+1) = w_kj(n) + η [d_k(n) − y_k(n)] x_j(n), where η is the learning rate, η ∈ [0, 1].
Set n = n + 1 and repeat.
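The flow above (initialize the weights to zero, compute the output, apply the delta-rule update, increment n, repeat) can be sketched in a few lines of Python; the toy data, learning rate, and epoch count below are illustrative assumptions only.

```python
import numpy as np

def delta_rule(X, d, eta=0.1, epochs=50):
    """LMS / delta rule for a single linear output unit.

    X : (N, m) array of input patterns x_j(n)
    d : (N,)   array of desired outputs d_k(n)
    """
    w = np.zeros(X.shape[1])                 # w_kj(0) = 0
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = w @ x                        # y_k(n) = sum_j w_kj(n) x_j(n)
            w = w + eta * (target - y) * x   # w(n+1) = w(n) + eta*[d - y]*x
    return w

# Illustrative data (assumed): the unit should learn y = 2*x1 - x2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
d = 2 * X[:, 0] - X[:, 1]
print(delta_rule(X, d))   # converges toward [2, -1]
```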

Learning Paradigm
[Figure: three learning paradigms. Supervised learning: the environment (data) provides the inputs, a teacher (expert) provides the desired output, and the error between the desired and the actual ANN output drives the adaptation. Unsupervised learning: the ANN adapts from the environment (data) alone, without a teacher. Reinforcement learning: the environment provides delayed reinforcement, and the ANN adapts so as to optimize a cost function.]

Single Layer Perceptrons
A single-layer perceptron network is a network with all the inputs connected directly to the output(s).
Each output unit is independent of the others, so the analysis can be limited to a single-output perceptron.
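A minimal sketch of the independence claim above: in a single-layer network each output unit is driven only by its own weight row and bias, so the outputs can be analysed one at a time. The hard-limit activation and the example values are assumptions for illustration.

```python
import numpy as np

def single_layer_perceptron(x, W, b):
    """Forward pass of a single-layer perceptron.
    Output k depends only on row k of W and entry k of b,
    so each output unit can be studied as a separate perceptron."""
    return np.where(W @ x + b >= 0.0, 1, 0)   # hard-limit (step) activation

# Illustrative values (assumed)
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
b = np.array([0.0, -0.6])
x = np.array([0.8, 0.3])
print(single_layer_perceptron(x, W, b))   # -> [1 0]
```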

Derivation of a Learning Rule for Perceptrons
Key idea: learning is performed by adjusting the weights in order to minimize the sum of squared errors on a training set.
Weights are updated repeatedly (in each epoch/iteration).
The sum of squared errors is a classical error measure (commonly used, for example, in linear regression).
Learning can be viewed as an optimization search problem in weight space.
[Figure: error surface E(w) plotted over the weight space (w_1, w_2).]

Derivation of a Learning Rule for Perceptrons
The learning rule performs a search within the solution's vector space towards a global minimum.
The error surface itself is a hyper-paraboloid, but it is seldom as smooth as ideally depicted. In most problems, the solution space is quite irregular, with numerous pits and hills that may cause the network to settle in a local minimum (not the best overall solution).
Epochs are repeated until a stopping criterion is reached (error magnitude, number of iterations, change of weights, etc.).

Derivation of a Learning Rule for Perceptrons: Adaline
Widrow [1962]: Adaline (Adaptive Linear Element).
[Figure: the inputs x_1, x_2, ..., x_m are weighted by w_k1, w_k2, ..., w_km and summed to form the linear output.]
Goal:
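The goal expression on this slide was lost in extraction; for the Adaline it is conventionally stated as driving the linear output toward the desired output for every training pattern, i.e.:

```latex
\[
y_k(n) = \sum_{j=1}^{m} w_{kj}\, x_j(n) \;\approx\; d_k(n)
\quad \text{for all training patterns } n .
\]
```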

Least Mean Squares (LMS)
The following cost function (error function) should be minimized:
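The cost function itself did not survive extraction; the standard instantaneous LMS cost, consistent with the error signal e_k(n) defined earlier, is:

```latex
\[
E(n) = \tfrac{1}{2}\, e_k^{2}(n) = \tfrac{1}{2}\,\bigl(d_k(n) - y_k(n)\bigr)^{2}.
\]
```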

Least Mean Squares (LMS)
Letting f(w_k) = f(w_k1, w_k2, ..., w_km) be a function over R^m, the total differential of f can be written in terms of the gradient, defined as follows.
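The two expressions referred to here ("then ..." and "Defining ...") were lost in extraction; in the usual presentation they are the total differential of f and the gradient vector:

```latex
\[
df = \sum_{j=1}^{m} \frac{\partial f}{\partial w_{kj}}\, dw_{kj}
   = (\nabla f)^{T}\, d\mathbf{w},
\qquad
\nabla f = \Bigl[\;\frac{\partial f}{\partial w_{k1}},\;
                 \frac{\partial f}{\partial w_{k2}},\;\dots,\;
                 \frac{\partial f}{\partial w_{km}}\Bigr]^{T}.
\]
```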

Gradient Operator
df positive: go uphill. df zero: flat (plain). df negative: go downhill.
To minimize f, we choose the weight change in the direction of the negative gradient; df is thus guaranteed to be always negative.
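The guarantee stated above follows in one line: choosing the weight change along the negative gradient with a positive learning rate makes the differential a negative quadratic form:

```latex
\[
\Delta \mathbf{w} = -\eta\, \nabla f
\;\;\Longrightarrow\;\;
df = (\nabla f)^{T} \Delta \mathbf{w}
   = -\eta\, (\nabla f)^{T} \nabla f
   = -\eta\, \lVert \nabla f \rVert^{2} \;\le\; 0 .
\]
```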

Adaline Learning Rule
With the cost function defined before and the gradient result obtained above, and defining the error as before, the weight modification rule can be written as follows.
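The intermediate expressions of this slide ("With ...", "then ...", "Defining ... we can write ...") did not survive extraction; a reconstruction of the standard Adaline derivation, consistent with the LMS cost and the gradient rule above, is:

```latex
\[
E(n) = \tfrac{1}{2}\, e_k^{2}(n), \qquad
e_k(n) = d_k(n) - \sum_{j} w_{kj}\, x_j(n)
\;\;\Longrightarrow\;\;
\frac{\partial E}{\partial w_{kj}} = -\,e_k(n)\, x_j(n)
\;\;\Longrightarrow\;\;
\Delta w_{kj} = -\eta\,\frac{\partial E}{\partial w_{kj}} = \eta\, e_k(n)\, x_j(n).
\]
```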

Adaline Learning Modes
Batch Learning Mode
Incremental Learning Mode
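The two learning modes named above differ only in when the weight correction is applied: after each pattern (incremental) or once per epoch (batch). A minimal sketch, with an assumed learning rate and assumed data:

```python
import numpy as np

def adaline_incremental(X, d, w, eta=0.05):
    """Incremental (online) mode: update after every training pattern."""
    for x, target in zip(X, d):
        e = target - w @ x
        w = w + eta * e * x
    return w

def adaline_batch(X, d, w, eta=0.05):
    """Batch mode: accumulate the correction over the whole
    training set, then apply a single update per epoch."""
    e = d - X @ w                    # errors for all patterns at once
    return w + eta * (X.T @ e)       # summed LMS correction

# Illustrative usage (assumed data): one epoch in each mode
X = np.array([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
d = np.array([3.0, 3.0, 1.5])
w0 = np.zeros(2)
print(adaline_incremental(X, d, w0.copy()))
print(adaline_batch(X, d, w0.copy()))
```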

Adaline Learning Rule
Also known as the delta learning rule, the LMS algorithm, or the Widrow-Hoff learning rule.

Generalization and Early Stopping
By proper training, a neural network may produce reasonable outputs for inputs not seen during training; this ability is called generalization.
Generalization is particularly useful for the analysis of noisy data (e.g. time series).
"Overtraining" will not improve the ability of a neural network to produce good outputs. On the contrary, the network will start to treat noise as real data and lose its generality.
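A minimal sketch of early stopping as described above: training is monitored on held-out validation data and stopped once the validation error no longer improves, which counters overtraining. The Adaline-style training step, the patience value, and the synthetic noisy data are assumptions for illustration.

```python
import numpy as np

def train_epoch(w, X, d, eta=0.05):
    """One epoch of incremental delta-rule updates."""
    for x, target in zip(X, d):
        w = w + eta * (target - w @ x) * x
    return w

def mse(w, X, d):
    return float(np.mean((d - X @ w) ** 2))

def train_with_early_stopping(X_tr, d_tr, X_val, d_val,
                              eta=0.05, max_epochs=500, patience=10):
    """Stop when the validation error has not improved for
    `patience` consecutive epochs; keep the best weights seen."""
    w = np.zeros(X_tr.shape[1])
    best_w, best_err, wait = w.copy(), np.inf, 0
    for _ in range(max_epochs):
        w = train_epoch(w, X_tr, d_tr, eta)
        err = mse(w, X_val, d_val)
        if err < best_err:
            best_w, best_err, wait = w.copy(), err, 0
        else:
            wait += 1
            if wait >= patience:     # further training would only fit noise
                break
    return best_w

# Illustrative noisy data (assumed): underlying target y = x1 + 2*x2
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))
d = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(60)
print(train_with_early_stopping(X[:40], d[:40], X[40:], d[40:]))
```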

Generalization and Early Stopping
[Figure: overfitting vs. generalization.]

Homework 2
Given the function y = 4x^2, find the value of x that results in y = 2 by using the Least Mean Squares method. Use the initial estimate x_0 = 1 and the given learning rate η. Write down the results of the first 10 epochs/iterations and give a conclusion about your result.
Note: the calculation can be done manually or using Matlab.

Homework 2A
Given the function y = 2x^3 + cos^2(x), find the value of x that results in y = 5 by using the Least Mean Squares method. Use the initial estimate x_0 = 0.2 * (Student ID) and the given learning rate η. Write down the results of the first 10 epochs/iterations and give a conclusion about your result.
Note: the calculation can be done manually or using Matlab/Excel.
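For both homework problems, the Least Mean Squares iteration minimizes E = ½(y_target − f(x))^2 by updating x ← x − η dE/dx = x + η (y_target − f(x)) f'(x). The sketch below shows this in Python as an alternative to Matlab/Excel; the learning rate values and the Student ID are assumptions, since the original values are not given in the transcript.

```python
import numpy as np

def lms_solve(f, dfdx, y_target, x0, eta, epochs=10):
    """Gradient descent on E = 0.5*(y_target - f(x))^2."""
    x = x0
    for n in range(1, epochs + 1):
        e = y_target - f(x)
        x = x + eta * e * dfdx(x)      # x <- x - eta * dE/dx
        print(f"epoch {n:2d}: x = {x:.6f}, y = {f(x):.6f}")
    return x

# Homework 2: y = 4x^2, target y = 2, x0 = 1 (eta assumed)
lms_solve(lambda x: 4 * x**2, lambda x: 8 * x,
          y_target=2.0, x0=1.0, eta=0.01)

# Homework 2A: y = 2x^3 + cos^2(x), target y = 5,
# x0 = 0.2 * Student ID (Student ID and eta assumed)
lms_solve(lambda x: 2 * x**3 + np.cos(x)**2,
          lambda x: 6 * x**2 - 2 * np.cos(x) * np.sin(x),
          y_target=5.0, x0=0.2 * 1, eta=0.01)
```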