DEPARTMENT: COMPUTER SC. & ENGG.  SEMESTER: VII
COURSE: NEURAL NETWORKS
SATENDRA, Assistant Professor, SITM, Rewari
Manav Rachna College of Engineering

TOPIC: Fundamental Concepts of Artificial Neural Networks
Content:
Feedforward & Feedback Networks
Learning Rules: Supervised & Unsupervised
Hebbian Learning Rule, Perceptron Learning Rule, Delta Learning Rule, Widrow-Hoff Learning Rule
Correlation Learning Rule
Winner-Take-All Learning Rule
Manav Rachna College of Engineering

Neural Network Architectures

Neural Network Learning Process

Neural Network Learning Rules
The learning signal r is, in general, a function of the weight vector w_i, the input x, and sometimes of the teacher's signal d_i:
r = r(w_i, x, d_i)
The incremental weight vector Δw_i at step t becomes:
Δw_i(t) = c r(w_i(t), x(t), d_i(t)) x(t)
where c is a positive learning constant.
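As an illustration, here is a minimal NumPy sketch of this general update step; the function name general_update and the sample values of r and c are illustrative choices, not from the slides.

import numpy as np

def general_update(w, x, r, c):
    # General learning rule: Δw = c * r * x,
    # where r = r(w, x, d) is the scalar learning signal.
    return w + c * r * x

# One update with an arbitrary learning signal r = 0.5
w = np.zeros(3)
x = np.array([1.0, -2.0, 0.5])
w = general_update(w, x, r=0.5, c=0.1)   # w becomes [0.05, -0.1, 0.025]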

NN Learning Rules
1) Hebbian Learning Rule
The Hebbian learning rule represents purely feedforward, unsupervised learning. It requires the weights to be initialized to small random values around w_i = 0 prior to learning. Here the learning signal is simply the neuron's output:
r = f(w_i^T x) = o_i
The incremental weight vector becomes:
Δw_i = c f(w_i^T x) x = c o_i x
and for a single weight w_ij:
Δw_ij = c o_i x_j
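A short sketch of one Hebbian step; the sign activation is an assumption for concreteness (the slide leaves f unspecified), and all names and values are illustrative.

import numpy as np

def hebbian_update(w, x, c):
    # Unsupervised: the learning signal is the neuron's own output o = f(w^T x).
    o = np.sign(w @ x)          # assumed activation f = sign
    return w + c * o * x        # Δw = c * o * x

rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal(4)        # small random weights around 0
x = np.array([1.0, -1.0, 1.5, 0.0])
w = hebbian_update(w, x, c=0.1)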

2) Perceptron Learning Rule
For the perceptron learning rule, the learning signal is the difference between the desired and the actual neuron response, so this is supervised learning. The rule is applicable only to binary neuron responses; the relationships below express it for the bipolar binary case. The learning signal r becomes:
r = d_i − o_i, with o_i = sgn(w_i^T x)
The weight adjustment becomes:
Δw_i = c [d_i − sgn(w_i^T x)] x
where c is a positive learning constant.
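A minimal sketch of perceptron training in the bipolar case (targets d in {−1, +1}); the AND-style data set and pass count are illustrative.

import numpy as np

def perceptron_update(w, x, d, c):
    o = 1.0 if w @ x >= 0 else -1.0      # bipolar binary output sgn(w^T x)
    return w + c * (d - o) * x           # Δw = 0 when the response is correct

# Bipolar AND; the constant last component acts as a bias input
X = np.array([[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]], dtype=float)
D = np.array([1, -1, -1, -1], dtype=float)
w = np.zeros(3)
for _ in range(10):                      # a few passes suffice here
    for x, d in zip(X, D):
        w = perceptron_update(w, x, d, c=0.1)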

3) Delta Training Rule
This rule is valid only for continuous activation functions and is of the supervised training mode. The learning signal of this rule is called delta and is defined as:
r = [d_i − f(w_i^T x)] f′(w_i^T x)
It follows from calculating the gradient vector with respect to w_i of the squared error, defined as:
E = ½ (d_i − o_i)², with o_i = f(w_i^T x)
The error gradient vector is:
∇E = −(d_i − o_i) f′(w_i^T x) x
Since minimization of the error requires the weight changes to be in the negative gradient direction:
Δw_i = −η ∇E = η (d_i − o_i) f′(w_i^T x) x
where η is a positive learning constant.
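A sketch of one delta-rule step, assuming the bipolar sigmoid f(net) = tanh(net) as the continuous activation; any differentiable f would do, and the numbers are illustrative.

import numpy as np

def delta_update(w, x, d, eta):
    net = w @ x
    o = np.tanh(net)                     # f = tanh
    fprime = 1.0 - o**2                  # f′(net) = 1 − tanh²(net)
    # Δw = η (d − o) f′(net) x, i.e. the negative gradient of ½(d − o)²
    return w + eta * (d - o) * fprime * x

w = np.array([0.1, -0.2, 0.05])
x = np.array([1.0, 2.0, -1.0])
w = delta_update(w, x, d=1.0, eta=0.1)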

4) Widrow-Hoff Learning Rule
The Widrow-Hoff rule is applicable to the supervised training of neural networks. It is independent of the activation function of the neurons used, since it minimizes the squared error between the desired output value d_i and the neuron's activation value net_i = w_i^T x. The learning signal is defined as:
r = d_i − w_i^T x
The weight vector increment under this learning rule is:
Δw_i = c (d_i − w_i^T x) x
and for a single weight the adjustment is:
Δw_ij = c (d_i − w_i^T x) x_j
This rule is also called the LMS (Least Mean Square) learning rule. Weights may be initialized at any values in this method.
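A minimal LMS sketch; note that only the linear activation net = w^T x enters the update, so no activation derivative is needed. The linear target used to show convergence is illustrative.

import numpy as np

def lms_update(w, x, d, c):
    return w + c * (d - w @ x) * x       # Δw = c (d − w^T x) x

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])           # target: d = w_true^T x
w = np.zeros(2)                          # any initialization works for LMS
for _ in range(200):
    x = rng.standard_normal(2)
    w = lms_update(w, x, w_true @ x, c=0.05)
print(w)                                 # approaches [2, -1]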

5) Correlation Learning Rule
This rule is typically applied to recording data in memory networks with binary response neurons. It is a supervised learning rule and also requires the weight initialization w = 0. As we know, the general learning rule is:
Δw_i = c r x
Substituting r = d_i into the general rule gives the correlation rule. The adjustments for the weight vector and for single weights, respectively, are:
Δw_i = c d_i x
Δw_ij = c d_i x_j
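A brief sketch of the correlation rule recording two bipolar patterns; the patterns themselves are illustrative.

import numpy as np

def correlation_update(w, x, d, c):
    return w + c * d * x                 # r = d substituted into Δw = c r x

w = np.zeros(4)                          # the rule requires w = 0 initially
patterns = [(np.array([1.0, -1.0, 1.0, -1.0]),  1.0),
            (np.array([1.0,  1.0, -1.0, -1.0]), -1.0)]
for x, d in patterns:
    w = correlation_update(w, x, d, c=1.0)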

6) Winner-Take-All Learning Rule
This rule is used for unsupervised network training. The learning is based on the premise that one of the neurons in the layer, say the m-th, has the maximum response to the input x; this neuron is declared the winner. As a result of the winning event, the weight vector w_m becomes w_m + Δw_m, with the incremental weight adjustment:
Δw_m = α (x − w_m)
where α > 0 is a small learning constant. The winner is selected by the following criterion of maximum activation among all p neurons participating in the competition:
w_m^T x = max over i = 1, …, p of (w_i^T x)
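A compact sketch of one winner-take-all step over p competing neurons; the weight-matrix shape, the row normalization (a common practice, not stated on the slide), and the data are illustrative.

import numpy as np

def wta_update(W, x, alpha):
    # W is a (p, n) matrix with one weight row per competing neuron.
    m = np.argmax(W @ x)                 # winner: maximum activation w_i^T x
    W[m] += alpha * (x - W[m])           # only the winner moves toward x
    return W

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 2))          # p = 3 neurons, n = 2 inputs
W /= np.linalg.norm(W, axis=1, keepdims=True)
x = np.array([0.6, 0.8])
W = wta_update(W, x, alpha=0.1)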

Thank you