A Brief Overview of Neural Networks By Rohit Dua, Samuel A. Mulder, Steve E. Watkins, and Donald C. Wunsch

Overview
- Relation to the Biological Brain: Biological Neural Networks
- The Artificial Neuron
- Types of Networks and Learning Techniques
- Supervised Learning and the Backpropagation Training Algorithm
- Learning by Example
- Applications
- Questions

Biological Neuron

Artificial Neuron (diagram): each input is multiplied by a weight W, the weighted inputs are summed (Σ), and the sum is passed through the activation function f(n) to produce the neuron's output.
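As a minimal sketch (in Python, with made-up example inputs and weights), a single artificial neuron just computes a weighted sum and applies an activation function:

```python
import math

def neuron(inputs, weights, activation):
    # Net input n: the weighted sum of the inputs
    n = sum(x * w for x, w in zip(inputs, weights))
    return activation(n)

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

# Two inputs of 1.0 with weights 0.5 each -> net input 1.0
print(neuron([1.0, 1.0], [0.5, 0.5], sigmoid))  # ~0.7311
```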

Transfer Functions (plot): a sigmoid-shaped curve mapping the neuron's input to an output between 0 and 1.

Types of Networks: networks with multiple inputs and a single layer, and networks with multiple inputs and multiple layers.

Types of Networks (contd.): feedback (recurrent) networks, in which outputs are fed back as inputs.

Learning Techniques: Supervised Learning (diagram): inputs from the environment drive both the neural network and the actual system; the expected output (from the actual system) and the network's actual output are compared (Σ, + and -), and the resulting error is fed back to train the network.

Multilayer Perceptron (diagram): the inputs feed a first hidden layer, which feeds a second hidden layer, which feeds the output layer.

Signal Flow: Backpropagation of Errors (diagram): function signals propagate forward through the layers, while error signals propagate backward.

Learning by Example
Hidden layer transfer function: sigmoid, F(n) = 1/(1 + exp(-n)), where n is the net input to the neuron.
Derivative: F'(n) = (output of the neuron)(1 - output of the neuron), the slope of the transfer function.
Output layer transfer function: linear, F(n) = n, so the output equals the input to the neuron.
Derivative: F'(n) = 1.
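As a sketch, the two transfer functions and their derivatives translate directly into code (note that, as the slide says, the sigmoid's derivative is written in terms of the neuron's own output):

```python
import math

def sigmoid(n):
    # F(n) = 1 / (1 + exp(-n))
    return 1.0 / (1.0 + math.exp(-n))

def sigmoid_slope(output):
    # F'(n) = (output)(1 - output): the slope of the transfer function
    return output * (1.0 - output)

def linear(n):
    # F(n) = n: output equals the input to the neuron
    return n

def linear_slope(output):
    # F'(n) = 1
    return 1.0
```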

Learning by Example
Training algorithm: backpropagation of errors using gradient-descent training.
Color key for the following slides:
- Red: current weights
- Orange: updated weights
- Black boxes: inputs and outputs to a neuron
- Blue: sensitivities (gradients) at each layer

First Pass
Error = 1 - 0.6508 = 0.3492
G3 = (1)(0.3492) = 0.3492
G2 = (0.6508)(1 - 0.6508)(0.3492)(0.5) = 0.0397
G1 = (0.6225)(1 - 0.6225)(0.0397)(0.5)(2) = 0.0093
Gradient of a hidden neuron: G = slope of the transfer function × [Σ {(weight to the next neuron) × (gradient of the next neuron)}]
Gradient of the output neuron: G = slope of the transfer function × error
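These numbers can be checked with a short sketch. The network diagram is not reproduced in this transcript, so the following layout is inferred from the arithmetic: one input of 1 with desired output 1, two sigmoid neurons in each of the first two layers, one linear output neuron, and every weight initially 0.5. By symmetry all weights in a layer stay equal, so one variable per layer suffices.

```python
import math

sigmoid = lambda n: 1.0 / (1.0 + math.exp(-n))

# Inferred network: 1 input -> 2 sigmoid neurons -> 2 sigmoid neurons
# -> 1 linear output neuron; all weights start at 0.5.
x, target = 1.0, 1.0
w1 = w2 = w3 = 0.5

# Forward pass
o1 = sigmoid(w1 * x)          # 0.6225, first-layer outputs
o2 = sigmoid(2 * w2 * o1)     # 0.6508, second-layer outputs (two inputs each)
o3 = 2 * w3 * o2              # 0.6508, linear output neuron

error = target - o3           # 0.3492

# Backward pass: gradients (sensitivities)
g3 = 1.0 * error                      # 0.3492 (linear slope is 1)
g2 = o2 * (1 - o2) * g3 * w3          # 0.0397
g1 = o1 * (1 - o1) * g2 * w2 * 2      # 0.0093 (sum over the two next neurons)
```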

Weight Update 1
New weight = old weight + (learning rate)(gradient)(prior output)
W3: 0.5 + (0.5)(0.3492)(0.6508) = 0.6136
W2: 0.5 + (0.5)(0.0397)(0.6225) = 0.5124
W1: 0.5 + (0.5)(0.0093)(1) = 0.5047
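Continuing the sketch from the previous slide, the update is one line per layer (the learning rate of 0.5 is implied by the slide's arithmetic, not stated explicitly):

```python
lr = 0.5  # learning rate implied by the slide's numbers

w3 += lr * g3 * o2   # 0.5 + (0.5)(0.3492)(0.6508) = 0.6136
w2 += lr * g2 * o1   # 0.5 + (0.5)(0.0397)(0.6225) = 0.5124
w1 += lr * g1 * x    # 0.5 + (0.5)(0.0093)(1)      = 0.5047
```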

Second Pass
Error = 1 - 0.8033 = 0.1967
G3 = (1)(0.1967) = 0.1967
G2 = (0.6545)(1 - 0.6545)(0.1967)(0.6136) = 0.0273
G1 = (0.6236)(1 - 0.6236)(0.5124)(0.0273)(2) = 0.0066
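Rerunning the forward pass from the sketch with the updated weights reproduces these values (up to rounding):

```python
o1 = sigmoid(w1 * x)        # sigmoid(0.5047) ≈ 0.6236
o2 = sigmoid(2 * w2 * o1)   # ≈ 0.6545
o3 = 2 * w3 * o2            # ≈ 0.8033
error = target - o3         # ≈ 0.1967
```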

Weight Update 2
New weight = old weight + (learning rate)(gradient)(prior output)
W3: 0.6136 + (0.5)(0.1967)(0.6545) = 0.6780
W2: 0.5124 + (0.5)(0.0273)(0.6236) = 0.5209
W1: 0.5047 + (0.5)(0.0066)(1) = 0.5080

Third Pass: the same feedforward and backpropagation steps repeat with the newly updated weights.

Weight Update Summary
- W1: weights from the network input to the input layer
- W2: weights from the input layer to the hidden layer
- W3: weights from the hidden layer to the output layer

Training Algorithm
The feedforward and backpropagation process continues until the required mean squared error (MSE) is reached; a typical target MSE is 1e-5. Other, more sophisticated backpropagation-based training algorithms are also available.
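A minimal loop in the spirit of the slides (continuing the same sketch and its assumed one-example network) repeats the feedforward and backpropagation steps until the MSE target is met:

```python
target_mse = 1e-5
mse = float("inf")
while mse > target_mse:
    # Feedforward
    o1 = sigmoid(w1 * x)
    o2 = sigmoid(2 * w2 * o1)
    o3 = 2 * w3 * o2
    error = target - o3
    mse = error ** 2          # one training example, so MSE = error^2

    # Backpropagation of errors
    g3 = error
    g2 = o2 * (1 - o2) * g3 * w3
    g1 = o1 * (1 - o1) * g2 * w2 * 2

    # Gradient-descent weight updates
    w3 += lr * g3 * o2
    w2 += lr * g2 * o1
    w1 += lr * g1 * x
```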

Why Gradient?
Consider an output neuron with inputs O1 and O2 (the outputs of the previous layer), weights W1 and W2, and net input N:
N = (O1 × W1) + (O2 × W2)
O3 = 1/(1 + exp(-N))
Error = actual (desired) output - O3
To reduce the error, the change in each weight depends on:
- the learning rate
- the rate of change of error with respect to the rate of change of the weight, which factors into:
  - the gradient: the rate of change of error with respect to the rate of change of N
  - the prior output (O1 or O2)
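In equation form, a sketch of the standard chain-rule derivation (assuming the usual squared-error cost E = ½(d - O3)² for desired output d, which the slide does not state explicitly):

```latex
\[
\Delta W_1 \;=\; -\eta\,\frac{\partial E}{\partial W_1}
           \;=\; -\eta\,\frac{\partial E}{\partial O_3}\,
                        \frac{\partial O_3}{\partial N}\,
                        \frac{\partial N}{\partial W_1}
           \;=\; \eta \,\underbrace{(d - O_3)\,f'(N)}_{\text{gradient }G}\; O_1
\]
```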

Gradient in Detail
Gradient: the rate of change of error with respect to the rate of change of the net input to a neuron.
- For output neurons: slope of the transfer function × error.
- For hidden neurons it is a bit more complicated: the error is fed back in terms of the gradients of the successive neurons, giving slope of the transfer function × [Σ (gradient of the next neuron × weight connecting this neuron to the next neuron)].
- Why the summation? Each hidden neuron shares responsibility for the error of every neuron it feeds; this is how backpropagation handles the credit assignment problem.
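In symbols (a sketch, writing G_j for the gradient of neuron j, f for its transfer function, N_j for its net input, and e_j for the error at an output neuron):

```latex
\[
G_j = f'(N_j)\, e_j \quad\text{(output neuron)}
\qquad
G_j = f'(N_j) \sum_{k} G_k\, W_{jk} \quad\text{(hidden neuron, summing over next-layer neurons } k\text{)}
\]
```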

An Example
Suppose a neuron's output is 0.66.
If the desired output is 0: Error = 0 - 0.66 = -0.66, and G1 = 0.66 × (1 - 0.66) × (-0.66) = -0.1481, so the weight is reduced by a larger amount.
If the desired output is 1: Error = 1 - 0.66 = 0.34, and G1 = 0.66 × (1 - 0.66) × (0.34) = 0.0763, so the weight is increased by a smaller amount.

Improving Performance
- Changing the number of layers and the number of neurons in each layer
- Varying the transfer functions
- Changing the learning rate
- Training for longer times
- The type of pre-processing and post-processing

Applications
Neural networks are used in complex function approximation, feature extraction and classification, and optimization and control problems. They are applicable in virtually all areas of science and technology.