
Introduction to Cognitive Science
Lecture 20: Artificial Neural Networks I
November 19, 2009

Slide 1: Artificial Neural Network (ANN) Paradigms

Overview:
- The Backpropagation Network (BPN)
- Supervised Learning in the BPN
- The Self-Organizing Map (SOM)
- Unsupervised Learning in the SOM

Slide 2: The Backpropagation Network

The backpropagation network (BPN) is the most popular type of ANN for applications such as classification or function approximation. Like other networks using supervised learning, the BPN is not biologically plausible.

The structure of the network is as follows:
- three layers of neurons,
- only feedforward processing: input layer → hidden layer → output layer,
- sigmoid activation functions.

Slide 3: An Artificial Neuron

[Diagram: neuron i receives the output signals o_1, o_2, …, o_n of other neurons through synapses with weights w_i1, w_i2, …, w_in; their weighted sum is the net input signal, and the neuron emits the output signal o_i.]

Slide 4: Sigmoidal Neurons

The output (spike frequency) of every neuron is simulated as a value between 0 (no spikes) and 1 (maximum frequency):

o_i(t) = 1 / (1 + e^(-(net_i(t) - θ)/τ))

Here θ is the neuron's threshold and τ controls the steepness of the sigmoid.

[Plot: o_i(t) as a function of net_i(t) for τ = 1 and τ = 0.1; the smaller τ gives a much steeper transition.]
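To make slides 3 and 4 concrete, here is a minimal Python sketch (my own, not from the lecture; the default values for theta and tau are illustrative) of a sigmoidal neuron computing its output from incoming signals:

```python
import numpy as np

def sigmoid(net, theta=0.0, tau=1.0):
    """Sigmoidal activation: rises from 0 toward 1 around the threshold theta.
    Smaller tau gives a steeper transition (compare tau = 1 vs. tau = 0.1)."""
    return 1.0 / (1.0 + np.exp(-(net - theta) / tau))

def neuron_output(inputs, weights, theta=0.0, tau=1.0):
    """Output o_i of neuron i: the sigmoid of its net input sum_j w_ij * o_j."""
    net = np.dot(weights, inputs)   # net input signal
    return sigmoid(net, theta, tau)

# Example: three incoming signals o_1..o_3 and their synaptic weights.
o = np.array([0.9, 0.2, 0.5])
w = np.array([0.4, -0.7, 1.2])
print(neuron_output(o, w))          # a value strictly between 0 and 1
```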

Slide 5: The Backpropagation Network

[Diagram: BPN structure. The input vector x feeds the input units I_1, I_2, …, I_I; these connect to the hidden units H_1, H_2, …, H_J, which in turn connect to the output units O_1, …, O_K. The output units produce the output vector o, which is compared to the desired output vector y.]
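A minimal sketch of the feedforward pass through this structure, assuming one weight matrix per layer-to-layer connection (the names W_hidden and W_output and the dimensions are my choices, not the lecture's):

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))       # theta = 0, tau = 1

def forward(x, W_hidden, W_output):
    """Feedforward pass: input layer -> hidden layer -> output layer."""
    h = sigmoid(W_hidden @ x)    # hidden-unit outputs H_1..H_J
    o = sigmoid(W_output @ h)    # network output vector o (units O_1..O_K)
    return h, o

# Example dimensions: I = 2 inputs, J = 3 hidden units, K = 1 output unit.
rng = np.random.default_rng(0)
W_hidden = rng.uniform(-0.5, 0.5, size=(3, 2))
W_output = rng.uniform(-0.5, 0.5, size=(1, 3))
h, o = forward(np.array([0.0, 1.0]), W_hidden, W_output)
print(o)
```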

Slide 6: Supervised Learning in the BPN

In supervised learning, we train an ANN with a set of vector pairs, so-called exemplars. Each pair (x, y) consists of an input vector x and a corresponding output vector y. Whenever the network receives input x, we would like it to provide output y.

The exemplars thus describe the function that we want to "teach" our network. Besides learning the exemplars, we would like our network to generalize, that is, give plausible output for inputs that the network has not been trained with.
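For instance (this concrete example is mine, not the lecture's), a training set teaching the XOR function can be written as a plain list of four exemplars:

```python
# Each exemplar (x, y) pairs an input vector x with its desired output y.
exemplars = [
    ([0.0, 0.0], [0.0]),   # XOR(0, 0) = 0
    ([0.0, 1.0], [1.0]),   # XOR(0, 1) = 1
    ([1.0, 0.0], [1.0]),   # XOR(1, 0) = 1
    ([1.0, 1.0], [0.0]),   # XOR(1, 1) = 0
]
```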

Slide 7: Supervised Learning in the BPN

Before the learning process starts, all weights (synapses) in the network are initialized with pseudorandom numbers.

We also have to provide a set of training patterns (exemplars). They can be described as a set of ordered vector pairs {(x_1, y_1), (x_2, y_2), …, (x_P, y_P)}.

Then we can start the backpropagation learning algorithm. This algorithm iteratively minimizes the network's error by finding the gradient of the error surface in weight space and adjusting the weights in the opposite direction (gradient-descent technique).

Slide 8: Supervised Learning in the BPN

Gradient-descent example: finding the absolute minimum of a one-dimensional error function f(x).

[Plot: f(x) versus x. At the current position x_0 the slope is f'(x_0), and the next position is obtained by stepping against the slope: x_1 = x_0 - η f'(x_0), where η is the learning rate.]

Repeat this iteratively until, for some x_i, f'(x_i) is sufficiently close to 0.
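The same update rule x_{i+1} = x_i - η f'(x_i) as runnable Python (the quadratic f and the value η = 0.1 are illustrative choices of mine):

```python
def f(x):
    return (x - 3.0) ** 2       # example error function, minimum at x = 3

def f_prime(x):
    return 2.0 * (x - 3.0)      # its slope f'(x)

x = 0.0                          # starting point x_0
eta = 0.1                        # learning rate eta
while abs(f_prime(x)) > 1e-6:    # stop once the slope is sufficiently close to 0
    x = x - eta * f_prime(x)     # x_{i+1} = x_i - eta * f'(x_i)
print(x)                         # converges to about 3.0
```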

Slide 9: Supervised Learning in the BPN

Gradients of two-dimensional functions:

[Figure: a two-dimensional function shown as a surface in the left diagram and as contour lines in the right diagram; arrows indicate the gradient of the function at different locations.]

The gradient always points in the direction of the steepest increase of the function. Therefore, in order to find the function's minimum, we should always move against the gradient.

Slide 10: Supervised Learning in the BPN

In the BPN, learning is performed as follows (a code sketch of one such step follows the list):

1. Randomly select a vector pair (x_p, y_p) from the training set and call it (x, y).
2. Use x as input to the BPN and successively compute the outputs of all neurons in the network (bottom-up) until you get the network output o.
3. Compute the error of the network, i.e., the difference between the desired output y and the actual output o.
4. Apply the backpropagation learning rule to update the weights in the network so that its output o for input x is closer to the desired output y.
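A hedged sketch of steps 2 to 4 for one chosen exemplar. The lecture does not spell out the backpropagation rule itself, so the delta terms below follow the standard derivation for sigmoid units and a squared-error measure; bias weights are omitted for brevity:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def train_step(x, y, W_hidden, W_output, eta=0.5):
    """One pass of steps 2-4 for the exemplar (x, y); updates weights in place."""
    # Step 2: forward pass (bottom-up) to get the network output o.
    h = sigmoid(W_hidden @ x)
    o = sigmoid(W_output @ h)
    # Step 3: error = difference between desired output y and actual output o.
    error = y - o
    # Step 4: backpropagation rule = gradient descent on the squared error.
    delta_o = error * o * (1.0 - o)                    # output-layer deltas
    delta_h = (W_output.T @ delta_o) * h * (1.0 - h)   # deltas propagated back
    W_output += eta * np.outer(delta_o, h)
    W_hidden += eta * np.outer(delta_h, x)
    return float(np.sum(error ** 2))                   # squared error, for monitoring
```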

Slide 11: Supervised Learning in the BPN

Repeat steps 1 to 4 for all vector pairs in the training set; this is called a training epoch. Run as many epochs as required to reduce the network error E below a threshold that you set beforehand (see the sketch below).
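Continuing in the same file as the sketches above (this reuses train_step and the XOR exemplars; the threshold, learning rate, and epoch cap are arbitrary choices, and without bias weights training may be slow or stall):

```python
import numpy as np

rng = np.random.default_rng(1)
W_hidden = rng.uniform(-0.5, 0.5, size=(3, 2))   # pseudorandom initial weights
W_output = rng.uniform(-0.5, 0.5, size=(1, 3))

threshold = 0.05                     # error threshold set beforehand
for epoch in range(100_000):         # one epoch = steps 1-4 for every exemplar
    E = sum(train_step(np.asarray(x), np.asarray(y), W_hidden, W_output)
            for x, y in exemplars)
    if E < threshold:                # network error is low enough: stop training
        break
```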

Slide 12: Supervised Learning in the BPN

Now our BPN is ready to go! If we choose the type and number of neurons in our network appropriately, after training the network should show the following behavior:
- If we input any of the training vectors, the network should yield the expected output vector (with some margin of error).
- If we input a vector that the network has never "seen" before, it should be able to generalize and yield a plausible output vector based on its knowledge about similar input vectors.
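As a quick check of both behaviors, again assuming the forward function and the weights trained in the sketches above:

```python
# A training input: the output should be close to its target 1.
_, o_trained = forward(np.array([1.0, 0.0]), W_hidden, W_output)
print(o_trained)

# An unseen input near (1, 0): generalization should yield a similar,
# plausible output.
_, o_unseen = forward(np.array([0.9, 0.1]), W_hidden, W_output)
print(o_unseen)
```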