CS532 Neural Networks Dr. Anwar Majid Mirza Lecture No. 3 Week 2, January 22nd, 2008 National University of Computer and Emerging Sciences NU-FAST, A.K. Brohi Road, H11, Islamabad, Pakistan

A Perceptron to classify letters from different fonts: One Output Class
A Perceptron to classify letters from different fonts: Several Output Classes
Modified Perceptron Algorithm
Problem with Perceptron Learning

Input from Font 1 (figure: the letter patterns referred to below; not reproduced in this transcript)

Consider the 21 input patterns shown on the last three slides. Let us first assume that we want to train a simple perceptron to classify each of these input patterns as belonging, or not belonging, to the class A (letters which are very similar to A). In that case, the target value for each pattern is either +1 or -1. There are three examples of A and 18 examples of not-A in the last three slides. We could, of course, use the same vectors as examples of B or not-B and train the net in a similar way. Note, however, that because we are using a single-layer net, the weights for the output unit signifying A do not have any interaction with the weights for the output unit signifying B.

Therefore, we can solve these problems at the same time by allowing a column of weights for each output unit. Our net would have 63 input units and 2 output units. The first output unit would correspond to "A" or "not-A", the second unit to "B" or "not-B". Continuing this idea, we can use 7 output units, one for each of the 7 categories into which we wish to classify our input.
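To make the weight layout concrete, here is a minimal Python/NumPy sketch (the course exercises use Matlab, which is not reproduced here; the dimensions are taken from these slides and all variable names are illustrative):

```python
import numpy as np

# Shapes taken from this lecture: 63 inputs (7x9 pixels), 7 output units.
W = np.zeros((63, 7))   # column j holds the weights feeding output unit j
b = np.zeros(7)         # one bias per output unit

x = np.random.choice([-1, 1], size=63)   # an arbitrary bipolar input pattern
net = x @ W + b                          # net input of all 7 output units at once

# Output j depends only on column W[:, j] and bias b[j], so training the
# "A"/"not-A" unit never changes the weights of the "B"/"not-B" unit.
```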

The simple perceptron discussed so far can easily be extended to the case where an input vector may belong to one (or more) of several categories. The architecture of such a net is shown in the figure: input units X1, ..., Xn are fully connected to output units Y1, ..., Ym, with weight w_ij on the connection from input X_i to output Y_j.

For this example, each input vector is a 63-tuple representing a letter expressed as a pattern on a 7x9 grid of pixels. There are seven categories to which each input vector may belong, so the output vector has seven components, each representing one letter: A, B, C, D, E, K or J. The training input patterns and target responses must be converted to an appropriate form for the neural net to process. A bipolar representation has better computational characteristics than a binary representation. We also require a 'modified training algorithm' that handles several output categories. One such algorithm is given on the next slide.
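As a small illustration of this conversion, here is a Python/NumPy sketch; it assumes the letter patterns arrive as 7x9 arrays of 0s and 1s (an assumption, since the data format from Lecture 2 is not reproduced here), and the function names are illustrative:

```python
import numpy as np

def to_bipolar_vector(grid):
    """Flatten a 7x9 pixel grid (0 = off, 1 = on) into a 63-element bipolar vector."""
    return np.where(np.asarray(grid).ravel() > 0, 1, -1)

def target_vector(letter, classes=("A", "B", "C", "D", "E", "J", "K")):
    """Bipolar target: +1 for the unit matching the letter, -1 for every other unit."""
    return np.array([1 if c == letter else -1 for c in classes])
```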

1. Initialize the weights, biases and the threshold θ. Also set the learning rate α such that 0 < α ≤ 1.
2. While the stopping condition is false, do the following steps.
3. For each training pair s:t, do steps 4 to 6.
4. Set the activations of the input units: x_i = s_i.
5. Compute the response y_j of each output unit Y_j, j = 1, ..., m.
6. Update the weights and bias if an error occurred for this pattern at any output unit:
   If y_j is not equal to t_j (within the tolerance set by the threshold θ) then
     w_ij(new) = w_ij(old) + α x_i t_j, for i = 1 to n
     b_j(new) = b_j(old) + α t_j
   end if
7. Test the stopping condition: if no weights changed in step 6 for any training pair, stop; otherwise continue.
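Below is a minimal Python/NumPy sketch of the algorithm above (the course exercises use Matlab, but the steps map one-to-one); the function name, argument shapes and the max_epochs guard are my own assumptions:

```python
import numpy as np

def train_multi_output_perceptron(X, T, alpha=1.0, theta=0.0, max_epochs=100):
    """X: (num_patterns, 63) bipolar inputs; T: (num_patterns, 7) bipolar targets.
    Returns the trained weights W (63 x 7) and biases b (7,)."""
    n_inputs, n_outputs = X.shape[1], T.shape[1]
    W = np.zeros((n_inputs, n_outputs))          # step 1: initialize weights ...
    b = np.zeros(n_outputs)                      # ... and biases

    for epoch in range(max_epochs):              # step 2: repeat until stopping condition
        changed = False
        for x, t in zip(X, T):                   # step 3: for each training pair s:t
            net = x @ W + b                      # steps 4-5: net input of every output unit
            y = np.where(net > theta, 1,
                 np.where(net < -theta, -1, 0))  # threshold activation
            for j in range(n_outputs):           # step 6: update only the units in error
                if y[j] != t[j]:
                    W[:, j] += alpha * t[j] * x  # w_ij(new) = w_ij(old) + alpha * x_i * t_j
                    b[j]    += alpha * t[j]      # b_j(new)  = b_j(old)  + alpha * t_j
                    changed = True
        if not changed:                          # step 7: stop when no weights changed
            break
    return W, b
```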

1. The major problem with perceptron learning is that it cannot separate classes which are not linearly separable.
2. Thus there is a need to improve upon the existing model and develop a technique that is useful for a wider range of problems. ... A solution to this problem was provided by Widrow's Delta Rule ...
(Figure: a two-dimensional input space with axes x1 and x2, illustrating a class arrangement that no single straight line can separate.)
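As a preview of the Widrow-Hoff (delta) rule mentioned above, here is a minimal Python/NumPy sketch of one training pass. It is an illustrative assumption on my part, not the course's Matlab code: the update uses the linear net input and the error (t - y) rather than the hard-threshold output, so the weights settle toward a least-squares fit even when the classes cannot be perfectly separated.

```python
import numpy as np

def delta_rule_epoch(X, T, W, b, alpha=0.01):
    """One pass of the Widrow-Hoff (delta) rule: nudge weights toward the
    least-squares fit of the linear net input, pattern by pattern."""
    for x, t in zip(X, T):
        y = x @ W + b                      # linear activation, not a hard threshold
        error = t - y
        W += alpha * np.outer(x, error)    # w_ij += alpha * (t_j - y_j) * x_i
        b += alpha * error
    return W, b
```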

1. Modify the Matlab code given in Lecture 2 for the simple perceptron to solve the classification problem of letters from different fonts (discussed today).
   A. First with one output class.
   B. Then with several output classes.
   C. Test the resulting nets in both cases.
   D. In your opinion, which method (A or B) should be used for this classification problem? Justify your answer with arguments.
2. Consult the text books (or the internet) and explain:
   A. What is the Hebb learning rule?
   B. How is the Hebb rule used to solve classification problems?
   C. Under what circumstances does Hebb learning fail? Why?
   D. Compare the Hebb rule with perceptron learning.