Where We're At

Three learning rules:
- Hebbian learning → regression
- LMS (delta rule) → regression
- Perceptron → classification
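For concreteness, here is a minimal NumPy sketch of the three update rules (the function names, the learning rate `eta`, and the label convention are illustrative choices, not from the slides):

```python
import numpy as np

eta = 0.1  # learning rate (illustrative value)

def hebbian_update(w, x, d):
    # Hebb rule: reinforce weights in proportion to input times desired output
    return w + eta * d * x

def lms_update(w, x, d):
    # LMS / delta rule: gradient step on the squared error (d - w.x)^2
    y = w @ x                       # linear output (regression)
    return w + eta * (d - y) * x

def perceptron_update(w, x, d):
    # Perceptron rule: update only when the thresholded output is wrong
    # (d and y are class labels in {-1, +1})
    y = np.sign(w @ x)
    return w + eta * (d - y) * x    # zero update when y == d
```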

Proof?

Where Perceptrons Fail

Perceptrons require linear separability:
- a hyperplane must exist that can separate positive and negative examples
- the perceptron weights define this hyperplane
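A small sketch of this failure mode (the AND/XOR demo and all names are my own illustration, not from the slides): the same training loop converges on a linearly separable problem (AND) but never terminates with zero errors on XOR, which no hyperplane can separate.

```python
import numpy as np

def train_perceptron(X, d, epochs=100, eta=0.1):
    """Train a perceptron with a bias input; return (weights, converged?)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(Xb, d):
            y = 1 if w @ x > 0 else -1          # thresholded output
            if y != t:
                w += eta * t * x                # correct only on mistakes
                errors += 1
        if errors == 0:
            return w, True                      # a separating hyperplane found
    return w, False                             # never separated the data

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, np.array([-1, -1, -1, 1]))[1])  # AND: True (separable)
print(train_perceptron(X, np.array([-1, 1, 1, -1]))[1])   # XOR: False (not separable)
```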

Limitations of Hebbian Learning

- With the Hebb learning rule, input patterns must be orthogonal to one another.
- If the input vector has α elements, then at most α arbitrary associations can be learned.
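To see why orthogonality matters: with outer-product Hebbian storage w = Σₖ d(k) x(k), recall gives w · x(j) = Σₖ d(k) (x(k) · x(j)), and the cross-terms vanish only when the stored patterns are orthogonal. A tiny numerical check (patterns illustrative):

```python
import numpy as np

# Two orthonormal input patterns and their desired (scalar) outputs
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
d1, d2 = 0.5, -2.0

# Hebbian outer-product storage: w accumulates d * x per association
w = d1 * x1 + d2 * x2

# Recall is exact because x1 . x2 == 0
print(np.isclose(w @ x1, d1), np.isclose(w @ x2, d2))   # True True

# Storing a non-orthogonal third pattern corrupts recall (crosstalk)
x3 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w += 1.0 * x3
print(np.isclose(w @ x1, d1))   # False: recall of x1 is now contaminated
```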

Limitations of the Delta Rule (LMS Algorithm)

- To guarantee learnability, input patterns must be linearly independent of one another.
- Linear independence is a weaker constraint than orthogonality, so LMS is a more powerful algorithm than Hebbian learning.
- What's the downside of LMS relative to Hebbian learning?
- As before: if the input vector has α elements, then at most α associations can be learned.
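A sketch (patterns illustrative) of LMS succeeding where one-shot Hebbian storage would fail: the two patterns below are linearly independent but not orthogonal, and iterating the delta rule still drives the error to zero.

```python
import numpy as np

# Linearly independent but NOT orthogonal patterns (x1 . x2 != 0)
X = np.array([[1.0, 0.0], [1.0, 1.0]])
d = np.array([0.5, -1.0])

w = np.zeros(2)
eta = 0.2
for _ in range(500):                  # iterate LMS over the training set
    for x, t in zip(X, d):
        w += eta * (t - w @ x) * x    # delta rule update

print(np.allclose(X @ w, d))          # True: both associations learned exactly
```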

Exploiting Linear Dependence

For both Hebbian learning and LMS, more than α associations can be learned if one association is a linear combination of the others.

Note: x(3) = x(1) + 2 x(2) and d(3) = d(1) + 2 d(2)

[table: example #, x₁, x₂, desired output]
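The reason is linearity of the output: if w · x(1) = d(1) and w · x(2) = d(2), then w · (x(1) + 2 x(2)) = d(1) + 2 d(2) automatically, so the dependent association comes for free. A quick numerical check (values illustrative):

```python
import numpy as np

x1, d1 = np.array([1.0, 0.0]), 0.5
x2, d2 = np.array([0.0, 1.0]), -1.0
x3, d3 = x1 + 2 * x2, d1 + 2 * d2    # the slide's dependent third association

w = np.array([d1, d2])               # weights solving the first two exactly
print(np.isclose(w @ x3, d3))        # True: the third is satisfied for free
```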

The Perils Of Linear Interpolation

Hidden Representations

An exponential number of hidden units is bad:
- large network
- poor generalization

With domain knowledge, we could pick an appropriate hidden representation (e.g., the perceptron scheme).

Alternative: learn the hidden representation.

Problem:
- Where does the training signal come from?
- The teacher specifies desired outputs, not desired hidden unit activities.

Challenge: adapt the algorithm for the case where the actual output should be ≥ the desired output.
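One possible adaptation (my sketch, not the slides' solution): treat it as an error only when the output falls below the target, i.e. replace the squared error with the one-sided penalty max(0, d − y)².

```python
import numpy as np

def one_sided_lms_update(w, x, d, eta=0.1):
    # Sketch of one possible answer, not the slides' official solution:
    # update only when the actual output is BELOW the desired output,
    # effectively minimizing max(0, d - y)^2 instead of (d - y)^2.
    y = w @ x
    if y < d:
        w = w + eta * (d - y) * x    # push the output up toward d
    return w                         # no update when y >= d (already satisfied)
```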

Why Are Nonlinearities Necessary?

Prove:
- A network with a linear hidden layer has no more functionality than a network with no hidden layer (i.e., direct connections from input to output).
- For example, a network with a linear hidden layer cannot learn XOR.

[figure: two-layer network with input x, hidden layer y, output z, and weight matrices W (input → hidden) and V (hidden → output)]
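The key step, using the slide's x → y → z labeling: if the hidden layer is linear, then z = V(Wx) = (VW)x, so a single weight matrix U = VW computes exactly the same function. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))          # input -> hidden weights
V = rng.normal(size=(1, 3))          # hidden -> output weights
x = rng.normal(size=(2,))

y = W @ x                            # LINEAR hidden layer (no nonlinearity)
z_two_layer = V @ y
z_one_layer = (V @ W) @ x            # equivalent single-layer network

print(np.allclose(z_two_layer, z_one_layer))   # True: hidden layer adds nothing
```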