Perceptrons
Michael J. Watts

Lecture Outline
ANN revision
Perceptron architecture
Perceptron learning
Problems with perceptrons

ANN Revision
Artificial neurons
o mathematical abstractions of biological neurons
o three functions associated with each neuron
  - input
  - activation
  - output
o the McCulloch and Pitts neuron was the first to be designed
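A minimal sketch in Python of those three functions, assuming a weighted-sum input function, a hard-step activation, and an identity output function; the function names, weights, and the 0.5 threshold are illustrative, not from the lecture.

```python
# Sketch of one artificial neuron's three functions (illustrative names).

def input_function(inputs, weights):
    """Input function: combine incoming signals as a weighted sum."""
    return sum(x * w for x, w in zip(inputs, weights))

def activation_function(net, threshold=0.5):
    """Activation function: a hard step at the threshold."""
    return 1 if net >= threshold else 0

def output_function(activation):
    """Output function: here simply the identity."""
    return activation

# A two-input neuron with weights 0.3 behaves as a logical AND gate
for pattern in ([0, 0], [0, 1], [1, 0], [1, 1]):
    net = input_function(pattern, [0.3, 0.3])
    print(pattern, output_function(activation_function(net)))
```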

ANN Revision
Artificial Neural Networks
o collections of connected artificial neurons
o each connection has a weight value attached to it
o weights can be positive or negative
o usually layered
o signals are propagated from one layer to another

Perceptron Architecture
An ANN using McCulloch and Pitts neurons
Proposed by Rosenblatt in 1958
Developed to study theories about the visual system
Layered architecture

Perceptron Architecture
Feedforward
o signals travel only forward
o no recurrent connections
Inputs are continuous
Outputs are binary
Used for classification

Perceptron Architecture
Three neuron layers
o first is a buffer, usually ignored
o second is the input or "feature" layer
o third is the output or "perceptron" layer

Perceptron Architecture
Buffer neurons are arbitrarily connected to feature neurons
o usually one-to-one
  - each buffer neuron is connected only to the corresponding feature neuron
o can combine multiple buffer values into one feature
Connection weights from the buffer to the feature layer are fixed

Perceptron Architecture
Feature neurons are fully connected to perceptron neurons
o each feature neuron is connected to every perceptron neuron
Weights from the feature to the perceptron layer are variable
One modifiable connection layer
o hence a "single layer" network

Perceptron Architecture [diagram]

Threshold neurons are used in the perceptron layer
The threshold can be a step function or a linear threshold function
Biases can be used to create the effect of a threshold
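Sketches of the two threshold activations just mentioned; the parameter names are illustrative. The closing comment shows why a bias weight creates the effect of a threshold.

```python
# Two threshold activations for the perceptron layer (illustrative).

def step(net, theta=0.0):
    """Hard step: fires (1) once the net input reaches the threshold."""
    return 1 if net >= theta else 0

def linear_threshold(net, theta=0.0, slope=1.0):
    """Ramp: rises linearly above the threshold, clipped to [0, 1]."""
    return max(0.0, min(1.0, slope * (net - theta)))

# Bias instead of threshold: testing w.x + b >= 0 behaves exactly like
# testing w.x >= theta when b = -theta.
```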

Perceptron Architecture
Perceptron recall
o propagate a signal from the buffer to the perceptron layer
o signal elements are multiplied by the connection weights at each layer
o output is the activation of the output (perceptron) neurons
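A minimal sketch of recall over the single modifiable layer, assuming weights[j][i] holds the weight from feature neuron i to perceptron neuron j and a step threshold at 0; the layout and names are assumptions for illustration.

```python
# Sketch of perceptron recall (forward pass) for the modifiable layer.

def recall(features, weights, biases):
    """Propagate one feature vector; return the binary outputs."""
    outputs = []
    for w_row, b in zip(weights, biases):
        net = sum(x * w for x, w in zip(features, w_row)) + b
        outputs.append(1 if net >= 0 else 0)  # step-threshold activation
    return outputs

# One output neuron computing OR over two features
print(recall([0, 1], [[1.0, 1.0]], [-0.5]))  # -> [1]
```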

Perceptron Learning
Supervised learning algorithm
Minimises classification errors

Perceptron Learning
1. propagate an input vector through the perceptron
2. calculate the error for each output neuron by subtracting the actual output from the target output: Err = target - output
3. modify each weight: w(t+1) = w(t) + eta * Err * input
4. repeat steps 1-3 until the perceptron converges
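A sketch of this loop for one output neuron, assuming a step activation and the update written above; the learning rate and epoch limit are illustrative values.

```python
# Sketch of the perceptron learning loop for a single output neuron.

def train(samples, n_inputs, eta=0.1, max_epochs=100):
    w, b = [0.0] * n_inputs, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, target in samples:                      # step 1: propagate
            output = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
            err = target - output                      # step 2: error
            if err != 0:
                w = [wi + eta * err * xi for wi, xi in zip(w, x)]  # step 3
                b += eta * err
                mistakes += 1
        if mistakes == 0:                              # converged: 0 errors
            break
    return w, b

# Converges quickly on a linearly separable problem such as AND
w, b = train([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)], 2)
```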

Perceptron Learning
Other error measures can be used
o Widrow-Hoff learning rule
Used in a system called ADALINE
o ADAptive LINear NEuron
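A hedged sketch of the Widrow-Hoff (delta) rule as it is commonly stated: the error is taken on the linear net input before thresholding, which is what distinguishes it from the perceptron rule above; the names and bias handling are illustrative.

```python
# Sketch of one Widrow-Hoff (delta rule) update, as used in ADALINE.

def adaline_update(x, target, w, b, eta=0.1):
    """Update weights using the error on the linear (unthresholded) output."""
    net = sum(xi * wi for xi, wi in zip(x, w)) + b
    err = target - net                    # error before the threshold
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    b += eta * err
    return w, b
```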

Problems with Perceptrons
Minsky and Papert (1969)
o mathematically proved limitations of perceptrons
o caused most AI researchers of the time to return to symbolic AI
The "linear separability" problem
o perceptrons can only distinguish between classes separated by a plane

Linear Separability [diagram]

The XOR problem is not linearly separable
Truth table:
x1  x2 | XOR
 0   0 |  0
 0   1 |  1
 1   0 |  1
 1   1 |  0
No straight line can separate the patterns with output 1 from those with output 0
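As a quick check, reusing the train() sketch from the learning slide above (so this snippet is not standalone): training on the XOR targets exhausts any epoch budget with mistakes remaining, because no hyperplane separates the two classes.

```python
# XOR targets from the truth table above; train() is the earlier sketch.
xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w, b = train(xor_data, 2, max_epochs=1000)  # never reaches 0 errors
```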

Linear Separability [diagram]

This problem killed research into ANNs
The ANN field was moribund for >10 years
Reason for this problem
o hyperplanes

Linear Separability
A hyperplane is the generalisation of a plane to any number of dimensions
o in 2 dimensions it is a line
Each output neuron in a perceptron corresponds to one hyperplane
The dimensionality of the hyperplane is determined by the number of incoming connections

Linear Separability
The position of the hyperplane depends on the weight values
Without a threshold the hyperplane must pass through the origin
o this causes problems with learning
A threshold / bias allows the hyperplane to be positioned away from the origin
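In standard notation (not from the slides), the hyperplane an output neuron implements, and why a bias moves it off the origin:

```latex
% Hyperplane implemented by one output neuron with weights w and threshold theta:
\[
  \sum_i w_i x_i = \theta
  \qquad\Longleftrightarrow\qquad
  \mathbf{w}\cdot\mathbf{x} + b = 0 \quad (b = -\theta)
\]
% With b = 0 the hyperplane always contains the origin (x = 0 satisfies it);
% a nonzero bias or threshold shifts it away from the origin.
```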

Linear Separability
Learning involves positioning the hyperplane
If the hyperplane cannot be positioned to separate all training examples, then the algorithm will never converge
o never reach 0 errors

Advantages of Perceptrons
Simple models
Efficient
Guaranteed to converge on linearly separable problems
Easy to analyse
Well known

Disadvantages of Perceptrons
Cannot model non-linearly separable problems
Some problems may require a bias or threshold parameter
Not taken seriously anymore

Summary
Perceptrons are the oldest neural network model
One of the first uses of the McCulloch and Pitts neuron
Trained with a supervised learning algorithm
Still useful for some problems
Cannot handle non-linearly separable problems