Lecture 6, CS567: Neural Networks

Introduction
– Biological neurons
– Artificial neurons
– Concepts
– Conventions
Single Layer Perceptron
– Example
– Limitation

Biological neuron

Neuron = cell superclass of the nervous system
Specs:
– Total number = ~10^11 (about the capacity of a hard disk circa '03)
  The maximum count is reached before birth; ~10^4 are lost per day (more if you don't study every day!)
– Connections per neuron = ~10^4
– Signal rate = ~10^3 Hz (CPU = 10^9 Hz circa '03)
– Signal propagation velocity = 10^-1 to 10^2 m/sec
– Power = 40 W

Biological neuron

Connectivity is important (just like in human society):
– Connected to what, and to what extent
– Basis of memory and learning (revising opinions; learning lessons in life)
– Revision is important (and why reading the material for the first time on the eve of the exam is a flawed strategy)
– Covering an eye to prevent loss of vision in squint (and why the advertising industry persists, subliminally or blatantly)

Artificial Neural Networks

What
– Connected units with inputs and outputs
Why
– Can "learn" and approximate any function, including non-linear functions (e.g., XOR)
When
– The basic idea is more than 60 years old
– Interest resurged once coverage was extended to non-linear problems

Concepts

A trial:
– Output = verdict = guilty/not guilty
– Processing neurons = jury members
– Output neuron = jury foreman
– Inputs = witnesses/lawyers
– Weights = credibility of the witnesses/lawyers

An investment:
– Output decision = buy/sell
– Inputs = financial advisors
– Weights = past reliability of their advice
– Iterate = revise the weights after seeing results

Concepts

Types of learning:
– Supervised: the NN learns from a series of labeled examples (cf. human propagation of prejudice)
  There is a distinction between training and prediction phases
– Unsupervised: the NN discovers clusters and classifies examples on its own
  Also called self-organizing networks (a human tendency as well)

Typically, prediction rules cannot be derived from an NN.

Conventions

[Diagram: a layered network. Inputs p1, p2, p3, …, pN feed first-hidden-layer neurons 1h1 … 1hM, which feed second-hidden-layer neurons 2h1 … 2hP, which feed output neurons o1 … oK. The layers are labeled (Input), (Hidden), and (Output); connections carry weights w_1,1, w_1,2, …, w_M,N.]

Conventions

Generally, there is rich connectivity between layers, but not within a layer.
Output of any neuron = transfer/activation function applied to its net input:

    output = f(WP + b)

where
– W = weight (row) matrix [w_1,1  w_1,2  w_1,3  …  w_1,N]
– P = input (column) matrix [p1; p2; …; pN]
– WP = matrix product = [w_1,1·p1 + w_1,2·p2 + w_1,3·p3 + … + w_1,N·pN]
– b = bias/offset
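
As a minimal sketch of this computation (my own illustration, not code from the lecture; it assumes NumPy and a hard-limit activation), a neuron's output follows directly from the formula:

    import numpy as np

    def hardlim(x):
        # Hard limit activation: 0 if x < 0, else 1
        return np.where(x < 0, 0, 1)

    W = np.array([[1.0, -0.8]])   # 1 x N weight row matrix
    P = np.array([[0.5], [1.0]])  # N x 1 input column matrix
    b = -0.2                      # bias/offset

    print(hardlim(W @ P + b))     # f(WP + b) = [[0]], since 0.5 - 0.8 - 0.2 < 0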

Activation Functions

– Hard limit: f(x) ∈ {0, 1}. If x < 0, f(x) = 0, else 1
– Symmetric hard limit: f(x) ∈ {-1, 1}. If x < 0, f(x) = -1, else 1
– Linear: f(x) = x
– Positive linear: f(x) ∈ [0, ∞). If x < 0, f(x) = 0, else x
– Saturating linear: f(x) ∈ [0, 1]. If x < 0, then 0; if x > 1, then 1; else x
– Symmetric saturating linear: f(x) ∈ [-1, 1]. If x < -1, then -1; if x > 1, then 1; else x
– Log-sigmoid: f(x) = 1/(1 + e^(-x))
– Competitive (multiple-neuron layer; winner takes all): f(x_i) = 1 for the neuron with the largest net input x_i; f(x_j) = 0 for all the other neurons
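
These definitions are straightforward to transcribe into code. A sketch in Python/NumPy (my own transcription of the list above; the function names follow the common MATLAB Neural Network Toolbox convention):

    import numpy as np

    def hardlim(x):
        return np.where(x < 0, 0, 1)

    def hardlims(x):
        return np.where(x < 0, -1, 1)

    def purelin(x):
        return x

    def poslin(x):
        return np.maximum(0, x)      # also known as ReLU

    def satlin(x):
        return np.clip(x, 0, 1)

    def satlins(x):
        return np.clip(x, -1, 1)

    def logsig(x):
        return 1 / (1 + np.exp(-x))

    def compet(x):
        # Winner takes all: 1 for the largest net input, 0 elsewhere
        out = np.zeros_like(x)
        out[np.argmax(x)] = 1
        return out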

Conventions

Output of any layer = column matrix:

    [ f(W_1 P + b_1)
      f(W_2 P + b_2)
      …
      f(W_M P + b_M) ]

where W_i = weight (row) matrix of neuron i: [w_i,1  w_i,2  w_i,3  …  w_i,N]
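
In other words, stacking the rows W_1 … W_M into a single M x N matrix computes the whole layer at once. A sketch under the same assumptions as before (NumPy, hard-limit activation; the sizes are purely illustrative):

    import numpy as np

    def hardlim(x):
        return np.where(x < 0, 0, 1)

    M, N = 3, 4                         # 3 neurons, 4 inputs (illustrative sizes)
    rng = np.random.default_rng(0)
    W = rng.normal(size=(M, N))         # row i is W_i
    b = rng.normal(size=(M, 1))         # column matrix of biases b_1 … b_M
    P = rng.normal(size=(N, 1))         # input column matrix

    layer_output = hardlim(W @ P + b)   # M x 1 column: [f(W_i P + b_i)] for i = 1..M
    print(layer_output.shape)           # (3, 1)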

Single Layer Perceptron

Single layer, single neuron perceptron:
– Consider multiple inputs (a column vector) with respective weights (a row vector) feeding one neuron that serves as the output neuron
– Assume f(x) is the hard limit function
– Labeled training examples are provided: {(P1, t1), (P2, t2), …, (PZ, tZ)}, where each t_i is 0 or 1
– Learning rule (NOT the same as the prediction rule):
  Error e = target − f(WP + b)
  For each input set:
    W_new = W_old + e·P^T (P transposed so the update matches W's row shape)
    b_new = b_old + e
  Iterate until e is zero for all training examples
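
As a concrete illustration (mine, not from the slides), here is this rule learning the AND function, a linearly separable target, with a hard-limit output neuron:

    import numpy as np

    def hardlim(x):
        return 0 if x < 0 else 1

    # AND: output 1 only when both inputs are 1 (linearly separable)
    examples = [(np.array([0, 0]), 0),
                (np.array([0, 1]), 0),
                (np.array([1, 0]), 0),
                (np.array([1, 1]), 1)]

    W = np.zeros(2)   # weight row vector
    b = 0.0           # bias

    converged = False
    while not converged:                 # iterate until e = 0 for all examples
        converged = True
        for P, t in examples:
            e = t - hardlim(W @ P + b)   # e = target - f(WP + b)
            if e != 0:
                W = W + e * P            # W_new = W_old + e * P^T
                b = b + e                # b_new = b_old + e
                converged = False

    print(W, b)   # a separating line for AND, e.g. W = [2. 1.], b = -3.0

Convergence is guaranteed here because AND is linearly separable; for a non-separable target such as XOR (the limitation flagged in the outline), this loop would never terminate.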

Single Layer Perceptron

Single layer, multiple neuron perceptron:
– Consider multiple inputs (a column vector) with respective weights (one row per neuron, i.e., a weight matrix) feeding a layer of several neurons that serve as the output
– Assume f(x) is the hard limit function
– Labeled training examples are provided: {(P1, t1), (P2, t2), …, (PZ, tZ)}, where each t_i is a column vector consisting of 0s and/or 1s
– Learning rule (NOT the same as the prediction rule; the error and bias are now vectors):
  Error E = target − f(WP + B)
  For each input set:
    W_new = W_old + E·P^T
    B_new = B_old + E
  Iterate until E is zero for all training examples
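
The generalization is mechanical: E is now a column vector, and E·P^T is an outer product that updates every row of W at once. A sketch under the same assumptions as before (the two-neuron AND/OR targets are my hypothetical example):

    import numpy as np

    def hardlim(x):
        return np.where(x < 0, 0, 1)

    # Two output neurons: neuron 1 targets AND, neuron 2 targets OR
    examples = [(np.array([[0], [0]]), np.array([[0], [0]])),
                (np.array([[0], [1]]), np.array([[0], [1]])),
                (np.array([[1], [0]]), np.array([[0], [1]])),
                (np.array([[1], [1]]), np.array([[1], [1]]))]

    W = np.zeros((2, 2))   # one weight row per neuron
    B = np.zeros((2, 1))   # one bias per neuron

    converged = False
    while not converged:                  # iterate until E = 0 for all examples
        converged = True
        for P, T in examples:
            E = T - hardlim(W @ P + B)    # error column vector
            if np.any(E != 0):
                W = W + E @ P.T           # outer product E P^T updates all rows
                B = B + E
                converged = False

Because row i of E·P^T depends only on E_i, each neuron behaves as an independent single-neuron perceptron; the matrix form simply trains them all in parallel.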