Neural Networks (II) Simple Learning Rule


Neural Networks (II): Simple Learning Rule. Artificial Intelligence Lecture Note, Information & Communication University, Spring 1999. Sun Hwa Hahn.

Threshold Logic Unit (TLU). An artificial neuron that models the functionality of a biological neuron. Given inputs $x_1, \dots, x_n$ and weights $w_1, \dots, w_n$, the activation is $a = \sum_{i=1}^{n} w_i x_i$, and the output is $y = f(a)$, where $f$ is a threshold function. Common threshold functions: hard limiter, sigmoid, stochastic semi-linear.
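To make the definition concrete, here is a minimal sketch of a TLU with a hard-limiter threshold in Python; the function names are illustrative, not from the lecture.

```python
def hard_limiter(a, theta):
    """Hard-limiter threshold: fire (output 1) iff activation reaches theta."""
    return 1 if a >= theta else 0

def tlu(x, w, theta):
    """Threshold logic unit: activation a = sum_i w_i * x_i, output y = f(a)."""
    a = sum(wi * xi for wi, xi in zip(w, x))
    return hard_limiter(a, theta)

print(tlu((1, 0), (0.5, 0.5), 0.3))  # activation 0.5 >= 0.3, so the unit fires: 1
```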

Threshold Function. Hard limiter: $y = 1$ if $a \ge \theta$, $y = 0$ otherwise, so the output jumps from 0 to 1 at the threshold $\theta$. Sigmoid: a smooth version, $y = \frac{1}{1 + e^{-(a-\theta)/\rho}}$, which equals 0.5 at $a = \theta$ and saturates toward 1.0 for large activation. [Figure: plots of output $y$ against activation $a$ for both functions.]

Threshold Function. Stochastic semi-linear unit: the output is interpreted as the probability of emitting a '1'. If the unit is observed over $N$ time slots and emits $N_1$ pulses, the pulse probability is estimated as $P(1) = N_1 / N$, with $0 \le P(1) \le 1$. The probability of firing when the activation is $a$ is the probability that the (random) threshold is less than $a$.
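A rough sketch of this interpretation, assuming (as on the sigmoid slide above) that the firing probability follows a sigmoid of the activation; the parameter `rho` and the function names are illustrative assumptions.

```python
import math
import random

def firing_probability(a, theta, rho=0.5):
    """Probability that a stochastic semi-linear unit fires given activation a.
    Modeled here as a sigmoid centered on the threshold theta (an assumption)."""
    return 1.0 / (1.0 + math.exp(-(a - theta) / rho))

def stochastic_output(a, theta, rho=0.5):
    """Emit a pulse ('1') with the firing probability, else '0'."""
    return 1 if random.random() < firing_probability(a, theta, rho) else 0

# Estimate P(1) empirically over N time slots, as on the slide: P(1) ~ N1 / N.
N = 10_000
N1 = sum(stochastic_output(a=0.4, theta=0.3) for _ in range(N))
print(N1 / N, "vs analytic", firing_probability(0.4, 0.3))
```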

Geometric Interpretation of a TLU. Input space and pattern classification: $n$ inputs define an $n$-dimensional input space, and each point in that space represents one input pattern of the input-output mapping. The TLU classifies patterns by a decision line (a plane, or hyperplane, in higher dimensions). [Figure: a two-input TLU with $w_1 = w_2 = 1$; inputs $x_1, x_2$ feed the activation, which is thresholded to produce the output.]

Linear Separation of Classes. The critical condition for classification occurs when the activation equals the threshold: $w_1 x_1 + w_2 x_2 = \theta$. For $w_1 = w_2 = 1$ and $\theta = 1.5$ this gives the decision line $x_2 = -x_1 + 1.5$, which crosses both axes at 1.5. [Figure: the decision line in the $(x_1, x_2)$ plane.]
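A quick check of which side of this decision line each binary input pattern falls on, a small sketch using the slide's weights and threshold; only $(1, 1)$ lands on the firing side, so this unit computes logical AND.

```python
# Decision line x1 + x2 = 1.5 (w1 = w2 = 1, theta = 1.5): patterns with
# activation above the threshold fall on the '1' side of the line.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    activation = 1 * x1 + 1 * x2
    side = 1 if activation >= 1.5 else 0
    print((x1, x2), "->", side)
```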

Vectors. A vector $v$ is a quantity with magnitude and direction; it can be written in polar form $(\|v\|, \theta)$ or in component form $(v_1, v_2, \dots, v_n)$, with magnitude $\|v\| = \sqrt{\sum_i v_i^2}$. [Figure: a vector $v$ in the plane with components $v_1, v_2$ along the $x_1, x_2$ axes.]

Comparing Vectors: the Inner Product. For $n$-dimensional vectors, $v \cdot w = \|v\|\,\|w\|\cos\phi = \sum_{i=1}^{n} v_i w_i$, where $\phi$ is the angle between them. The projection of $w$ along $v$ has length $w_v = \frac{v \cdot w}{\|v\|}$; likewise, the projection of $v$ along $w$ has length $v_w = \frac{v \cdot w}{\|w\|}$.
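The two formulas in a short sketch (function names are illustrative):

```python
import math

def inner(v, w):
    """Inner product: sum of componentwise products."""
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    """Vector magnitude ||v||."""
    return math.sqrt(inner(v, v))

def projection_length(v, w):
    """Length of the projection of v along w: (v . w) / ||w||."""
    return inner(v, w) / norm(w)

v, w = (3.0, 4.0), (1.0, 0.0)
print(inner(v, w))              # 3.0
print(projection_length(v, w))  # 3.0: the component of v along the x-axis
```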

Inner Product. The smaller the angle between two vectors, the larger their inner product. [Figure: three pairs of vectors $v, w$: roughly aligned ("cooperative") vectors with $v \cdot w > 0$, orthogonal vectors with $v \cdot w = 0$, and opposed vectors with $v \cdot w < 0$.]

Inner Product and TLUs. The activation is an inner product, $a = x \cdot w$, so the firing condition $x \cdot w \ge \theta$ can be read geometrically: dividing both sides by $\|w\|$, the unit fires when the projection of $x$ along $w$ satisfies $x_w = \frac{x \cdot w}{\|w\|} \ge \frac{\theta}{\|w\|}$ (activation is greater than the threshold), and stays silent when $x_w < \frac{\theta}{\|w\|}$ (activation is less than the threshold). [Figure: inputs $x$ on either side of the decision line, with their projections $x_w$ onto $w$.]

Training TLUs. Training means adjusting the weight vector and threshold until the desired classification is achieved. Augmented weight vector: treat the threshold as an extra weight $w_{n+1} = \theta$ attached to an input permanently clamped to $-1$, so the firing condition $x \cdot w \ge \theta$ becomes $\hat{x} \cdot \hat{w} \ge 0$ for the augmented vectors $\hat{w} = (w_1, \dots, w_n, \theta)$ and $\hat{x} = (x_1, \dots, x_n, -1)$.
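A minimal sketch of the augmentation trick; `augment` and `fires` are illustrative names.

```python
def augment(x):
    """Append the constant input -1, so the threshold becomes an ordinary weight."""
    return tuple(x) + (-1,)

def fires(w_aug, x):
    """Fire iff the augmented activation is non-negative: x.w - theta >= 0."""
    a = sum(wi * xi for wi, xi in zip(w_aug, augment(x)))
    return 1 if a >= 0 else 0

# (w1, w2, theta) = (1, 1, 1.5): the same AND unit as before, now threshold-free.
print([fires((1, 1, 1.5), x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```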

Pattern Space: 2D vs. 3D. [Figure: a decision line separating two classes in a 2-D input space, and the corresponding decision plane in a 3-D input space.]

Changing Vectors. Vector addition is componentwise: $u + v = (u_1 + v_1, \dots, u_n + v_n)$. Vector subtraction is addition of the negative vector: $u - v = u + (-v)$, where $-v = (-v_1, \dots, -v_n)$.

Adjusting the Weight Vector: Learning. The weight vector is orthogonal to the decision plane: on the plane $\hat{x} \cdot \hat{w} = 0$, so every vector lying in the plane is perpendicular to $\hat{w}$. In the augmented space the decision plane must pass through the origin, since $\hat{x} = 0$ always satisfies $\hat{x} \cdot \hat{w} = 0$. Learning therefore amounts to rotating the weight vector.

Training Set. A training set $\{(v, t)\}$ consists of input vectors $v$ and the target class $t$ of each input vector; learning from such pairs is called supervised training (supervised learning). A misclassification occurs when, for the given network, the output differs from the target value. The weight vector is rotated on each misclassification: misclassifying a 1 as 0 means the angle between $w$ and $v$ exceeds 90°, and we rotate $w$ toward $v$: $w' = w + \alpha v$. Misclassifying a 0 as 1 means the angle is less than 90°, and we rotate $w$ away from $v$: $w' = w - \alpha v$. Here $\alpha$ is the learning rate, $0 < \alpha < 1$.

Learning: Misclassification of 1 as 0. The activation is negative when it should have been positive, so rotate the weight vector toward the input vector: $w' = w + \alpha v$.

Learning: Misclassification of 0 as 1. The activation is positive when it should have been negative, so rotate the weight vector away from the input vector: $w' = w - \alpha v$.
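Both cases can be folded into one update, $w' = w + \alpha (t - y) v$, since $t - y$ is $+1$ when a 1 is misclassified as 0, $-1$ when a 0 is misclassified as 1, and $0$ when the output is correct. A one-function sketch:

```python
def update(w_aug, v_aug, t, y, alpha=0.25):
    """One update step: rotate w toward v (t=1, y=0) or away from v (t=0, y=1)."""
    return tuple(wi + alpha * (t - y) * vi for wi, vi in zip(w_aug, v_aug))
```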

Learning Rule. The training rule (learning rule), with learning rate $\alpha$, gives the training algorithm for a TLU known as the perceptron learning algorithm:

repeat
    for each training pair (v, t):
        evaluate the output y when v is input to the TLU
        if y ≠ t, form the new weight vector w' = w + α(t − y)v
    end for
until y = t for all training vectors
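A runnable sketch of this loop in Python, assuming the augmented representation from the earlier slide (threshold folded in as a weight on a constant $-1$ input) and firing when the augmented activation is non-negative; `train_tlu` is an illustrative name, and the epoch cap is a practical safeguard, not part of the algorithm.

```python
def train_tlu(patterns, w, alpha=0.25, max_epochs=100):
    """Perceptron learning: repeat passes over the data until every pattern
    is classified correctly (this terminates if the classes are linearly
    separable). `patterns` holds (augmented input, target) pairs."""
    for _ in range(max_epochs):
        errors = 0
        for v, t in patterns:
            a = sum(wi * vi for wi, vi in zip(w, v))
            y = 1 if a >= 0 else 0
            if y != t:
                w = tuple(wi + alpha * (t - y) * vi for wi, vi in zip(w, v))
                errors += 1
        if errors == 0:
            return w
    return w
```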

Example. Initial augmented weight vector (0, 0.4, 0.3); learn logical AND with learning rate $\alpha = 0.25$.
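Running the `train_tlu` sketch from the previous slide with these settings; the per-step numbers in the comments assume the unit fires when the augmented activation is $\ge 0$.

```python
AND = [((0, 0, -1), 0), ((0, 1, -1), 0), ((1, 0, -1), 0), ((1, 1, -1), 1)]
w = train_tlu(AND, w=(0, 0.4, 0.3), alpha=0.25)
# First epoch by hand: (0,1,-1) fires (a = 0.1) but t = 0, so
#   w <- w - 0.25*(0,1,-1) = (0, 0.15, 0.55);
# then (1,1,-1) is silent (a = -0.40) but t = 1, so
#   w <- w + 0.25*(1,1,-1) = (0.25, 0.40, 0.30).
# Training continues until all four AND patterns are classified correctly.
print(w)
```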

Perceptron

Example. Initial augmented weight vector (0, 0.4, 0.3); learn logical OR with learning rate $\alpha = 0.25$.
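The same `train_tlu` sketch applies; only the targets change:

```python
OR = [((0, 0, -1), 0), ((0, 1, -1), 1), ((1, 0, -1), 1), ((1, 1, -1), 1)]
print(train_tlu(OR, w=(0, 0.4, 0.3), alpha=0.25))
```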