CSE 415 -- (c) S. Tanimoto, 2001 Neural Networks

Outline:
The biological neuron
History of neural networks research
The Perceptron
Examples
Training algorithm
Fundamental training theorem
Two-level perceptrons

The Biological Neuron
The human brain contains approximately 10^11 neurons.
Activation process:
Inputs are transmitted electrochemically across the input synapses.
Input potentials are summed. If the potential reaches a threshold, a pulse or action potential moves down the axon. (The neuron has "fired.")
The pulse is distributed at the axonal arborization to the input synapses of other neurons.
After firing, there is a refractory period of inactivity.

History of Neural Networks Research
1943: McCulloch & Pitts model of the neuron: n_i(t+1) = Θ(Σ_j w_ij n_j(t) − θ_i), where Θ(x) = 1 if x ≥ 0; 0 otherwise.
1962: Frank Rosenblatt's book gives a training algorithm for finding the weights w_ij from examples.
1969: Marvin Minsky and Seymour Papert publish Perceptrons and prove that 1-layer perceptrons are incapable of computing image connectedness.
1982: Associative content-addressable memory (Hopfield).
1974-89: Backpropagation: Werbos 1974, Parker 1985, Rumelhart, Hinton & Williams 1986.

The Perceptron
[Figure: inputs x_1 ... x_n with weights w_1 ... w_n feed a summation unit Σ followed by a threshold θ, producing output y.]
y = 1 if Σ_i w_i x_i ≥ θ; 0 otherwise.
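As a concrete illustration, here is a minimal sketch of such a unit in Python (the function and argument names are illustrative, not from the slides):

```python
def perceptron(x, w, theta):
    """Threshold unit: output 1 if the weighted sum of the inputs reaches theta, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x))   # summation of weighted inputs
    return 1 if s >= theta else 0              # thresholding against theta
```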

Perceptron Examples: Boolean AND and OR
AND: y = x_1 ∧ x_2 ∧ ... ∧ x_k, with every weight w_i = 1 and threshold θ = k − 1/2.
OR: y = x_1 ∨ x_2 ∨ ... ∨ x_k, with every weight w_i = 1 and threshold θ = 1/2.
Inputs x_i ∈ {0, 1}.

Perceptron Examples: Boolean NOT
NOT: y = ¬x, with weight −1 and threshold θ = −1/2.
Input x ∈ {0, 1}.
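These three constructions can be checked directly with the perceptron function sketched above (a small illustration; the weights and thresholds are the ones given on the slides, shown here for k = 2 inputs):

```python
# Uses perceptron() from the sketch above.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND of k = 2 inputs: all weights 1, theta = k - 1/2
assert all(perceptron(x, (1, 1), 2 - 0.5) == (x[0] and x[1]) for x in inputs)

# OR of k = 2 inputs: all weights 1, theta = 1/2
assert all(perceptron(x, (1, 1), 0.5) == (x[0] or x[1]) for x in inputs)

# NOT of a single input: weight -1, theta = -1/2
assert all(perceptron((x,), (-1,), -0.5) == 1 - x for x in (0, 1))
```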

Perceptron Example: Template Matching
Inputs x_i ∈ {−1, 1}, arranged as a 5 x 5 image; weights w_1 through w_25 equal the template values; threshold θ = 25 − ε.
Template for the letter A:
-1 -1  1 -1 -1
-1  1 -1  1 -1
 1  1  1  1  1
 1 -1 -1 -1  1
 1 -1 -1 -1  1
Recognizes the letter A provided the exact pattern is present.
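A quick numeric check of this matcher (a sketch; the flattened template below is the 5 x 5 A pattern above, with ε = 0.5 chosen only for illustration):

```python
A = [-1, -1,  1, -1, -1,
     -1,  1, -1,  1, -1,
      1,  1,  1,  1,  1,
      1, -1, -1, -1,  1,
      1, -1, -1, -1,  1]        # 5x5 template for the letter A, entries in {-1, 1}

w, eps = A, 0.5                 # weights w_1..w_25 equal the template; theta = 25 - eps

def matches(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 25 - eps else 0

print(matches(A))               # exact match: weighted sum is 25, so output 1

smudged = A[:]
smudged[0] = -A[0]              # flip one pixel: the sum drops to 23, below threshold
print(matches(smudged))         # output 0
```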

Perceptron Training Sets
Let X = X+ ∪ X− be the set of training examples.
S_X = X_1, X_2, ..., X_k, ... is a training sequence on X provided that: (1) each X_k is a member of X, and (2) each element of X occurs infinitely often in S_X.
An element e occurs infinitely often in a sequence z = z_1, z_2, ... provided that for every positive integer i there is some j ≥ i with z_j = e; that is, e occurs somewhere in z_i, z_{i+1}, ....

Perceptron Training Algorithm
Let X = X+ ∪ X− be the set of training examples, and let S_X = X_1, X_2, ..., X_k, ... be a training sequence on X.
Let w_k be the weight vector at step k. Choose w_0 arbitrarily; for example, w_0 = (0, 0, ..., 0).
At each step k, k = 0, 1, 2, ..., classify X_k using w_k.
If X_k is correctly classified, take w_{k+1} = w_k.
If X_k is in X− but misclassified, take w_{k+1} = w_k − c_k X_k.
If X_k is in X+ but misclassified, take w_{k+1} = w_k + c_k X_k.
The sequence c_k should be chosen according to the data. Overly large constant values can lead to oscillation during training, while values that are too small will increase training time. However, c_k = c_0/k (for k ≥ 1) will work for any positive c_0.
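A sketch of this procedure in Python. It assumes the usual augmented-input convention (a constant 1 is appended to each example so the threshold is learned as one more weight), which the slide does not spell out; the cycling order, step count, and c_0 are illustrative choices:

```python
def train_perceptron(pos, neg, steps=1000, c0=1.0):
    """Perceptron training on X = X+ U X-; examples are tuples of numbers."""
    examples = [(x + (1,), +1) for x in pos] + [(x + (1,), -1) for x in neg]
    w = [0.0] * len(examples[0][0])              # w_0 = (0, 0, ..., 0)
    for k in range(1, steps + 1):
        x, label = examples[k % len(examples)]   # cycling: every example occurs infinitely often
        s = sum(wi * xi for wi, xi in zip(w, x))
        predicted = +1 if s >= 0 else -1
        if predicted != label:                   # misclassified: move w by +/- c_k * X_k
            c = c0 / k                           # c_k = c_0 / k, as on the slide
            w = [wi + c * label * xi for wi, xi in zip(w, x)]
    return w

# Example: Boolean OR is linearly separable, so training converges.
w = train_perceptron(pos=[(0, 1), (1, 0), (1, 1)], neg=[(0, 0)])
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = sum(wi * xi for wi, xi in zip(w, x + (1,)))
    print(x, 1 if s >= 0 else 0)                 # prints the OR truth table
```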

Perceptron Limitations
Perceptron training always converges if the training sets X+ and X− are linearly separable.
The Boolean function XOR (exclusive or) is not linearly separable: its positive and negative instances cannot be separated by a line or hyperplane. It cannot be computed by a single-layer perceptron, and it cannot be learned by a single-layer perceptron.
X+ = { (0, 1), (1, 0) }
X− = { (0, 0), (1, 1) }
X = X+ ∪ X−
[Figure: the four points plotted in the (x_1, x_2) plane; no single line separates X+ from X−.]

Two-Layer Perceptrons
XOR can be computed by a two-layer perceptron.
[Figure: a two-layer network computing y = XOR(x_1, x_2).]
One hidden unit receives x_1 with weight +1 and x_2 with weight −1, with threshold θ = 0.5.
The other hidden unit receives x_1 with weight −1 and x_2 with weight +1, with threshold θ = 0.5.
The output unit combines the two hidden outputs with weights +1 and +1 and threshold θ = 0.5, so y = XOR(x_1, x_2).
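A small check of this construction (a sketch; the helper names unit, h1, and h2 are mine, while the weights and thresholds are those shown in the figure):

```python
def unit(x, w, theta):
    """Threshold unit: 1 if the weighted input sum reaches theta, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def xor_net(x1, x2):
    h1 = unit((x1, x2), (+1, -1), 0.5)    # fires only on input (1, 0)
    h2 = unit((x1, x2), (-1, +1), 0.5)    # fires only on input (0, 1)
    return unit((h1, h2), (+1, +1), 0.5)  # OR of the two hidden units

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))    # prints the XOR truth table
```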