McCulloch–Pitts Neuronal Model

Presentation transcript:

McCulloch–Pitts Neuronal Model: Concept Map. Pattern recognition rests on a feature representation x, a decision function d(x), and a geometric interpretation of d(x) as a decision boundary (surface). Conventional approaches (the statistical formulation with the Bayes-optimal decision, and the syntactic approach) are contrasted with neural networks. The McCulloch–Pitts neuronal model is threshold logic; the sigmoid neuronal model appears in Chapter 3. Single-layer and multilayer networks connect to digital logic and can represent any region in space, again with a geometric interpretation.

Chapter 1. Pattern Recognition and Neural Networks. Two objectives: class recognition (image -> apple) and attribute recognition (image -> color, shape, taste, etc.; e.g., the color attribute of an apple is red). (1) Approaches: a. template matching, b. statistical, c. syntactic, d. neural network. [Figure: posteriors P(ω1|x) and P(ω2|x) plotted over x for Class 1 and Class 2, with the Bayes-optimal decision boundary.] Ex. x = temperature, ω1: healthy, ω2: sick; x = (height, weight), ω1: female, ω2: male.

(2) Procedure – Train and Generalize. The processing chain runs: raw data x -> preprocessing (eliminate bad data/outliers, filter out noise) -> feature extraction (for data reduction and better class separation) -> discriminant d(x) -> decision making -> class. Training data = labelled input/output data = { x | d(x) is known }.

(3) Decision (Discriminant) Function. a. 2-class weather forecast problem with n = 2, M = 2, where x1 = temperature and x2 = pressure. The decision boundary { x : d(x) = 0 } is (n-1)-dimensional: a line, plane, or hyperplane. The weight vector (components w1, w2, w3, including the bias term) may be used unnormalized or normalized.

In general, for the hyperplane w^T x = D, the normalized weight vector w is a unit normal to the hyperplane; points with w^T x > D lie on one side and points with w^T x < D on the other.
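A minimal numeric sketch of this side test in Python (the weight vector and offset D below are made-up values, not from the slides): normalize w so it is a unit normal, then check the sign of w^T x - D.

```python
import numpy as np

# Illustrative hyperplane w^T x = D (values assumed for the example).
w = np.array([3.0, 4.0])
w = w / np.linalg.norm(w)        # normalize: w becomes a unit normal
D = 2.0                          # offset of the hyperplane from the origin

def side(x):
    """+1 if w^T x > D, -1 if w^T x < D, 0 on the hyperplane."""
    return int(np.sign(w @ x - D))

print(side(np.array([5.0, 5.0])))   # +1: the point lies on the w^T x > D side
print(side(np.array([0.0, 0.0])))   # -1: the origin lies on the w^T x < D side
```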

b. Case of M = 3, n = 2 – requires 3 discriminants, one per pair of classes (pairwise separable), with the + and - sides of each discriminant separating that pair.

A linear classifier machine evaluates the linear discriminants d_i(x) for all classes and selects the class whose discriminant is the maximum.
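A small Python sketch of such a linear machine, with assumed weight vectors and biases: one linear discriminant per class, and the argmax picks the class.

```python
import numpy as np

# Assumed discriminant parameters: d_i(x) = w_i^T x + b_i for three classes.
W = np.array([[ 1.0,  0.0],      # w_1
              [ 0.0,  1.0],      # w_2
              [-1.0, -1.0]])     # w_3
b = np.array([0.0, 0.0, 1.0])    # biases b_i

def classify(x):
    d = W @ x + b                # evaluate every discriminant d_i(x)
    return int(np.argmax(d))     # assign x to the class with the maximum d_i(x)

print(classify(np.array([2.0, 0.5])))   # -> 0 (class 1 wins for this point)
```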

[Figure: decision regions of the three classes with weight vectors w1, w2, w3 and pairwise boundaries d_12 = 0 and d_23 = 0; IR marks the indecision region.]

2. PR – Neural Network Representation. (1) Models of a Neuron. A. McCulloch–Pitts Neuron – threshold logic with fixed weights. Inputs x1, x2, ..., xp with weights w1, w2, ..., wp, plus a constant input 1 with weight -θ (the bias), feed an adaptive linear combiner (Adaline) u = Σ_i w_i x_i - θ, followed by a nonlinear activation function that produces the output y = φ(u).
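A minimal Python sketch of this neuron model (the weights and threshold in the call are illustrative, not from the slide): a weighted sum minus the threshold θ, followed by a hard-limiting activation.

```python
import numpy as np

def mp_neuron(x, w, theta):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum reaches theta."""
    u = np.dot(w, x) - theta      # adaptive linear combiner with bias -theta
    return 1 if u >= 0 else 0     # threshold-logic (hard-limiting) activation

print(mp_neuron([1, 0, 1], w=[0.5, 0.5, 0.5], theta=0.8))   # 1.0 - 0.8 >= 0 -> fires
```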

B. Generalized Model – the same unit acts as a half-plane detector (weights ±w1, ±w2 and bias -θ), but the activation φ may vary. One-sided (binary) choices: the hard limiter (threshold logic), the one-sided ramp (piecewise linear), and the logistic sigmoid. Two-sided (bipolar) choices: the signum (threshold logic), the two-sided ramp, and tanh.
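A short NumPy sketch of these activation choices; the unit slopes and saturation limits of the ramps are assumed for illustration.

```python
import numpy as np

hard_limiter = lambda u: np.where(u >= 0, 1, 0)    # one-sided, binary threshold logic
one_sided_ramp = lambda u: np.clip(u, 0, 1)        # one-sided piecewise linear
logistic = lambda u: 1.0 / (1.0 + np.exp(-u))      # one-sided sigmoid
signum = lambda u: np.where(u >= 0, 1, -1)         # two-sided, bipolar threshold logic
two_sided_ramp = lambda u: np.clip(u, -1, 1)       # two-sided piecewise linear
bipolar_sigmoid = np.tanh                          # two-sided smooth activation

u = np.linspace(-2.0, 2.0, 5)
print(hard_limiter(u), signum(u))
print(np.round(logistic(u), 2), np.round(bipolar_sigmoid(u), 2))
```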

(2) Boolean Function Representation Examples, with x1, x2 binary (0, 1): AND uses weights 1, 1 and threshold 1.5; OR uses weights 1, 1 and threshold 0.5; NAND uses weights -1, -1 and threshold -1.5; NOR uses weights -1, -1 and threshold -0.5; the INVERTER uses weight -1 and threshold -0.5; a MEMORY cell combines an excitatory input (weight 1) and an inhibitory input (weight -1) with threshold 0.5. Cf. if x1, x2 are bipolar (-1, 1), different biases are needed for the gates above.
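A sketch that wires these gate weights and thresholds into a threshold unit (redefined here so the block is self-contained) and prints the resulting truth tables.

```python
def mp_neuron(x, w, theta):
    """Fire (1) iff the weighted sum of inputs reaches the threshold theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

AND  = lambda x1, x2: mp_neuron([x1, x2], [ 1,  1],  1.5)
OR   = lambda x1, x2: mp_neuron([x1, x2], [ 1,  1],  0.5)
NAND = lambda x1, x2: mp_neuron([x1, x2], [-1, -1], -1.5)
NOR  = lambda x1, x2: mp_neuron([x1, x2], [-1, -1], -0.5)
INV  = lambda x:      mp_neuron([x],      [-1],     -0.5)

print("x1 x2 | AND OR NAND NOR")
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "|", AND(x1, x2), OR(x1, x2), NAND(x1, x2), NOR(x1, x2))
print("INV:", INV(0), INV(1))   # 1 0
```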

(3) Geometrical Interpretation. A. Single layer with a single output: the unit fires within a half plane, for a 2-class case. B. Single layer with multiple outputs, for a multi-class case: with weight vectors w1, w2, w3, the target outputs are the one-hot codes (1, 0, 0), (0, 1, 0), and (0, 0, 1) for the three classes.

C. Multilayer with Single Output – XOR. XOR is linearly nonseparable but nonlinearly separable; bipolar inputs are assumed here, and other weights are needed for the binary representation. [Figure: the ON and OFF regions of XOR in the (x1, x2) plane; one realization adds the product x1 x2 as an extra input.]

a. Successive Transforms: XOR as a two-layer network. The hidden layer computes z1 = φ(x1 + x2 - 0.5) (OR) and z2 = φ(-x1 - x2 + 1.5) (NAND); the output unit computes φ(z1 + z2 - 1.5) (AND), so that XOR = (x1 OR x2) AND (x1 NAND x2). Each layer transforms the input into a new space in which the classes become linearly separable.
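A Python sketch of this two-layer XOR construction, using the OR, NAND, and AND weights and thresholds from the Boolean-function slide.

```python
def unit(x, w, theta):
    """Threshold unit: fire (1) iff the weighted input sum reaches theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def xor(x1, x2):
    z_or   = unit([x1, x2], [ 1,  1],  0.5)    # hidden unit 1: x1 OR x2
    z_nand = unit([x1, x2], [-1, -1], -1.5)    # hidden unit 2: x1 NAND x2
    return unit([z_or, z_nand], [1, 1], 1.5)   # output unit: AND of the two

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))       # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```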

b. XOR = OR - (1,1) AND: a hidden AND unit (weights 1, 1, threshold 1.5) feeds the output unit with weight -2, while x1 and x2 feed it directly with weights 1, 1; the output threshold is 0.5. c. Parity: 1-bit parity is the input itself and 2-bit parity is XOR; an n-bit parity network uses hidden units with thresholds 0.5, 1.5, ..., n - 0.5 and output weights alternating +1, -1, ..., (-1)^{n+1}.
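A Python sketch of constructions b and c; the parity network follows the thresholds and alternating weights described above, and the sample inputs in the prints are illustrative.

```python
def unit(x, w, theta):
    """Threshold unit: fire (1) iff the weighted input sum reaches theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def xor_via_and(x1, x2):
    a = unit([x1, x2], [1, 1], 1.5)             # hidden AND unit
    return unit([x1, x2, a], [1, 1, -2], 0.5)   # output: OR minus 2 * AND

def parity(xs):
    n, s = len(xs), sum(xs)
    h = [1 if s >= k + 0.5 else 0 for k in range(n)]   # thresholds 0.5, 1.5, ..., n-0.5
    w = [(-1) ** k for k in range(n)]                  # +1, -1, ..., (-1)^(n+1)
    return unit(h, w, 0.5)

print([xor_via_and(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
print(parity([1, 0, 1]), parity([1, 1, 1]))                              # 0 1
```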

D. Multilayer with Single Output – Analog Inputs (1/2). [Figure: half-plane units (1, 2, 3) are combined by AND and OR units to form regions in the input plane.]

E. Multilayer with Single Output – Analog Inputs (2/2). [Figure: six half-plane units (1-6) are combined by AND units into convex regions, which an OR unit then merges into the overall decision region.]
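A Python sketch of this half-plane / AND / OR layering with made-up half-planes (a triangle and its mirror image); only the composition idea comes from the slide.

```python
import numpy as np

def half_plane(x, w, theta):
    """First-layer unit: fire (1) iff x lies in the half plane w^T x >= theta."""
    return 1 if np.dot(w, x) >= theta else 0

def inside_triangle(x):
    # AND of three half planes -> a convex region (an assumed triangle)
    h1 = half_plane(x, [ 1,  0],  0.0)    # x1 >= 0
    h2 = half_plane(x, [ 0,  1],  0.0)    # x2 >= 0
    h3 = half_plane(x, [-1, -1], -1.0)    # x1 + x2 <= 1
    return 1 if h1 + h2 + h3 >= 2.5 else 0    # AND realized by threshold 2.5

def in_region(x):
    # OR of two convex regions -> a nonconvex overall decision region
    x = np.asarray(x, dtype=float)
    return 1 if inside_triangle(x) + inside_triangle(-x) >= 0.5 else 0

print(in_region([0.2, 0.3]), in_region([-0.2, -0.3]), in_region([2.0, 2.0]))  # 1 1 0
```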

F. [Figure only; no text on this slide.]

MLP decision boundaries (illustrated on XOR, intertwined classes, and a general case, with classes A and B): a 1-layer network realizes half planes, a 2-layer network realizes convex regions, and a 3-layer network realizes arbitrary regions.

Exercise: transform the network from ① to ② and see how the weights are changed. [Figure: the two equivalent network diagrams ① and ② with units 1, 2, 3.]

Questions from Students - 05: How do we learn the weights? Is there an analytic and systematic methodology for finding them? Why do we use polygons to represent the active regions [single output]? Why should d_i(x) be the maximum for Class i?