Lecture 2. Basic Neurons. To model neurons we have to idealize them. Idealization removes complicated details that are not essential for understanding the main principles. It allows us to apply mathematics and to make analogies to other familiar systems. Once we understand the basic principles, it's easy to add complexity to make the model more faithful.

Linear neurons. These are the basic building blocks for all other neural networks. The output is y = b + Σ_i x_i w_i, where y is the output, b is the bias, x_i is the i-th input, and w_i is the weight on the i-th input.
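To make the arithmetic concrete, here is a minimal Python sketch of a single linear neuron (the function and variable names are chosen here for illustration, not taken from the lecture):

```python
def linear_neuron(inputs, weights, bias):
    """Compute y = b + sum_i x_i * w_i for one linear neuron."""
    return bias + sum(x * w for x, w in zip(inputs, weights))

# Example: two inputs with weights 0.5 and -1.0 and bias 0.1
print(linear_neuron([2.0, 3.0], [0.5, -1.0], 0.1))  # 0.1 + 1.0 - 3.0 = -1.9
```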

Binary threshold neuron, McCulloch-Pitts (1943). First compute a weighted sum of the inputs, then send out a fixed-size spike of activity if the weighted sum exceeds a threshold. McCulloch and Pitts thought that each spike is like the truth value of a proposition, and that each neuron combines truth values to compute the truth value of another proposition. This view influenced von Neumann.

There are two equivalent ways to describe a binary threshold neuron. Either compute z = Σ_i x_i w_i and output y = 1 if z ≥ θ (and y = 0 otherwise), or fold the threshold into a bias b = −θ, compute z = b + Σ_i x_i w_i, and output y = 1 if z ≥ 0 (and y = 0 otherwise).
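The equivalence can be checked with a small sketch; both functions below are hypothetical helpers written for this note, implementing the two descriptions above:

```python
def threshold_neuron(inputs, weights, theta):
    """First description: fire iff the weighted sum reaches the threshold."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1 if z >= theta else 0

def biased_threshold_neuron(inputs, weights, bias):
    """Second description: add a bias b = -theta and compare against zero."""
    z = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1 if z >= 0 else 0

# theta = 1.5 corresponds to bias = -1.5; both give the same output
print(threshold_neuron([1, 1], [1.0, 1.0], 1.5))          # 1
print(biased_threshold_neuron([1, 1], [1.0, 1.0], -1.5))  # 1
```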

Rectified Linear Units (ReLU). They compute a linear weighted sum of their inputs, and the output is a non-linear function of that total input. This is currently the most widely used type of neuron. Written as a function: f(x) = max{0, x}. A smooth approximation of the ReLU is the "softplus" function f(x) = ln(1 + e^x).
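A short sketch of both activations, using only the standard library (the function names are illustrative):

```python
import math

def relu(x):
    """Rectified linear unit: f(x) = max(0, x)."""
    return max(0.0, x)

def softplus(x):
    """Smooth approximation of ReLU: f(x) = ln(1 + e^x)."""
    return math.log1p(math.exp(x))

for x in (-2.0, 0.0, 2.0):
    # softplus stays close to ReLU but is smooth at 0
    print(x, relu(x), round(softplus(x), 3))
```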

Sigmoid neurons. Typically they use the logistic function y = 1 / (1 + e^(−z)), where z is the total input. They have nice derivatives, which makes learning easy, but they cause vanishing gradients during backpropagation.
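The "nice derivative" is dy/dz = y(1 − y), i.e. it can be written in terms of the neuron's own output; a minimal sketch:

```python
import math

def sigmoid(z):
    """Logistic function: y = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    """dy/dz = y * (1 - y), expressed using the output itself."""
    y = sigmoid(z)
    return y * (1.0 - y)

print(sigmoid(0.0), sigmoid_derivative(0.0))  # 0.5 0.25
```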

Stochastic binary neurons
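The body of this slide is not in the transcript; in the standard formulation (as in Hinton's course) a stochastic binary neuron computes the same logistic as above but treats it as the probability of emitting a 1. A sketch under that assumption:

```python
import math
import random

def stochastic_binary_neuron(inputs, weights, bias):
    """Sample a 0/1 output with probability given by the logistic of the total input."""
    z = bias + sum(x * w for x, w in zip(inputs, weights))
    p = 1.0 / (1.0 + math.exp(-z))   # probability of firing
    return 1 if random.random() < p else 0

# With z = 2, p is about 0.88, so roughly 880 of 1000 calls return 1
print(sum(stochastic_binary_neuron([1.0], [2.0], 0.0) for _ in range(1000)))
```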

Softmax function (normalized exponential function). softmax(z)_i = e^(z_i) / Σ_j e^(z_j). If we take an input of [1, 2, 3, 4, 1, 2, 3], its softmax is [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]. The softmax function highlights the largest values and suppresses the others. Compared to the "max" function, softmax is differentiable.
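A direct sketch reproducing the example above (the maximum is subtracted before exponentiating for numerical stability, a standard trick not mentioned on the slide):

```python
import math

def softmax(zs):
    """softmax(z)_i = e^(z_i) / sum_j e^(z_j), computed with a max-shift for stability."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

print([round(p, 3) for p in softmax([1, 2, 3, 4, 1, 2, 3])])
# [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]
```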