Spiking Neuron Networks


Spiking Neuron Networks: Neural Computation

Sources:
- “Handbook of Natural Computing,” eds. Grzegorz Rozenberg, Thomas Bäck, and Joost N. Kok, Springer, 2014.
- “Deep Learning Tutorial,” Yann LeCun and Marc'Aurelio Ranzato, ICML, Atlanta, 2013.

Spiking Neuron Networks (SNNs) = Pulse-Coupled Neural Networks (PCNNs):
- Inspired by natural computation in the brain
- Accurate modeling of synaptic interactions between neurons, taking the time of spike firing into account
- Fast adaptation and exponential capacity to memorize

Open questions:
- How can we design efficient learning rules that take advantage of this computational model?
- Can we already show in theory, for simple neuron models, whether SNNs can do more than classical computational models?
- Do we need very accurate neuron models to improve learning?
- What design principles (other than the neuron model) are crucial for efficient learning?
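
As a concrete illustration of spike-based computation (a minimal sketch of my own, not from the handbook): a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models, integrates input current and emits a spike whenever its membrane potential crosses a threshold, so the timing of its output spikes carries the information. All parameter values here are illustrative choices.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron; returns spike times in seconds.

    Parameters are illustrative choices, not taken from the slides.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of tau * dv/dt = -(v - v_rest) + i_in
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane after the spike
    return spikes

# A constant supra-threshold drive yields regular spiking.
spikes = lif_spike_times(np.full(1000, 1.5))
print(len(spikes), "spikes; first at", spikes[0], "s")
```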

Neocortex: “cortex” is Latin for bark; it lies like bark over the rest of the brain, with a surface area of about 2000 cm^2. At the back is the occipital area, important for visual processing (the latter takes up about 40% of the brain), which gives very high visual resolution (and a capability for associative, and therefore creative, thinking?). The frontal area is important for short-term working memory and for the planning and integration of thoughts.

Cerebellum: contains about 50% of all neurons; involved in precise timing functions.
Hippocampus: assumed to work as an association area that processes and associates information from different sources (after its removal, no new memories are stored).

- Glial cells provide support to neurons: they take up spilt-over neurotransmitters, or provide myelin sheaths around axons.
- Neurons: about 10^11 in our brains.
- Neurons receive input through synapses on their dendrites; a dendritic tree often receives more than 10,000 synapses.
- Neurons communicate using spikes, i.e. brief (~1 ms) excursions of the neuron's membrane voltage, generated at the axon hillock, from where they propagate along the axon to the (up to ~10,000) axon terminals.
- Few hard rules: some neurons release both excitatory and inhibitory neurotransmitters, some have no axons, not all neurons spike, etc.
- Synaptic plasticity: adjustment, formation, and removal of synapses.

Rate vs. spike coding: Poisson-like rate coding cannot explain the rapid information transmission seen in sensory processing. For N neurons:
- Counting spikes → log2(N+1) bits
- Binary code → O(N) bits
- Timing code → the alphabet is not binary but {0, 1, ..., T-1}: O(N log T) bits
- Rank-order coding → the order of spikes matters; the time axis effectively becomes part of the code length: log2(N!) = O(N log N) bits
- etc.
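
These capacities follow from simple counting arguments; the sketch below (my own illustration, with example values N = 100 and T = 10) makes the scaling concrete:

```python
import math

N, T = 100, 10   # example values, chosen only for illustration

count_code  = math.log2(N + 1)              # spike count: N+1 distinguishable outcomes
binary_code = N                             # spike / no spike per neuron: 2^N patterns
timing_code = N * math.log2(T)              # each spike falls in one of T time bins
rank_code   = math.log2(math.factorial(N))  # orderings of N spikes: log2(N!)

for name, bits in [("count", count_code), ("binary", binary_code),
                   ("timing", timing_code), ("rank order", rank_code)]:
    print(f"{name:>10}: {bits:8.1f} bits")
```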

1 mm^2 of cortical surface contains about 10^6 cells; the fibers connecting the whole cortex total about 10^6 km in length.

Some neuron models: [comparison figure, not included in the transcript]

The Izhikevich model shown above is able to accurately predict all 20 firing behaviors, just as the far more complex Hodgkin-Huxley (HH) model does:
- HH: ~1200 floating-point operations per 1 ms of simulation
- Izhikevich: ~13 floating-point operations per 1 ms of simulation
- Other models lose out on predictive power.
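
For reference, a minimal simulation sketch of the Izhikevich (2003) model, whose two coupled equations plus a reset rule are where the roughly 13 floating-point operations per 1 ms step come from; the default parameters give the regular-spiking regime:

```python
def izhikevich(I=10.0, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) neuron; defaults give regular spiking.

    v' = 0.04 v^2 + 5 v + 140 - u + I ;  u' = a (b v - u)
    spike when v >= 30 mV, then v <- c, u <- u + d.
    """
    v, u = c, b * c
    spikes = []
    for t in range(steps):                 # 1 ms per step
        for _ in range(2):                 # two 0.5 ms substeps for stability
            v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:                      # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

print(len(izhikevich()), "spikes in 1 s of simulated time")
```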

Supervised learning: teach the network explicitly to perform a task (e.g., with a gradient descent algorithm).
Unsupervised learning: the network learns interesting features “on its own”.
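
As a toy illustration of the supervised case (my own minimal sketch, not from the chapter): gradient descent on a single sigmoidal unit, nudging the weights to reduce the cross-entropy error on labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # teacher-provided labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoidal unit
    grad = p - y                               # d(cross-entropy)/d(pre-activation)
    w -= lr * (X.T @ grad) / len(y)            # explicit gradient descent step
    b -= lr * grad.mean()

print("training accuracy:", ((p > 0.5) == y).mean())
```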

Deep learning from 2006 onwards: it outperforms SVMs!

Reservoir computing, SVMs, and the other approaches presented in the chapter seem to be overshadowed by deep learning.

Restricted Boltzmann Machines (Smolensky, 1986, called them “harmoniums”):
- We restrict the connectivity to make learning easier.
- Only one layer of hidden units (we will deal with more layers later).
- No connections between hidden units.
- In an RBM, the hidden units are conditionally independent given the visible states, so we can quickly get an unbiased sample from the posterior distribution given a data vector. This is a big advantage over directed belief nets.
[Figure: bipartite graph with a layer of hidden units j above a layer of visible units i]
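
This conditional independence is what makes inference cheap: given a visible vector v, each hidden unit's activation probability is an independent sigmoid of its total input, so one matrix product yields an unbiased posterior sample. A minimal sketch (random weights and toy sizes, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # weight between v_i and h_j
b_h = np.zeros(n_hidden)                               # hidden biases

def sample_hidden(v):
    """p(h_j = 1 | v) = sigmoid(b_j + sum_i v_i W_ij).

    With no hidden-hidden connections, the h_j are conditionally
    independent given v, so all units can be sampled in one shot.
    """
    p = 1.0 / (1.0 + np.exp(-(b_h + v @ W)))
    h = (rng.random(n_hidden) < p).astype(float)
    return h, p

v = np.array([1, 0, 1, 1, 0, 1], dtype=float)  # an example data vector
h, p = sample_hidden(v)
print("p(h=1|v) =", p.round(3), " sample:", h)
```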

[Figures: a Deep Belief Network and a Restricted Boltzmann Machine]

Principles:
- Neuron model: sigmoidal seems good enough.
- Topology: a flat 2D layer with a small vertical (deep) third dimension, as in our brains.
- Neurons should be capable of adjusting their weighting factors (a form of plasticity).