Chapter 7 Supervised Hebbian Learning.

Outline: Linear Associator, The Hebb Rule, Pseudoinverse Rule, Application

Linear Associator. Hebb's learning law can be used with many different network architectures; the linear associator is chosen here so that we can concentrate on the learning law itself rather than on the architecture. The linear associator computes its output as a linear function of the input, a = Wp (element-wise, a_i = sum_j w_ij p_j), so a small change in the input produces a corresponding change in the output. (Questions raised on the slide: what other network architectures could be used? What does it mean for such a network to learn?)
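A minimal NumPy sketch of the linear associator's forward pass; the weight values and input vector below are made-up numbers for illustration, not the chapter's example.

```python
import numpy as np

# Linear associator: the output is a linear function of the input, a = W p.
W = np.array([[1.0, 0.0, -1.0],
              [0.5, 2.0,  0.0]])   # 2 outputs (S), 3 inputs (R); illustrative values
p = np.array([1.0, -1.0, 0.5])     # input vector

a = W @ p                          # a_i = sum_j w_ij * p_j
print(a)                           # [ 0.5 -1.5]
```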

Hebb's Postulate: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (D. O. Hebb, 1949.) In other words, if neuron A responds to a stimulus and neuron B, which A helps to drive, responds as well, then the connection (weight) between A and B is strengthened. Hebb's law states that when neuron i is close enough to excite neuron j and repeatedly does so, the synaptic connection between the two neurons is reinforced, and neuron j becomes especially sensitive to stimulation coming from neuron i.

Hebb Rule (1/2). The training set consists of input vectors and their associated outputs. Here p_jq is the j-th element of the q-th input vector (the presynaptic signal), and a_iq is the i-th element of the network output for the q-th input (the postsynaptic signal); alpha is the learning rate, a positive constant. The Hebb rule says that when p_j produces a positive a_i, the weight w_ij is increased: the change w_new - w_old is proportional to the product of the postsynaptic signal a and the presynaptic signal p.
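Written out in the notation above, the Hebb rule and its simplified form are:

```latex
% General Hebb rule: pre- and postsynaptic signals pass through functions f and g
w_{ij}^{new} = w_{ij}^{old} + \alpha \, f_i(a_{iq}) \, g_j(p_{jq})

% Simplified form: f and g are taken to be identity functions
w_{ij}^{new} = w_{ij}^{old} + \alpha \, a_{iq} \, p_{jq}
```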

Hebb Rule (2/2). Why is it called the supervised form? In the simplified form, f and g are identity functions for all i, j (in mathematics, the identity function is the function that does nothing to its argument: it always returns the same value it was given, f(x) = x). From the simplified rule one can see that a and p must have the same sign for the weight to be strengthened; opposite signs weaken w. The simplified form is unsupervised, since there is no target output. Because this chapter is about supervised learning, the actual output is replaced by the target output and alpha is set to 1, giving the supervised Hebb rule w_ij^new = w_ij^old + t_iq p_jq, or in matrix form W_new = W_old + t_q p_q^T.
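A minimal NumPy sketch of the supervised Hebb rule, applying W_new = W_old + t_q p_q^T once per training pair; the prototype vectors and targets are assumed for illustration, not the chapter's example.

```python
import numpy as np

# Training set of (input, target) pairs; values are illustrative only.
P = [np.array([0.5, -0.5,  0.5, -0.5]),
     np.array([0.5,  0.5, -0.5, -0.5])]
T = [np.array([1.0, -1.0]),
     np.array([1.0,  1.0])]

S, R = len(T[0]), len(P[0])
W = np.zeros((S, R))               # weights start at zero

# Supervised Hebb rule: W_new = W_old + t_q p_q^T  (alpha = 1)
for p_q, t_q in zip(P, T):
    W += np.outer(t_q, p_q)

print(W @ P[0])                    # reproduces T[0] because these inputs are orthonormal
```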

Batch Operation. Assuming the weight matrix starts at zero, after all Q training pairs have been presented the weights are W = t_1 p_1^T + t_2 p_2^T + ... + t_Q p_Q^T = T P^T, where T = [t_1 t_2 ... t_Q] and P = [p_1 p_2 ... p_Q].
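In matrix form the whole batch collapses to a single product. A minimal NumPy check, reusing the same illustrative prototypes as above (assumed values, not the chapter's data):

```python
import numpy as np

P = np.column_stack([[0.5, -0.5,  0.5, -0.5],
                     [0.5,  0.5, -0.5, -0.5]])   # R x Q matrix of prototype inputs
T = np.column_stack([[1.0, -1.0],
                     [1.0,  1.0]])               # S x Q matrix of targets

W_batch = T @ P.T                                # W = T P^T, same as summing t_q p_q^T
print(W_batch)
```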

Performance Analysis (1/2). Suppose the input patterns are orthonormal. In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit length; a set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and of unit length, and an orthonormal set that forms a basis is called an orthonormal basis. In the analysis, p_k is the input presented to the trained network and p_q ranges over the training prototypes.
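The central step of the analysis, written out: applying the Hebb-rule weights to one of the prototypes p_k gives

```latex
a = W p_k = \left( \sum_{q=1}^{Q} t_q p_q^{T} \right) p_k
          = \sum_{q=1}^{Q} t_q \left( p_q^{T} p_k \right)
```

so when the prototypes are orthonormal (p_q^T p_k equals 1 for q = k and 0 otherwise), the output is exactly the associated target, a = t_k.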

Performance Analysis (2/2). If the prototype vectors are of unit length but not orthogonal, the cross terms no longer vanish and the output is a = t_k + sum over q != k of t_q (p_q^T p_k); the second sum is the error caused by the correlations between prototypes.

Example (orthonormal)

Example (not orthogonal)

Example (not orthogonal)
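A numerical illustration of the two cases; the prototype vectors below are invented for the sketch and are not the worked examples from these slides.

```python
import numpy as np

def hebb_weights(P, T):
    """Batch supervised Hebb rule: W = T P^T."""
    return T @ P.T

# Case 1: orthonormal prototypes -> recall is exact.
P_orth = np.column_stack([[0.5, -0.5,  0.5, -0.5],
                          [0.5,  0.5, -0.5, -0.5]])
T_orth = np.array([[1.0, -1.0]])           # one output; targets +1 and -1
W = hebb_weights(P_orth, T_orth)
print(W @ P_orth)                          # [[ 1. -1.]], the exact targets

# Case 2: unit-length but non-orthogonal prototypes -> cross-talk error.
P_non = np.column_stack([[1.0, 0.0, 0.0, 0.0],
                         [np.sqrt(0.5), np.sqrt(0.5), 0.0, 0.0]])
W = hebb_weights(P_non, T_orth)
print(W @ P_non)                           # roughly [[ 0.29 -0.29]], not the targets
```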

Pseudoinverse Rule (1/3). When the prototype inputs are not orthogonal, the Hebb rule no longer reproduces the targets exactly, so instead we look for the weight matrix that minimizes the performance index F(W) = sum over q of ||t_q - W p_q||^2.

Pseudoinverse Rule (2/3). From the linear associator we want a weight matrix satisfying WP = T. If the prototype input vectors p_q are orthonormal, the Hebb-rule weights already give W p_q = t_q for every q, so F(W) = 0. In general, however, we cannot simply take W = T P^-1: the prototype vectors are usually linearly independent, but their dimension R is normally larger than their number Q, so P is not a square matrix and its inverse does not exist. For an m x n matrix P with rank(P) = n (full column rank), a left inverse does exist, and that generalized inverse is what the rule uses.
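Reconstructed in standard notation (consistent with the definitions above), the rule minimizes the sum-of-squared-error performance index and has a closed form whenever the columns of P are linearly independent:

```latex
F(W) = \sum_{q=1}^{Q} \left\lVert t_q - W p_q \right\rVert^{2},
\qquad
W = T P^{+},
\qquad
P^{+} = \left( P^{T} P \right)^{-1} P^{T}
\quad \text{(columns of } P \text{ linearly independent)}
```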

Pseudoinverse Rule (3/3). P^+ is the Moore-Penrose pseudoinverse. The pseudoinverse of a real matrix P is the unique matrix P^+ that satisfies P P^+ P = P, P^+ P P^+ = P^+, (P^+ P)^T = P^+ P, and (P P^+)^T = P P^+.
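A minimal sketch of the rule using NumPy's np.linalg.pinv, which computes the Moore-Penrose pseudoinverse; the prototype vectors and targets are the same illustrative values used earlier, not the chapter's example.

```python
import numpy as np

P = np.column_stack([[1.0, 0.0, 0.0, 0.0],
                     [np.sqrt(0.5), np.sqrt(0.5), 0.0, 0.0]])   # non-orthogonal prototypes
T = np.array([[1.0, -1.0]])

W = T @ np.linalg.pinv(P)          # pseudoinverse rule: W = T P^+
print(W @ P)                       # [[ 1. -1.]], exact recall even without orthogonality
```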

Relationship to the Hebb Rule. If P is an M x N matrix with M >= N and full column rank, then P^T P is invertible and the pseudoinverse can be computed as P^+ = (P^T P)^-1 P^T. In particular, when the prototype vectors are orthonormal, P^T P = I and P^+ = P^T, so the pseudoinverse rule W = T P^+ reduces to the Hebb rule W = T P^T.

Relationship to the Hebb Rule

Example

Autoassociative Memory. The desired outputs are the inputs themselves (t_q = p_q), so the Hebb-rule weight matrix becomes W = p_1 p_1^T + p_2 p_2^T + ... + p_Q p_Q^T = P P^T, and the network is used to recall a stored prototype from an incomplete or corrupted version of it.

Tests: recalling the stored patterns from inputs with 50% of the pixels occluded, with 67% of the pixels occluded, and from noisy patterns (7 pixels changed).
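A small sketch of an autoassociative memory of this kind in NumPy, on two made-up bipolar patterns (assumed for illustration, not the character patterns from the slides); the patterns are stored with the Hebb rule and recalled through a symmetric hard-limit nonlinearity.

```python
import numpy as np

def hardlims(x):
    """Symmetric hard limit: +1 if x >= 0, else -1."""
    return np.where(x >= 0, 1.0, -1.0)

# Two illustrative bipolar patterns (assumed values for the sketch).
p1 = np.array([ 1, -1,  1, -1,  1, -1])
p2 = np.array([ 1,  1, -1, -1,  1,  1])

# Autoassociative Hebb rule: targets equal the inputs, W = sum_q p_q p_q^T.
W = np.outer(p1, p1) + np.outer(p2, p2)

# Occlude half of p1 (unknown pixels set to 0) and try to recall it.
occluded = p1.astype(float).copy()
occluded[3:] = 0.0
print(hardlims(W @ occluded))      # recovers p1: [ 1. -1.  1. -1.  1. -1.]
```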

Variations of Hebbian Learning:
Basic rule: W_new = W_old + t_q p_q^T
Learning rate: W_new = W_old + alpha t_q p_q^T, where the positive learning rate alpha limits the size of each update
Unsupervised: W_new = W_old + alpha a_q p_q^T (the actual output replaces the target)
Smoothing (decay): W_new = (1 - gamma) W_old + alpha t_q p_q^T; as gamma approaches 0 this reduces to the standard rule, and as gamma approaches 1 the old weights are forgotten and only the newest update remains
Delta rule: W_new = W_old + alpha (t_q - a_q) p_q^T
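As a concrete sketch of the last variation, here is an incremental delta-rule (LMS) loop in NumPy; the training pairs, the number of passes, and the learning rate are illustrative assumptions, not data from the slides.

```python
import numpy as np

# Illustrative, non-orthogonal training pairs (assumed values).
P = [np.array([1.0, 0.0, 0.0, 0.0]),
     np.array([np.sqrt(0.5), np.sqrt(0.5), 0.0, 0.0])]
T = [np.array([1.0]), np.array([-1.0])]

alpha = 0.5
W = np.zeros((1, 4))

# Delta rule: W_new = W_old + alpha (t_q - a_q) p_q^T, repeated over the training set.
for epoch in range(100):
    for p_q, t_q in zip(P, T):
        a_q = W @ p_q                        # current network output
        W += alpha * np.outer(t_q - a_q, p_q)

print(W @ P[0], W @ P[1])                    # converges toward the targets 1 and -1
```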

Solved Problems