Covariation Learning and Auto-Associative Memory


Covariation Learning and Auto-Associative Memory
Chapter 4, Tutorial on Neural Systems Modeling, Anastasio

Learning objectives: after mastering the material in this lesson, you should be able to:
--explain how the principle "neurons that fire together, wire together" can be used in neural circuits to store memories
--compute, by hand, the change in synaptic weights in a neuronal network due to different stimulus patterns
--use computer code to simulate memory formation and recall in a neuronal network

A simple model of habituation in Aplysia
Input x contacts y with synaptic weight v. (Tutorial on Neural Systems Modeling, Anastasio, p. 7)
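A minimal sketch of how habituation could work in this circuit, assuming the weight v shrinks by a constant factor on each stimulus presentation (the decay factor and initial weight below are illustrative assumptions, not values from the slide or the book):

% Habituation sketch: repeated stimulation weakens the x-to-y weight v.
a = 0.7;                         % per-trial weight decay factor (assumed)
v = 1;                           % initial synaptic weight (assumed)
x = 1;                           % the stimulus is present on every trial
for t = 1:10
    y = v * x;                   % response of y shrinks trial by trial
    fprintf('trial %2d: v = %.3f, y = %.3f\n', t, v, y);
    v = a * v;                   % habituation: the weight decays
end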

Donald Hebb: "Neurons that fire together, wire together."

Auto-associator neural network
A pattern of activity arises from sensory input (e.g., hearing a song), and connections between the activated neurons are strengthened.
When a partial pattern is later presented (e.g., the first few notes of the song), the entire pattern is recovered.
--note that the connections that do NOT form are just as important as the connections that DO form
--called an "auto-associative neural network" because it learns to associate different neurons with each other
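A minimal sketch of this pattern-completion behavior (my illustration: the Hopfield rule defined a few slides later, binary threshold units, and removal of self-connections are all choices made here, not taken from this slide):

% Auto-associative recall: store one pattern, cue with a fragment.
p   = [1 1 1 0 0 0];             % stored binary pattern ("the song")
W   = (2*p - 1)' * (2*p - 1);    % Hopfield-rule weights (see later slides)
W(logical(eye(6))) = 0;          % remove self-connections
cue = [1 1 0 0 0 0];             % partial pattern ("first few notes")
recalled = (cue * W) > 0;        % one synchronous update, threshold at 0
disp(recalled)                   % recovers the full pattern [1 1 1 0 0 0]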

Four different learning rules
--for the "post-synaptic" rule, you 'punish' (weaken) the connection if the post-synaptic neuron is active when the pre-synaptic neuron is not
--not biological, but useful

Summing over multiple patterns (Anastasio, p. 103)
With pattern index $k$, pre-synaptic neuron index $j$, post-synaptic neuron index $i$, and binary activities $x_i^{(k)} \in \{0,1\}$, the weights under the four rules are:

Hebb: $w_{ij} = \sum_k x_i^{(k)} x_j^{(k)}$
Post-synaptic: $w_{ij} = \sum_k x_i^{(k)} \, (2x_j^{(k)} - 1)$
Pre-synaptic: $w_{ij} = \sum_k (2x_i^{(k)} - 1) \, x_j^{(k)}$
Hopfield: $w_{ij} = \sum_k (2x_i^{(k)} - 1)(2x_j^{(k)} - 1)$
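With the patterns stored as rows of a matrix P, all four sums can be written as single matrix products. A sketch in MATLAB (the name HP matches the later slides; HB, POST, and PRE are my names for the other three matrices):

% Connectivity matrices for the four rules, with patterns as rows of P.
% Entry (i,j) is the weight from pre-synaptic neuron j to post-synaptic i.
P    = [1 1 0; 0 0 1];           % the two patterns from the next slide
HB   = P' * P;                   % Hebb:          sum of x_i * x_j
POST = P' * (2*P - 1);           % post-synaptic: sum of x_i * (2*x_j - 1)
PRE  = (2*P - 1)' * P;           % pre-synaptic:  sum of (2*x_i - 1) * x_j
HP   = (2*P - 1)' * (2*P - 1);   % Hopfield:      sum of (2*x_i-1)*(2*x_j-1)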

Implementing the Hopfield rule in a simple network
p1 = [1 1 0]
p2 = [0 0 1]
How do the weights change when these two patterns are presented?

Set P = [1 1 0; 0 0 1; 1 0 1]. What is the resulting matrix HP? Can you explain the change?
Each entry of HP measures the covariance between the activities of two neurons. The more they do the same thing, the more positive their connection becomes. The more they do opposite things, the more negative their connection becomes. If they do the same thing half the time and the opposite thing half the time, their connection strength will be zero.
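Working this out directly (the expected matrix in the comment follows from the Hopfield-rule formula above; verify it by hand or by running these two lines):

P  = [1 1 0; 0 0 1; 1 0 1];      % three patterns, one per row
HP = (2*P - 1)' * (2*P - 1)      % Hopfield-rule connectivity
% HP = [ 3  1 -1
%        1  3 -3
%       -1 -3  3 ]
% Neurons 2 and 3 take opposite values in all three patterns, so their
% connection is maximally negative (-3). Neurons 1 and 2 agree on two
% patterns and disagree on one, so their connection is 2 - 1 = +1.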

Run autoConnectivity.m for P = [1 1 1 1 0 0 0 0 0 0; 0 0 0 0 0 0 1 1 1 1].
Compare the results for the four different learning rules.
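If the book's autoConnectivity.m script is not at hand, a minimal stand-in (my sketch, not the book's code) that builds and displays all four matrices for this P is:

% Stand-in for autoConnectivity.m: compare the four rules on one P.
% Note that the two patterns activate disjoint groups of neurons.
P    = [1 1 1 1 0 0 0 0 0 0; 0 0 0 0 0 0 1 1 1 1];
HB   = P' * P;                   % Hebb
POST = P' * (2*P - 1);           % post-synaptic
PRE  = (2*P - 1)' * P;           % pre-synaptic
HP   = (2*P - 1)' * (2*P - 1);   % Hopfield
disp(HB); disp(POST); disp(PRE); disp(HP)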

Results for four different learning rules