Basic Models in Neuroscience (Oren Shriki, 2010): Associative Memory

Associative Memory in Neural Networks Original work by John Hopfield (1982). The model is based on a recurrent network with stable attractors.

The Basic Idea Memory patterns are stored as stable attractors of a recurrent network. Each memory pattern has a basin of attraction in the phase space of the network.

Information Storage The information is stored in the pattern of synaptic interactions.

Energy Function In some models, the dynamics are governed by an energy function. The dynamics lead to one of the local minima of the energy function, and these minima are the stored memories.

Important Properties of the Model Content-addressable memory (CAM): access to memory is based on the content, not on an address. Error correction: the network “corrects” neurons that are inconsistent with the memory pattern.

The Mathematical Model

Binary Networks We will use binary neurons: (−1) means ‘inactive’ and (+1) means ‘active’. The dynamics are given by: $s_i(t+1) = \mathrm{sign}\big(\sum_j J_{ij}\, s_j(t) + h_i^{\mathrm{ext}}\big)$, where the sum is the input from within the network and $h_i^{\mathrm{ext}}$ is the external input.
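
A minimal sketch of these dynamics in Python (not part of the original slides; the weight matrix `J` and external input `h_ext` are assumed to be given as NumPy arrays, and the function names are illustrative):

```python
import numpy as np

def update_neuron(s, J, h_ext, i):
    """Set s_i to the sign of its total input: recurrent field plus external input."""
    field = J[i] @ s + h_ext[i]
    s[i] = 1.0 if field >= 0 else -1.0

def run_async(s, J, h_ext, n_sweeps=10, seed=0):
    """Asynchronous dynamics: update the neurons one at a time, in random order."""
    rng = np.random.default_rng(seed)
    for _ in range(n_sweeps):
        for i in rng.permutation(len(s)):
            update_neuron(s, J, h_ext, i)
    return s
```

Asynchronous (one-neuron-at-a-time) updating is used here because it is the schedule for which the energy argument on the following slides holds; synchronous updating of all neurons can produce oscillations.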

Stability Condition for a Neuron The condition for a neuron to remain with the same activity is that its current activity and its current input have the same sign: $s_i h_i > 0$, where $h_i = \sum_j J_{ij} s_j + h_i^{\mathrm{ext}}$.

Energy Function If the external inputs are constant, the network may reach a stable state, but this is not guaranteed (the attractors may be limit cycles, and the network may even be chaotic). When the recurrent connections are symmetric and there is no self-coupling, we can write an energy function such that at each time step the energy decreases or stays the same. Under these conditions, the attractors of the network are stable fixed points, which are the local minima of the energy function.

Energy Function Mathematically, the conditions are: $J_{ij} = J_{ji}$ (symmetry) and $J_{ii} = 0$ (no self-coupling). The energy is given by: $E = -\frac{1}{2}\sum_{i \neq j} J_{ij} s_i s_j - \sum_i h_i^{\mathrm{ext}} s_i$, and one can prove that $\Delta E \le 0$ at every asynchronous update.
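
As an illustration (again a sketch, not from the slides; the random symmetric `J` is an arbitrary test case), one can verify numerically that the energy never increases along asynchronous updates:

```python
import numpy as np

def energy(s, J, h_ext):
    """E = -(1/2) * sum_{i != j} J_ij s_i s_j - sum_i h_i^ext s_i."""
    return -0.5 * s @ J @ s - h_ext @ s   # zero diagonal makes the sum skip i == j

rng = np.random.default_rng(0)
N = 50
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                # symmetric connections
np.fill_diagonal(J, 0.0)         # no self-coupling
h_ext = np.zeros(N)
s = rng.choice([-1.0, 1.0], size=N)

E = energy(s, J, h_ext)
for i in rng.permutation(N):
    s[i] = 1.0 if J[i] @ s + h_ext[i] >= 0 else -1.0   # one asynchronous update
    E_new = energy(s, J, h_ext)
    assert E_new <= E + 1e-9                           # energy never increases
    E = E_new
```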

Setting the Connections Our goal is to embed in the network stable steady states which will form the memory patterns. To ensure the existence of such states, we will choose symmetric connections, which guarantee the existence of an energy function.

Setting the Connections We will denote the $P$ memory patterns by $\xi^\mu = (\xi_1^\mu, \dots, \xi_N^\mu)$, $\mu = 1, \dots, P$. For instance, for a network with 4 neurons and 3 memory patterns, the patterns can be: $\xi^1 = (+1, -1, +1, -1)$, $\xi^2 = (+1, +1, -1, -1)$, $\xi^3 = (-1, +1, +1, +1)$.

Setting the Connections Hopfield proposed the following rule: $J_{ij} = \frac{1}{N} \sum_{\mu=1}^{P} \xi_i^\mu \xi_j^\mu$ (and $J_{ii} = 0$). The sum measures the correlation between neurons $i$ and $j$ across the memory patterns, and $1/N$ is a normalization factor.
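
A sketch putting the rule to work (the network size, number of patterns, and 10% noise level are arbitrary illustrative choices): store random patterns with the Hopfield rule, corrupt one of them, and let the network correct the errors, as claimed on the “Important Properties” slide:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
xi = rng.choice([-1.0, 1.0], size=(P, N))   # P random +-1 memory patterns

# Hopfield rule: J_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu, with no self-coupling.
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# Cue the network with pattern 0, but with 10% of its bits flipped.
s = xi[0].copy()
flipped = rng.choice(N, size=N // 10, replace=False)
s[flipped] *= -1

# Relax: a few asynchronous sweeps with no external input.
for _ in range(10):
    for i in rng.permutation(N):
        s[i] = 1.0 if J[i] @ s >= 0 else -1.0

print("overlap with the stored pattern:", s @ xi[0] / N)   # close to 1.0 on success
```

With $P/N = 0.05$, well below capacity, the printed overlap is typically exactly 1.0: the corrupted bits are restored.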

Choosing the Patterns to Store To enhance the capacity of the network, we will choose patterns that are not similar to one another. In the Hopfield model, (−1) and (+1) are chosen with equal probabilities. In addition, there are no correlations among the neurons within a pattern and no correlations among the patterns.

Memory Capacity Storing more and more patterns adds more constraints to the pattern of connections, so there is a limit on the number of stable patterns that can be stored. In practice, at some point a new pattern will not be stable even if we initialize the network to that pattern.

Memory Capacity If we demand that at every pattern all neurons will be stable, we obtain: $P_{\max} \approx \frac{N}{2 \ln N}$. (The “crosstalk” from the other patterns acts on each neuron as approximately Gaussian noise of variance $P/N$; requiring all $N \cdot P$ bits to satisfy the stability condition with high probability gives this bound.)

Memory Capacity What happens to the system if we store more patterns? Initially, the network will still function as an associative memory, although the local minima will differ from the memory states by a few bits. At some point (at roughly $P \approx 0.14 N$), the network will abruptly stop functioning as an associative memory.
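
A sketch of a numerical check of this breakdown (an illustrative experiment, not from the slides): store more and more random patterns and count how many remain exact fixed points of the dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
for P in (5, 10, 20, 40, 80):
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    J = (xi.T @ xi) / N
    np.fill_diagonal(J, 0.0)
    # Stability condition from above: a pattern is a fixed point
    # if s_i * h_i > 0 for every neuron i.
    n_stable = sum(np.all((J @ p) * p > 0) for p in xi)
    print(f"P={P:3d} (P/N={P/N:.2f}): {n_stable}/{P} patterns are exact fixed points")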

Adding “Temperature” It is also interesting to consider the case of stochastic dynamics. We add noise to the neuronal dynamics, in analogy with temperature in physical systems. Physiologically, the noise can arise from random fluctuations in synaptic release, delays in nerve conduction, fluctuations in ionic channels, and more.

Adding “Temperature” With noise, the deterministic sign rule is replaced by a stochastic one: a neuron takes the value +1 with a probability that is a sigmoidal function of its input, $P(s_i = +1) = \frac{1}{1 + e^{-2h_i/T}}$. (The plots showed $P(s)$ as a function of $h$ at different temperatures: the higher $T$, the flatter the sigmoid; as $T \to 0$ the deterministic dynamics are recovered.)

Adding “Temperature” Adding temperature has computational advantages: it drives the system out of spurious local minima, so that only the deep valleys in the energy landscape affect the dynamics. One approach is to start the system at a high temperature and then gradually cool it down and allow it to stabilize (simulated annealing). In general, increasing the temperature reduces the storage capacity but can prevent undesirable attractors.
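
A sketch of simulated annealing with these stochastic (Glauber-type) dynamics, with no external input; the temperature schedule and its parameters are arbitrary illustrative choices:

```python
import numpy as np

def glauber_sweep(s, J, T, rng):
    """One asynchronous sweep at temperature T: P(s_i = +1) = 1 / (1 + exp(-2 h_i / T))."""
    for i in rng.permutation(len(s)):
        h = J[i] @ s
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s[i] = 1.0 if rng.random() < p_plus else -1.0

def anneal(s, J, T_start=2.0, T_end=0.05, n_steps=100, seed=3):
    """Start hot and cool gradually, so shallow spurious minima are escaped early on."""
    rng = np.random.default_rng(seed)
    for T in np.geomspace(T_start, T_end, n_steps):
        glauber_sweep(s, J, T, rng)
    return s
```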

Associative Memory - Summary The Hopfield model is an example of connecting dynamical concepts (attractors and basins of attraction) with functional concepts (associative memory). The work pointed out the relation between neural networks and statistical physics and attracted many physicists to the field.

Associative Memory - Summary Over the years, models based on the same principles but more biologically plausible have been developed. Attractor networks are still useful for modelling a wide variety of phenomena.

References Hopfield, J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the USA, 79:2554-2558. Hopfield, J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the USA, 81:3088-3092.

Sources Amit, D. (1989). Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press. Hertz, J., Krogh, A., and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley.

Associative Memory of Sensory Objects – Theory and Experiments Misha Tsodyks, Dept of Neurobiology, Weizmann Institute, Rehovot, Israel. Joint work with Son Preminger and Dov Sagi.

Experiments - Terminology (Example face images, labeled “Friends” and “Non-Friends”.)

Experiment – Terminology (cont.) Basic Friend or Non-Friend task (FNF task): face images are flashed for 200 ms each; for each image the subject is asked whether it is a friend image (learned in advance) or not. 50% of the images are friends and 50% non-friends, in random order; each friend is shown the same number of times. No feedback is given. (Trial timeline: image for 200 ms, then an F/NF response.)

Experiment – Terminology (cont.) Morph sequence: a sequence of images that morphs gradually (frames 1 to 100, e.g. 20, 40, 60, 80) from a source face (a friend) to a target face (unfamiliar).

Two Pairs: Source and Target (Pair 1 and Pair 2).

Subject HL (blue-green spectrum), days 1-18: FNF-Grad on Pair 1. (Plot: number of ‘Friend’ responses vs. bin number.)