What to make of: distributed representations, summation of inputs, Hebbian plasticity? Competitive nets, pattern associators, autoassociators

"Hebb-Marr" associative nets McNaughton & Morris, 1987

Between 1968 and 1971, David Marr published the first series of theoretical papers that attempted to interpret the anatomical connections and known physiology of specific brain structures (cerebellum, hippocampus, neocortex) in terms of associative memory systems using Hebb's rule. He made numerous specific predictions about the physiological properties of these nets that have since been verified.

Competitive networks:
- "discover" structure in the input space
- may remove redundancy
- may orthogonalize
- may categorize
- may sparsify representations
- can separate out even linear combinations
- can easily be generalized to self-organizing topographic maps
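For concreteness, here is a minimal winner-take-all competitive learning sketch in Python/NumPy. The data, number of units, learning rate, and normalization scheme are illustrative assumptions rather than anything specified in the slides; the sketch shows only the core idea that each unit's weight vector drifts toward the inputs it wins, so the units come to categorize clusters in the input space.

```python
# Minimal competitive (winner-take-all) learning sketch; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(inputs, n_units=4, lr=0.1, epochs=20):
    """Each unit's weight vector drifts toward the inputs it wins, discovering clusters."""
    dim = inputs.shape[1]
    W = rng.random((n_units, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # keep weight vectors normalized
    for _ in range(epochs):
        for x in inputs:
            x = x / (np.linalg.norm(x) + 1e-12)
            winner = np.argmax(W @ x)                 # unit with the strongest activation wins
            W[winner] += lr * (x - W[winner])         # Hebbian move of the winner toward the input
            W[winner] /= np.linalg.norm(W[winner])    # renormalize to prevent runaway growth
    return W

# Inputs drawn from two clusters; after learning, different units specialize to each cluster.
cluster_a = rng.normal([1, 0, 0, 1], 0.1, size=(20, 4))
cluster_b = rng.normal([0, 1, 1, 0], 0.1, size=(20, 4))
W = competitive_learning(np.vstack([cluster_a, cluster_b]))
print(np.round(W, 2))
```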

Pattern associators:
- generalize to nearby input vectors
- degrade gracefully / are fault tolerant
- may extract prototypes
- may reduce fluctuations
- are fast (feed-forward)
- p (the number of stored associations) is proportional to C (the number of inputs per unit) and also grows with decreasing a (sparser patterns)
- after learning, the input firing pattern evokes the associated output firing pattern: r(X) → r(Y)

Associative retrieval: The inhibitory interneuron divides down the excitation in proportion to the total input to the net, so that only the most strongly excited cells reach threshold (i.e., integer division).

Pattern completion: The inhibitory interneuron divides down the excitation in proportion to the total input to the net, so that only the most strongly excited cells reach threshold (i.e., integer division).

Storage is formally equivalent to forming the so-called outer product of two binary vectors. Multiple memories are stored by combining outer product matrices using the logical OR operation. Retrieval is formally equivalent to taking the inner (or dot) product of the input vector and the storage matrix (i.e., multiply through the rows and sum the columns) and subjecting it to a non-linear normalization (i.e., integer division).
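A minimal sketch of this storage/retrieval scheme in Python/NumPy (the patterns and sizes below are made up for illustration): storage ORs together clipped (0/1) outer products of input/output pairs, and retrieval takes the inner product of the cue with each row of the matrix and applies integer division by the cue's total activity, standing in for the divisive inhibition described above. The last line also illustrates the pattern-completion behavior from the preceding slides.

```python
# Minimal binary "Hebb-Marr" (Willshaw-style) pattern associator; names and data are illustrative.
import numpy as np

def store(pairs, n_in, n_out):
    """OR together the clipped outer products of (input, output) binary vector pairs."""
    W = np.zeros((n_out, n_in), dtype=int)
    for x, y in pairs:
        W |= np.outer(y, x)            # Hebbian, clipped to 0/1
    return W

def recall(W, x):
    """Inner product with the cue, then divisive normalization by the cue's total activity."""
    s = W @ x                          # dendritic sums (multiply through the rows, sum)
    return (s // x.sum()).astype(int)  # integer division: only maximally driven units fire

# Two associations between 8-unit input and output patterns
x1 = np.array([1, 1, 0, 0, 1, 0, 0, 0]); y1 = np.array([0, 1, 0, 1, 0, 0, 1, 0])
x2 = np.array([0, 0, 1, 0, 0, 1, 1, 0]); y2 = np.array([1, 0, 0, 0, 1, 0, 0, 1])

W = store([(x1, y1), (x2, y2)], n_in=8, n_out=8)
print(recall(W, x1))                                   # recovers y1
print(recall(W, np.array([1, 1, 0, 0, 0, 0, 0, 0])))   # partial cue still recovers y1
```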

Properties of interneurons predicted by the simple Hebb-Marr net model:
- Inhibitory cells must be driven by the same excitatory afferents that activate the principal cells.
- The inhibitory mechanism must implement a division operation on the excitation that reaches the cell body from the dendrites.
- Inhibitory cells can be much fewer in number than principal cells, but they must have extensive connectivity.
- Inhibitory cells must respond to a synchronous input at lower threshold and at shorter latency than principal cells. Shorter latency is necessary to ensure that the appropriate division operation is already set up at the somata of the principal cells by the time the excitation from the same input arrives there via the principal cell dendrites.
- Whereas principal cells will be quite selective in their response characteristics, inhibitory neurons will not be particular about which afferents are active at a given time, only about how many are active. Thus they will convey little information in the principal cells' response domain.
- Excitatory synapses onto interneurons should not be plastic.
- In unfamiliar situations (i.e., when the current input elicits reduced output), extrinsic modulation of inhibitory neurons might lower the output threshold, successively probing for a complete pattern. This might also serve as a gate enabling the activation of the synaptic modification process.

Axons of basket inhibitory interneurons project widely: 1-2 mm in the transverse plane, 2-4 mm along the long axis (Freund & Buzsaki, 1996).

CA3 is dominated by recurrent collaterals

Autoassociation in a recurrent net

The input pattern is kept on for 2 time steps, so that the output at time t-1 is associated with the (identical) input at time t; i.e., each pattern becomes associated with itself.

Recurrent net after storage of 3 input vectors

Error correction (pattern completion)

"Reverberation": persistence of pattern after removal of input Persistence requires synaptic weights to be symmetrical: W ij = W ji, although for large nets symmetry does not have to be perfect.

Sequence Learning in Recurrent Nets: if the inputs change each cycle, the output at time t-1 is associated with the input at time t.

This leads to sequence storage.

The sequence may be of arbitrary length.

Presentation of any vector in the sequence leads to sequence completion if and only if all elements of the sequence are unique.
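A minimal sequence-storage sketch in Python/NumPy (illustrative assumptions, not the slides' implementation): the recurrent matrix heteroassociates each pattern with its successor, so the weights are no longer symmetric, and cueing with the first element replays the rest of the sequence.

```python
# Minimal sequence storage in a recurrent net; patterns and sizes are illustrative.
import numpy as np

N = 12
seq = [np.zeros(N, dtype=int) for _ in range(4)]
seq[0][[0, 4]] = 1
seq[1][[1, 5]] = 1
seq[2][[2, 6]] = 1
seq[3][[3, 7]] = 1

W = np.zeros((N, N), dtype=int)
for prev, nxt in zip(seq[:-1], seq[1:]):
    W |= np.outer(nxt, prev)            # associate the pattern at t-1 with the pattern at t (W is not symmetric)

def replay(W, cue, steps=3):
    """Present one element of the sequence and let the recurrent dynamics step through the rest."""
    v = cue.copy()
    states = [v]
    for _ in range(steps):
        v = (W @ v) // max(v.sum(), 1)  # divisive threshold, as before
        states.append(v)
    return states

for state in replay(W, seq[0]):
    print(state)                        # seq[0] -> seq[1] -> seq[2] -> seq[3]
```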

Autoassociators:
- complete a partial cue
- can continue a sequence
- generalize
- degrade gracefully / are fault tolerant
- extract prototypes / reduce fluctuations
- are also fast, but their feedback may sustain short-term memory
- p_c (the critical number of storable patterns) is again proportional to C and grows with decreasing a: p_c = k C / [a log(1/a)]
- after learning, a partial or noisy cue retrieves the stored pattern: r(Y') → r(Y)
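A hypothetical numeric illustration of the capacity formula; the values of k, C, and a below are arbitrary assumptions chosen only to show the scaling, not figures from the slides.

```python
# Illustrative scaling of p_c = k * C / (a * log(1/a)); k, C, a are arbitrary example values.
import math

def p_c(C, a, k=0.2):                 # k is an order-one constant; 0.2 is an arbitrary choice here
    """Critical number of storable patterns."""
    return k * C / (a * math.log(1.0 / a))

print(round(p_c(C=10_000, a=0.01)))   # sparse coding (a = 1%):  ~43,000 patterns
print(round(p_c(C=10_000, a=0.5)))    # dense coding (a = 50%):  ~5,800 patterns
```

With the same connectivity, sparser coding stores substantially more patterns, although (as the next slide notes) not more total information.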

Constraints on Storage Capacity
1) Connectivity. In real networks connectivity is much less than all-to-all. Capacity is proportional to the number of synapses per unit, C.
2) Sparsity of coding, a. Sparsity refers, roughly speaking, to the fraction of units active at any one time. The continuum ranges from extremely sparse (a = 1/N) to fully distributed (a = 1). Sparse activity allows more patterns to be stored, but not more information.
3) "Orthogonality" of coding. By definition, orthogonal vectors have an overlap of zero. In practice, the maximum number of stored patterns is achieved when the patterns are uncorrelated, not mutually orthogonal. Correlated patterns cause local saturation.
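A small simulation illustrating the third point (the setup is an assumption, not from the slides): using the same clipped autoassociative storage as above, partial-cue recall of uncorrelated sparse patterns produces almost no spurious activity, whereas patterns that all share a few units saturate the shared synapses, so a cue containing those units co-activates many stored patterns at once.

```python
# Illustrative comparison: correlated vs. uncorrelated sparse patterns in a clipped autoassociator.
import numpy as np

rng = np.random.default_rng(1)
N, k, n_pat = 100, 5, 20               # units, active units per pattern, patterns stored

def make_patterns(correlated):
    pats = []
    for _ in range(n_pat):
        p = np.zeros(N, dtype=int)
        if correlated:
            p[[0, 1, 2]] = 1           # every pattern shares these three units
            p[rng.choice(np.arange(3, N), size=k - 3, replace=False)] = 1
        else:
            p[rng.choice(N, size=k, replace=False)] = 1
        pats.append(p)
    return pats

def mean_spurious(pats, cue_size=3):
    W = np.zeros((N, N), dtype=int)
    for p in pats:
        W |= np.outer(p, p)            # clipped Hebbian autoassociation, as before
    spurious = []
    for p in pats:
        cue = np.zeros(N, dtype=int)
        cue[np.flatnonzero(p)[:cue_size]] = 1          # partial cue: first 3 active units
        out = (W @ cue) // cue.sum()                   # divisive-threshold recall
        spurious.append(int(((out == 1) & (p == 0)).sum()))
    return float(np.mean(spurious))

# For correlated patterns the first three active units are exactly the shared ones,
# so the cue is ambiguous and many stored patterns light up together.
print("uncorrelated: mean spurious units per recall =", mean_spurious(make_patterns(False)))
print("correlated:   mean spurious units per recall =", mean_spurious(make_patterns(True)))
```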

Sparser inputs to the associative memory system can be generated by more efficient coding of the input (redundancy reduction). Much of the processing of visual and other sensory information in 'primary' and 'secondary' sensory cortex is devoted to this process of "feature extraction", with various simple feature detectors developmentally programmed. Association cortex learns to develop high order feature detectors based on past experience.