Arealization and Memory in the Cortex


Arealization and Memory in the Cortex

Main perspectives:
Hierarchical
Modular
Content-based

The hierarchical perspective
The Elizabeth Gardner approach

In the Hopfield model, the weights are fixed by a Hebbian prescription, w_ij^HEBB ≈ Σ_μ r_i^μ r_j^μ, and the dynamics r_i = g[ Σ_j w_ij^HEBB r_j − Θ ]_+ are run over the neural activity. The Gardner approach instead does thermodynamics over the connection weights, rather than over neural activity (as in the Hopfield model): consider whether, among all their possible values, there are some that satisfy r_i^μ = g[ Σ_j w_ij r_j^μ − Θ ]_+ for every stored pattern μ.
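
A minimal numpy sketch (not from the talk; network size, gain, and threshold are arbitrary choices) of the Hopfield side of this contrast: build the Hebbian weights from stored patterns, then check how close each pattern is to a fixed point of the threshold-linear dynamics. The Gardner approach would instead treat the weights themselves as the unknowns and ask whether any assignment satisfies these fixed-point constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, gain, theta = 200, 5, 0.05, 0.0   # arbitrary illustrative values

patterns = rng.integers(0, 2, size=(M, N)).astype(float)  # binary patterns r^mu

# Hebbian prescription from the slide: w_ij ~ sum_mu r_i^mu r_j^mu
w = patterns.T @ patterns / N
np.fill_diagonal(w, 0.0)  # no self-connections

def update(r):
    """Threshold-linear dynamics r_i = g[ sum_j w_ij r_j - theta ]_+ ."""
    return gain * np.maximum(w @ r - theta, 0.0)

for mu in range(M):
    r = update(patterns[mu])
    cos = r @ patterns[mu] / (np.linalg.norm(r) * np.linalg.norm(patterns[mu]) + 1e-12)
    print(f"pattern {mu}: cosine overlap after one update = {cos:.3f}")
```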

The hierarchical perspective
The Elizabeth Gardner approach
Backpropagation and E-M algorithms

Forward step: network activation (Δr)
Backward step: error propagation (Δw)

Expectation: sampling the world
Maximization: of the match between the world and our internal model of the world
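
The two-phase structure on this slide can be made concrete with a minimal one-hidden-layer network; this is an illustrative numpy sketch with arbitrary sizes and learning rate, not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 3))              # inputs
t = rng.normal(size=(8, 1))              # targets
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=(4, 1))
lr = 0.1

for step in range(200):
    # Forward step: network activation (Δr)
    h = np.tanh(x @ W1)
    y = h @ W2
    # Backward step: error propagation, then weight changes (Δw);
    # gradients of the squared error, up to a constant factor
    err = y - t
    dW2 = h.T @ err / len(x)
    dW1 = x.T @ ((err @ W2.T) * (1.0 - h**2)) / len(x)
    W2 -= lr * dW2
    W1 -= lr * dW1
```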

The hierarchical perspective
The Elizabeth Gardner approach
Backpropagation and E-M algorithms
Generative models (paper by Hinton & Ghahramani)

The modular perspective
The Braitenberg model

N pyramidal cells, in √N compartments of √N cells each
Apical synapses (long-range, between compartments)
Basal synapses (local, within a compartment)
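
A sketch of what this architecture might look like as a connectivity matrix; the connection probabilities (dense within a compartment, sparse between) are illustrative assumptions, not Braitenberg's numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400                               # pyramidal cells
m = int(np.sqrt(N))                   # sqrt(N) compartments of sqrt(N) cells each
module = np.repeat(np.arange(m), m)   # compartment index of each cell

same_module = module[:, None] == module[None, :]
basal = same_module & (rng.random((N, N)) < 0.5)     # local ("basal") synapses
apical = ~same_module & (rng.random((N, N)) < 0.01)  # long-range ("apical") synapses
adjacency = basal | apical
np.fill_diagonal(adjacency, False)                   # no self-connections
```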

The modular perspective
The Braitenberg model
Modular associative memories

Memory glass & capacity issues (with D O'Kane)
Sparsity (with C Fulvi Mari)
Latching dynamics (with E Kropff)

The modular perspective
The Braitenberg model
Modular associative memories
Metricity in associative memory

Slides of Anastasia Anishchenko (Brown)

Autoassociative memory retrieval is often studied using neural network models in which connectivity does not follow any geometrical pattern, i.e. it is either all-to-all or, if sparse, randomly assigned. Storage capacity (the maximum number of patterns that can be retrieved) is proportional to the number k of connections per unit.

Networks whose connectivity follows a regular geometrical rule, instead, often display geometrical patterns of activity, i.e. they stabilize into activity-profile "bumps" of width proportional to k.

Recently, applications in various fields have used the fact that small-world networks, characterized by connectivity intermediate between regular and random, have different graph-theoretic properties than either regular or random networks.

Autoassociative memory retrieval?.. Geometrical activity patterns?..

GRAPH THEORY DEFINITIONS

A graph consists of a nonempty set of elements, called vertices, and a list of unordered pairs of elements, called edges.
Order n of the graph = number of vertices
Coordination number k = average number of edges connected to one vertex

CREATING A SMALL WORLD

Watts & Strogatz 1998:
Start with a 1D lattice ring
Rewire each edge at random with probability p
Rewiring of only a few edges is enough
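
A minimal hand-rolled sketch of this construction, for illustration (networkx ships the standard implementation as nx.watts_strogatz_graph; k is assumed even here):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring of n vertices, each joined to its k nearest neighbours,
    then each 'clockwise' lattice edge rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    # 1D lattice ring
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    # random rewiring
    for i in range(n):
        for j in range(1, k // 2 + 1):
            old = (i + j) % n
            if rng.random() < p and old in adj[i]:
                choices = [w for w in range(n) if w != i and w not in adj[i]]
                if choices:
                    new = rng.choice(choices)
                    adj[i].remove(old); adj[old].remove(i)
                    adj[i].add(new); adj[new].add(i)
    return adj
```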

PATH LENGTH and CLUSTERING

Characteristic path length L = length of the shortest path between two vertices, averaged over all pairs of vertices; informally, the average number of links in the shortest chain connecting two people. For a chain x0, x1, ..., xM, the path length L(x0, xM) = M.

Clustering coefficient C = average fraction of the vertices linked to a given vertex that are also linked to each other.
All-to-all connectivity: C = 1
Random connections: C = k / n << 1 for a large network
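
Both quantities are easy to measure numerically; a quick sketch, assuming networkx is available:

```python
import networkx as nx

G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)
L = nx.average_shortest_path_length(G)   # characteristic path length
C = nx.average_clustering(G)             # clustering coefficient
print(f"L = {L:.2f}, C = {C:.3f}")
```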

THREE DIFFERENT WORLDS

Regular:      C_regular = C(0) ≈ 3/4    LARGE    L_regular = L(0) ≈ n/2k          LARGE
Small world:  C(p) ≈ C(0)               LARGE    L(p) ≈ L(1)                      small
Random:       C_random = C(1) ≈ k/n     small    L_random = L(1) ≈ ln(n)/ln(k)    small

Figure from Watts & Strogatz 1998

THE BRAIN IS A SMALL WORLD

Characteristic path length L and clustering coefficient C for cortex: n = 10^11 neurons, k = 10^4 connections per neuron

L_regular ≈ n / 2k ≈ 10^7 synapses (too large)
L_random ≈ ln(n) / ln(k) ≈ 3 synapses (realistic!)
But: C_random ≈ k / n ≈ 10^-7 (too small)

Cortex has short L (as in a random graph) and large C (as in a regular lattice) ⇒ by definition, the brain is a small world!..
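
A quick check of these order-of-magnitude estimates:

```python
from math import log

n, k = 1e11, 1e4
print(n / (2 * k))       # L_regular ~ 5e6 synapses: too large
print(log(n) / log(k))   # L_random  ~ 2.75 synapses: realistic
print(k / n)             # C_random  ~ 1e-7: too small
```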

NETWORK MODEL

N = 1000 integrate-and-fire neurons on a ring: V0 = -70 mV, Vthr = -54 mV, τ_m = 5 ms, τ_1 = 30 ms, τ_2 = 4 ms, t_ref = 3 ms
Connections: excitatory, all of the same strength; probabilistic, Gaussian with a baseline; parameter of randomness p; average number of connections per neuron k = 50
Global inhibition: τ_inh = 4 ms
Store M = 5 patterns, drawn from a binary distribution (1 or 0), with sparseness a = 0.2; normalized modification of synaptic strength
Give a cue for one of the stored patterns: "+" current into the "1" cells, "-" current into the "0" cells; part of the cue may be randomly corrupted (partial cue)
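
As a rough illustration of the ingredients listed here (leaky integrate-and-fire units, Hebbian weights over sparse binary patterns, a partially corrupted cue), a heavily simplified sketch follows. The current amplitudes, weight normalization, and cue protocol are placeholder choices, not the actual simulation code behind these results; the Gaussian ring connectivity, synaptic time constants, refractoriness, and inhibition are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, a = 1000, 5, 0.2
V0, Vthr, tau_m, dt = -70.0, -54.0, 5.0, 0.1   # mV, mV, ms, ms

patterns = (rng.random((M, N)) < a).astype(float)   # M sparse binary patterns
J = (patterns - a).T @ (patterns - a) / N           # covariance-style Hebbian weights
np.fill_diagonal(J, 0.0)

cue = patterns[1].copy()
flip = rng.random(N) < 0.25                         # 25%-corrupted partial cue
cue[flip] = 1.0 - cue[flip]

V = np.full(N, V0)
spikes = np.zeros(N)
for t in np.arange(0.0, 100.0, dt):
    # "+" current into the "1" cells, "-" current into the "0" cells, for 20 ms
    I_cue = 5.0 * (2.0 * cue - 1.0) if t < 20.0 else 0.0
    I_rec = 50.0 * (J @ spikes)                     # recurrent drive (arbitrary gain)
    V += dt / tau_m * (V0 - V) + dt * (I_rec + I_cue)
    spikes = (V >= Vthr).astype(float)
    V[spikes > 0] = V0                              # reset after a spike
```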

CALCULATING L and C

Analytical estimation of the clustering coefficient: C = [1/√3 - k/n] (1-p)^3 + k/n
Comparison with the numerical results (n = 1000, k = 50): numerical L and C as functions of the parameter of randomness p, normalized and averaged over three network realizations.
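
A sketch comparing the analytical estimate with numerics. Note it substitutes the standard Watts-Strogatz graph (assuming networkx) for the slide's Gaussian connectivity, so only the qualitative trend, clustering decaying roughly as (1-p)^3 toward the random baseline k/n, should agree, not the exact values.

```python
import networkx as nx

n, k = 1000, 50
for p in (0.0, 0.1, 0.4, 1.0):
    G = nx.watts_strogatz_graph(n, k, p, seed=0)
    C_est = (1 / 3**0.5 - k / n) * (1 - p)**3 + k / n   # slide's analytical formula
    print(f"p={p}: C_numerical={nx.average_clustering(G):.3f}, C_estimate={C_est:.3f}")
```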

SPONTANEOUS ACTIVITY BUMPS

Networks with a regular geometrical rule guiding their connections (parameter of randomness p -> 0) often display geometrical patterns of activity, i.e. they stabilize into activity-profile "bumps".
The width of the "bump" is proportional to the number k of connections per neuron.
(Figure: number of spikes vs. neuron index.)

ASSOCIATIVE MEMORY

Overlaps O_i, i = 1..M, of network activity with the M = 5 patterns stored, before, during, and after a 75%-corrupted cue for pattern 2 was given. Samples taken for 50 msec every 50 msec during the simulation time.
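
A sketch of one standard way to define such overlaps for sparse binary patterns (the slide does not specify the exact normalization used):

```python
import numpy as np

def overlaps(rates, patterns, a):
    """O_mu = (1/N) * sum_i (pattern_i^mu - a) * rate_i, one value per stored pattern.

    rates: activity vector of length N; patterns: (M, N) binary array;
    a: sparseness, subtracted so that overlaps measure pattern-specific activity.
    """
    N = rates.shape[0]
    return (patterns - a) @ rates / N
```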

RETRIEVAL PERFORMANCE

Retrieval performance degrades gradually as the cue quality decreases.
Memories can be retrieved even when 90% of the cue input is corrupted.

RETRIEVAL vs. “BUMPINESS” - I

The ability to retrieve memories dies if p < 0.4: retrieval “prefers randomness”.
The ability to form activity bumps dies if p > 0.6: bumps “prefer regularity”.
Can they both be “alive enough” when 0.4 < p < 0.6 ?..

Effect of changing cue size

RETRIEVAL vs. “BUMPINESS” - II

Is there still a chance for a successful coexistence at the same p?..
Increasing the number k of connections per neuron helps the memories... and interferes with the bumps.
Changing k does not affect the qualitative picture, i.e. retrieval performance and bumpiness favor almost non-overlapping ranges of p (hence the “almost”).


CONCLUSIONS

The spontaneous activity bumps formed in the regular network can be observed up to p = 0.6.
Storing random binary patterns in the network does not affect the bumps, but retrieval performance appears to be very poor for small p.
As the randomness increases (p > 0.4), robust retrieval is observed even for partial cues.
Changing k does not affect the qualitative network behavior.
The abilities to form stable activity bumps and to retrieve associative memories are favored at distinct ranges of the parameter of randomness.
The “almost” question…

Special thanks to the EU Advanced Course in Computational Neuroscience - Obidos, Portugal 2002

New CONCLUSIONS

Those were from Anastasia’s simulations. Enter Yasser, with analytical calculations on a simpler (threshold-linear) model, supported by extensive simulations.

The content-based perspective

An example: Plaut’s model of semantic memory