Ch 8. Synaptic Plasticity 8.7 ~ 8.8
Adaptive Cooperative Systems, Martin Beckerman, 1997.
Presented by Rhee, Je-Keun

(C) 2009, SNU Biointelligence Lab, http://bi.snu.ac.kr/

Contents
8.7 Cortical Response Properties
  8.7.1 Circular Environment
  8.7.2 Classical Rearing
  8.7.3 Orientation Selectivity and Binocular Interactions
  8.7.4 Receptive Field Properties
8.8 Mean-Field Network
  8.8.1 Mean-Field Approximation
  8.8.2 The Cortical Network

Circular Environment
If the visual fields subtended by cells in the striate cortex are small, then patterned, or contoured, input will resemble noise-corrupted edges oriented in various directions. The environment is modeled as a set of K input patterns {d1, d2, …, dK}, characterized by the circular matrix of inner products of the vectors d1, d2, …, dK.
Assuming equal probabilities p(d = d1) = … = p(d = dK) = 1/K, there are 2^K fixed points, with selectivities 0, 1/K, 2/K, …, (K−1)/K, and K fixed points of maximum selectivity, (K−1)/K, with respect to d.
If the vectors d1, d2, …, dK are orthogonal, close to orthogonal, or even far from orthogonal, the K fixed points of maximum selectivity, m1, m2, …, mK, will be stable, and the system will converge to one of these fixed points regardless of its starting point in phase space.
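As a toy check of this selectivity ladder, the sketch below assumes K orthonormal patterns and the usual definition of selectivity as 1 − (mean response)/(maximum response); both the pattern set and the definition are illustrative assumptions, since the transcript omits them.

```python
import numpy as np

def selectivity(m, patterns):
    """Selectivity of weight vector m over a pattern set:
    1 - (mean response) / (maximum response)."""
    c = patterns @ m                 # responses c_k = m . d_k
    return 1.0 - c.mean() / c.max()

K = 4
patterns = np.eye(K)                 # K orthonormal input patterns d_1..d_K

# A weight vector responding equally to j of the K patterns has
# selectivity (K - j)/K, giving the ladder 0, 1/K, ..., (K-1)/K.
for j in range(1, K + 1):
    m = patterns[:j].sum(axis=0)
    print(j, selectivity(m, patterns))
```

With K = 4 this prints 0.75, 0.5, 0.25, 0.0 for j = 1..4; the j = 1 states are the K fixed points of maximum selectivity (K−1)/K.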

Classical Rearing
Classical rearing refers to a series of manipulations of a kitten's visual environment during the period when the modifiable geniculocortical and lateral corticocortical synapses rapidly adjust their transmission efficiencies in response to changes in the visual stimulus.
The classical rearing conditions include normal rearing (NR), monocular deprivation (MD), reverse suture (RS), strabismus (ST) or artificial squint, binocular deprivation (BD), and the restoration of normal vision following the period of deprived visual input (RE).

Orientation Selectivity and Binocular Interactions
The total input activity to the cortical cells includes contributions from signals produced by external stimuli, from spontaneous activity, and from non-LGN noise. Assuming that the spontaneous activity is a time-, afferent-, and eye-independent constant d^sp, the observed activity is

  d_j^a = d_j + d^sp,

where the index a denotes the observed activity. Consider two types of afferent stimulation: random noise and patterned activity. We indicate random noise by n_j and the patterned activity term arising from external input by d_j.

Orientation Selectivity and Binocular Interactions
The response of our cortical neurons to spontaneous (noise) and to patterned activity is

  c = m · (n + d^sp)            (noise rearing),
  c = m · (d + d^sp) + c_n(t)   (patterned activity),

where we have appended a term c_n(t) for non-LGN noise to the formula for patterned activity.
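The equation images are lost from the transcript; the sketch below assumes the standard linear BCM response, with weights applied to noise-plus-spontaneous input in one case and to patterned-plus-spontaneous input (plus additive non-LGN noise) in the other. The array sizes and noise scales are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_afferents = 10
m = rng.normal(size=n_afferents)        # LGN-cortical synaptic weights
d = rng.normal(size=n_afferents)        # patterned activity from external input
d_sp = 0.1                              # spontaneous activity, a constant
n_j = rng.normal(scale=0.05, size=n_afferents)  # random afferent noise

# Response during noise (spontaneous) rearing: weights applied to the
# noise plus the spontaneous constant.
c_spont = m @ (n_j + d_sp)

# Response to patterned activity, with the non-LGN noise term c_n(t)
# appended to the output.
c_n = rng.normal(scale=0.05)
c_patterned = m @ (d + d_sp) + c_n
```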

Receptive Field Properties
In Linsker's analysis, the receptive fields evolve from a symmetric center-surround form in the outermost layer to a form possessing a measure of orientation selectivity in the deeper layers. In the study by Kammen and Yuille, the two-dimensional receptive fields, representing a cell's spatial response function, develop orientation-selective properties as a consequence of random fluctuations. Several studies have emphasized the importance of natural patterned input in developing receptive field properties.
Law and Cooper studied the receptive field properties of BCM neurons, examining receptive field formation within the framework of BCM theory by including retinal processing of natural images. The sliding modification threshold was defined as a temporally weighted average of the squared output,

  θ_M(t) = (1/τ) ∫ c²(t′) e^{(t′−t)/τ} dt′,

where the scale constant τ controls the overall rate of movement of the threshold.
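Such a sliding threshold is the differential form of a leaky (exponentially weighted) average of c²; the time constant, step size, and output value below are illustrative, not the book's values.

```python
def update_threshold(theta, c, dt, tau):
    """One Euler step of the sliding modification threshold,
    d(theta_M)/dt = (c**2 - theta_M) / tau, i.e. theta_M is an
    exponentially weighted running average of the squared output c**2,
    with tau setting how fast the threshold moves."""
    return theta + dt * (c**2 - theta) / tau

# With a constant output c, theta relaxes toward c**2 at rate 1/tau.
theta, tau, dt, c = 0.0, 50.0, 1.0, 2.0
for _ in range(2000):
    theta = update_threshold(theta, c, dt, tau)
print(round(theta, 3))  # -> 4.0
```

A larger τ moves the threshold more slowly, which is the stabilizing mechanism of the BCM rule: sustained high output raises θ_M and cuts off further potentiation.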

Mean-Field Network
The BCM theory presented in Section 8.6 was formulated for a single cortical target cell. In this section we present the extension of the BCM formulation to a network of excitatory and inhibitory neurons. In the network model, LGN neurons project to cortical excitatory and inhibitory cells, which in turn interact with one another through cortico-cortical synapses. The cortico-cortical interactions in this approach are mediated by a mean field, in a manner similar to ferromagnetic spins interacting through a Weiss molecular field.

Mean-Field Approximation
The input activity from the LGN is represented by the vector d. Consider the simple situation in which the input from the two eyes is constant in time and the same for all cortical cells, so that d_i = d for all i; that is, we assume that each cell sees the same portion of the visual field. The LGN-cortical synaptic weights m become an array of weights, with each row a weight vector m_i for the ith cortical cell. The cortico-cortical synaptic weights are arranged as a matrix L, with matrix elements L_ij denoting the connection strength between the axons of cell j and their dendritic targets on cell i. The activity, or firing rate, of cell i is the sum of contributions from the geniculocortical and cortico-cortical pathways:

  c_i = m_i · d + Σ_j L_ij c_j.

Mean-Field Approximation
Introduce the mean activity <c> as the spatially averaged firing rate of all cortical neurons:

  <c> = (1/N) Σ_j c_j.

The mean-field approximation consists in replacing c_j in the cortico-cortical sum by this spatially averaged firing rate. To be consistent, the mean firing rate must satisfy the relation

  <c> = m̄ · d + L_0 <c>,  i.e.  <c> = (m̄ · d) / (1 − L_0),

where m̄ is the spatially averaged weight vector and L_0 = Σ_j L_ij. The firing rate for the ith cortical cell then becomes

  c_i = m_i · d + L_0 <c>.
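The replacement can be checked numerically. The sketch below uses uniform couplings L_ij = L0/N (an illustrative choice under which the approximation is exact); the sizes N, n_lgn and the value of L0 are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_lgn = 50, 8
d = rng.normal(size=n_lgn)              # LGN input vector, shared by all cells
M = rng.normal(size=(N, n_lgn))         # row i is the weight vector m_i

# Uniform, mildly inhibitory cortico-cortical couplings: every row sums to L0.
L0 = -0.4
L = np.full((N, N), L0 / N)

# Exact network activity: c = M d + L c  =>  (I - L) c = M d
c_exact = np.linalg.solve(np.eye(N) - L, M @ d)

# Mean-field approximation: replace c_j in the recurrent sum by <c>, with
# the self-consistent value <c> = (m_bar . d) / (1 - L0).
c_mean = (M.mean(axis=0) @ d) / (1.0 - L0)
c_mf = M @ d + L0 * c_mean

print(np.allclose(c_exact, c_mf))  # True: with uniform L the two agree
```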

Mean-Field Approximation
The difference i − j is a measure of neighborliness and is not a function of the absolute positions of the cortical neurons. If it is assumed that the cortico-cortical synaptic strengths are a function of i − j alone, then L_ij becomes a circulant matrix. In the mean-field approximation, the effect of the cortical network on the output activity is to shift the synaptic weight vector m by the mean field a:

  c_i = (m_i − a) · d,  with  a = −(L_0 / (1 − L_0)) m̄.

In this model the network is assumed to be mildly inhibitory: L_0 < 0 with |L_0| < 1, so that <c> remains positive. We therefore have a model involving excitatory geniculocortical synapses and weakly inhibitory cortico-cortical synapses.
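A small numerical check (with assumed sizes and an assumed value of L0) that shifting each weight vector by the mean field a reproduces the mean-field firing rate:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_lgn = 50, 8
d = rng.normal(size=n_lgn)
M = rng.normal(size=(N, n_lgn))         # row i is the weight vector m_i
L0 = -0.4                               # mildly inhibitory: L0 < 0, |L0| < 1

m_bar = M.mean(axis=0)                  # spatially averaged weight vector
a = -(L0 / (1.0 - L0)) * m_bar          # the mean-field shift vector

# Firing rate written directly with the mean field ...
c_mean = (m_bar @ d) / (1.0 - L0)
c_direct = M @ d + L0 * c_mean
# ... and as a shift of each weight vector: c_i = (m_i - a) . d
c_shifted = (M - a) @ d

print(np.allclose(c_direct, c_shifted))  # True
```

Note that with L0 < 0 the shift a points along +m̄, so inhibition effectively subtracts a common component from every cell's weight vector.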

The Cortical Network
In the mean-field network, all synapses modify at the same rate. The model can be generalized to include synapses that modify rapidly and others that modify slowly or not at all. In this approach we have modifiable (m) and nonmodifiable (z) synapses; the synaptic evolution equations then govern the modifiable synapses m, while the nonmodifiable synapses z are held fixed.
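As a hedged sketch of how such a network might be simulated, the code below uses the standard BCM modification function φ(c, θ) = c(c − θ) together with the sliding threshold; the split into modifiable m and fixed z, the learning rate, and the toy input environment are illustrative assumptions, since the book's exact evolution equations are not reproduced in this transcript.

```python
import numpy as np

def bcm_step(m, z, d, theta, eta, tau, dt):
    """One Euler step of a BCM update for the modifiable synapses m.
    The nonmodifiable synapses z contribute to the activity but are held
    fixed. phi(c, theta) = c*(c - theta) is the BCM modification function."""
    c = (m + z) @ d                             # activity from both synapse types
    phi = c * (c - theta)
    m = m + dt * eta * phi * d                  # only m evolves
    theta = theta + dt * (c**2 - theta) / tau   # sliding threshold
    return m, theta

rng = np.random.default_rng(3)
n = 8
m = 0.1 * rng.random(n)      # modifiable synapses
z = -0.2 * np.ones(n)        # nonmodifiable synapses, fixed
theta = 0.0
patterns = np.eye(n)         # toy input environment
for _ in range(5000):
    m, theta = bcm_step(m, z, patterns[rng.integers(n)], theta,
                        eta=0.01, tau=50.0, dt=1.0)
```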