
1 Maps in the Brain – Introduction

2 Cortical Maps
Cortical maps map the environment onto the brain. This includes sensory input as well as motor and mental activity. Example: the map of sensory and motor representations of the body (the homunculus). The more important a region, the bigger its representation in the map: a scaled "remapping" of real space.

3 Place Field Recordings
Terrain: a 40 x 40 cm arena. The single-cell firing activity is mapped to the (x, y) position within the terrain. The place cell fires only around a certain position (the red area in the figure); the cell thus acts like a "position detector".
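A minimal sketch of how such a firing-rate map could be computed from a recorded trajectory and the positions at which the cell spiked. The bin size, sampling interval, and function names are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def rate_map(traj_xy, spike_xy, arena_cm=40.0, bin_cm=2.0, dt=0.02):
    """Estimate a place field as spikes per second in each spatial bin.

    traj_xy  : (T, 2) array of tracked positions, sampled every dt seconds
    spike_xy : (S, 2) array of positions at which the cell fired
    """
    edges = np.arange(0.0, arena_cm + bin_cm, bin_cm)
    # Occupancy: how long the animal spent in each bin (seconds)
    occ, _, _ = np.histogram2d(traj_xy[:, 0], traj_xy[:, 1], bins=[edges, edges])
    occ *= dt
    # Spike count per bin
    spk, _, _ = np.histogram2d(spike_xy[:, 0], spike_xy[:, 1], bins=[edges, edges])
    # Firing rate = spikes / occupancy; unvisited bins stay NaN
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = np.where(occ > 0, spk / occ, np.nan)
    return rate  # a map with a single clear peak is the "position detector" behavior
```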

4 Hippocampus
The hippocampus is involved in learning and memory. All sensory input (visual, olfactory, auditory, taste, somatosensory, self-motion) reaches the hippocampus, so place cells in the hippocampus receive all sensory information. Information processing proceeds via the trisynaptic loop. How exactly place cells are used for navigation is unknown.

5 Mathematics of the model
The firing rate r_i of place cell i at time t is modeled as a Gaussian function:
r_i(t) = exp( −‖X(t) − W_i‖² / (2 σ_f²) )
where σ_f is the width of the Gaussian, X and W_i are vectors of length n, and ‖·‖ is the Euclidean distance. At every time step only one weight vector W is changed (winner-takes-all), i.e. only the neuron with the strongest response is updated.
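A small sketch of this model in code. The Gaussian rate follows the formula above; the winner-takes-all weight update (moving the winning cell's weight vector toward the current input with a learning rate eps) is an assumed, standard competitive-learning choice, since the slide only states that a single weight is changed per time step:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_inputs = 50, 2          # e.g. a 2-D position input
sigma_f = 5.0                      # width of the Gaussian tuning curve
eps = 0.05                         # learning rate (assumed; not given on the slide)
W = rng.uniform(0, 40, size=(n_cells, n_inputs))   # one weight vector per place cell

def rates(X, W, sigma_f):
    """r_i = exp(-||X - W_i||^2 / (2 sigma_f^2)), as on the slide."""
    d2 = np.sum((W - X) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma_f ** 2))

def wta_step(X, W):
    """Winner-takes-all: only the most active cell's weight vector is changed.
    Moving it toward X is an assumption; the slide only says one weight moves."""
    r = rates(X, W, sigma_f)
    win = np.argmax(r)
    W[win] += eps * (X - W[win])
    return win

for _ in range(1000):              # random positions in a 40 x 40 cm arena
    wta_step(rng.uniform(0, 40, size=n_inputs), W)
```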

6 Maps of More Abstract Spaces

7 Visual cortex

8

9 Cortical Mapping: retinal (x, y) coordinates to log Z coordinates
Real space → log Z space: concentric circles (exponentially spaced) map onto vertical lines (equally spaced); radial lines (equal angular spacing) map onto horizontal lines (equally spaced).
On "invariance": a major problem is how the brain can recognize objects in spite of size and rotation changes. Scaling and rotation are defined in polar coordinates, a = r exp(iφ):
Scaling (scaling constant k): A = k r exp(iφ) = k a
Rotation (rotation angle θ): A = exp(iθ) r exp(iφ) = r exp(i[φ + θ]) = a exp(iθ)
After the log Z transform we get:
Scaling: log(k a) = log(k) + log(a)
Rotation: log(a exp(iθ)) = iθ + log(a)
Thus we have obtained scale and rotation invariance!
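A quick numeric check of these identities (the values of r, φ, k, and θ are arbitrary illustrations): after taking the complex logarithm, scaling appears as a shift along the real axis and rotation as a shift along the imaginary axis.

```python
import numpy as np

# A retinal point in polar form: a = r * exp(i*phi)
r, phi = 2.0, 0.7
a = r * np.exp(1j * phi)

k, theta = 3.0, 0.4                      # scaling constant, rotation angle
scaled  = k * a                          # A = k r exp(i phi)
rotated = a * np.exp(1j * theta)         # A = r exp(i (phi + theta))

# After the log-Z mapping both operations become pure shifts:
print(np.log(scaled)  - np.log(a))       # ~ log(k) + 0j   (shift along the real axis)
print(np.log(rotated) - np.log(a))       # ~ 0 + i*theta   (shift along the imaginary axis)
```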

10 Receptive Fields
Cells in the visual cortex have receptive fields (RFs): such a cell responds when a stimulus is presented in a certain area of the retina, i.e. its RF. Simple cells respond to an illuminated bar in their RF, but they are sensitive to its orientation (see the classical results of Hubel and Wiesel, 1959). Bars of different length are presented within the RF of a simple cell for a certain time (black bar on top); the cell's response is sensitive to the orientation of the bar.
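A minimal sketch of this orientation selectivity. Modeling the simple cell's RF as a Gabor patch is my assumption (a common choice; the slides do not specify an RF model), and the linear response is simply the overlap between the RF and a bar stimulus of varying orientation:

```python
import numpy as np

def grid(size=21):
    half = size // 2
    return np.mgrid[-half:half + 1, -half:half + 1]   # y, x coordinates

def gabor_rf(theta, size=21, wavelength=8.0, sigma=4.0):
    """Gabor patch as an (assumed) model of an orientation-tuned simple-cell RF.
    The bright central stripe runs along the direction theta."""
    y, x = grid(size)
    p = -x * np.sin(theta) + y * np.cos(theta)   # distance perpendicular to the stripe
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * p / wavelength)

def bar(theta, size=21, width=2.0):
    """Bright bar through the RF centre, oriented along theta."""
    y, x = grid(size)
    p = -x * np.sin(theta) + y * np.cos(theta)
    return (np.abs(p) <= width).astype(float)

rf = gabor_rf(theta=0.0)                      # preferred orientation: 0 degrees
for deg in (0, 30, 60, 90):
    resp = np.sum(rf * bar(np.deg2rad(deg)))  # linear response: overlap of bar and RF
    print(f"bar at {deg:2d} deg -> response {resp:6.2f}")   # largest at 0, smallest at 90
```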

11 2D Map
Color map of the preferred orientation in the visual cortex of a cat. One-dimensional experiments like the one on the previous slide correspond to an electrode track, indicated by the black arrow. The small white arrows mark vortices, points where all orientations meet.

12 Ocular Dominance Columns
The signals from the left and the right eye remain separated in the LGN. From there they are projected to the primary visual cortex, where cells can either be dominated by one eye (ocular dominance L/R) or receive equal input from both eyes (binocular cells).

13 Ocular Dominance Columns
The signals from the left and the right eye remain separated in the LGN. From there they are projected to the primary visual cortex, where cells can either be dominated by one eye (ocular dominance L/R) or receive equal input from both eyes (binocular cells). White stripes indicate left and black stripes right ocular dominance (labeling with deoxyglucose).

14 Ice Cube Model
Columns with orthogonal directions for ocularity and orientation. Hubel and Wiesel, J. Comp. Neurol., 1972

15 Ice Cube Model
Columns with orthogonal directions for ocularity and orientation. Problem: the model cannot explain the reversals of the preferred orientation, and areas of smooth transition are overestimated (see data). Hubel and Wiesel, J. Comp. Neurol., 1972

16 Graphical Models
The preferred orientations are identical to the tangents of the circles/lines; both depicted models are equivalent. Vortex: all possible directions meet at one point, the vortex. Problem: in these models vortices are of order 1, i.e. all directions meet in one point, but 0° and 180° are indistinguishable. Braitenberg and Braitenberg, Biol. Cybern., 1979

17 Graphical Models
The preferred orientations are identical to the tangents of the circles/lines; both depicted models are equivalent. Vortex: all possible directions meet at one point, the vortex. Problem: in these models vortices are of order 1, i.e. all directions meet in one point, but 0° and 180° are indistinguishable. From the data: vortices of order 1/2. Braitenberg and Braitenberg, Biol. Cybern., 1979

18 Graphical Models cont'd
In this model all vortices are of order 1/2, or more precisely −1/2 (d-blobs) and +1/2 (l-blobs). Positive values mean that the preferred orientation changes in the same sense as a path around the vortex; negative values mean that it changes in the opposite sense. Götz, Biol. Cybern., 1988
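A small sketch of how such a vortex order could be measured from an orientation map: take the net change of the preferred orientation (which is only defined modulo 180°) along one closed loop around the candidate point and divide by 360°. The grid size, loop radius, and the synthetic test map are illustrative assumptions:

```python
import numpy as np

def vortex_order(pref_ori, cx, cy, radius=5, n_samples=360):
    """Net change of preferred orientation (values in [0, pi)) around (cx, cy),
    in units of full turns; vortices as discussed here come out near +1/2 or -1/2."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.round(cx + radius * np.cos(phis)).astype(int)
    ys = np.round(cy + radius * np.sin(phis)).astype(int)
    theta = pref_ori[ys, xs]
    steps = np.diff(np.append(theta, theta[0]))
    # orientation is only defined modulo pi, so wrap each step into [-pi/2, pi/2)
    steps = (steps + np.pi / 2) % np.pi - np.pi / 2
    return steps.sum() / (2.0 * np.pi)

# Synthetic +1/2 vortex: preferred orientation = half the polar angle around the centre
y, x = np.mgrid[0:64, 0:64] - 32.0
omap = (0.5 * np.arctan2(y, x)) % np.pi
print(vortex_order(omap, cx=32, cy=32))    # ~ +0.5 (use -0.5 * arctan2 for order -1/2)
```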

19 Developmental Models
These models start from a uniform orientation distribution and develop a map by means of a developmental algorithm; they are therefore related to learning and self-organization methods.

20 Model based on differences in On-Off responses
K. D. Miller, J. Neurosci., 1994

21 (Figure panels: difference correlation function; resulting receptive fields; resulting orientation map.)

22 Learning: Synaptic Modifications

23 Structure of a Neuron
Levels of organization: molecules – synapses – neurons – local nets – areas – systems – CNS.
At the dendrites the incoming signals (incoming currents) arrive. At the soma the currents are finally integrated. At the axon hillock action potentials are generated if the potential crosses the membrane threshold. The axon transmits (transports) the action potentials to distant sites. At the synapses the outgoing signals are transmitted onto the dendrites of the target neurons.

24 Chemical Synapse: Learning = Change of Synaptic Strength
(Figure labels: neurotransmitter, receptors.)

25 Overview of the different methods

26 Different Types/Classes of Learning
- Unsupervised learning (non-evaluative feedback): trial-and-error learning; no error signal, no influence from a teacher, evaluation of correlations only.
- Reinforcement learning (evaluative feedback): classical and instrumental conditioning, reward-based learning; "good/bad" error signals; a teacher defines only what is good and what is bad.
- Supervised learning (evaluative error-signal feedback): teaching, coaching, imitation learning, learning from examples, and more; rigorous error signals; direct influence from a teacher/teaching signal.

27 Basic Hebb rule: dω_i/dt = μ u_i v, with μ << 1. For learning: one input, one output; an unsupervised learning rule.
A supervised learning rule (delta rule): no input, no output, one error-function derivative, where the error function compares input examples with output examples.
A reinforcement learning rule (TD learning): one input, one output, one reward.

28 Self-organizing maps: unsupervised learning
An input space is mapped onto a map layer (see figure). Neighborhood relationships are usually preserved (+). The absolute structure of the map depends on the initial conditions and cannot be predicted (−).
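A minimal sketch of such a self-organizing map, assuming the standard Kohonen update (the winner and its Gaussian neighborhood are pulled toward each input); the map size, learning rate, and neighborhood width are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes = 20
W = rng.random((n_nodes, 2))                     # map nodes start at random positions

def som_step(x, W, eta=0.1, sigma=2.0):
    """One Kohonen update: move the winner and its map neighbors toward the input x."""
    winner = np.argmin(np.sum((W - x) ** 2, axis=1))
    dist = np.arange(len(W)) - winner            # distance along the 1-D map
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))    # neighborhood function
    W += eta * h[:, None] * (x - W)
    return W

for _ in range(5000):                            # inputs drawn uniformly from the unit square
    som_step(rng.random(2), W)
# Neighboring nodes end up coding neighboring inputs (topology preserved),
# but the overall layout of the map depends on the random initialization.
```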

29 Basic Hebb rule: dω_i/dt = μ u_i v, with μ << 1. For learning: one input, one output; an unsupervised learning rule.
A supervised learning rule (delta rule): no input, no output, one error-function derivative, where the error function compares input examples with output examples.
A reinforcement learning rule (TD learning): one input, one output, one reward.

30 Classical Conditioning (I. Pavlov)

31 Basic Hebb rule: dω_i/dt = μ u_i v, with μ << 1. For learning: one input, one output; an unsupervised learning rule.
A supervised learning rule (delta rule): no input, no output, one error-function derivative, where the error function compares input examples with output examples.
A reinforcement learning rule (TD learning): one input, one output, one reward.

32 Supervised Learning: Example OCR

33 The influence of the type of learning on the speed and autonomy of the learner
Correlation-based learning: no teacher. Reinforcement learning: indirect influence. Reinforcement learning: direct influence. Supervised learning: teacher. Programming. The figure ranks these methods along two axes: learning speed and autonomy of the learner.

34 Hebbian learning
(Figure: cells A and B; time axis t.)
"When an axon of cell A excites cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells so that A's efficiency ... is increased." Donald Hebb (1949)

35 Overview of the different methods. You are here!

36 Hebbian Learning
The basic Hebb rule correlates inputs with outputs:
dω_1/dt = μ v u_1, with μ << 1
(Figure: a single neuron with input u_1 and output v.)
Vector notation for the cell activity: v = w · u. This is a dot product, where w is the weight vector and u the input vector. Strictly we need to assume that weight changes are slow, otherwise this turns into a differential equation.

37
Single input: dω_1/dt = μ v u_1, with μ << 1.
Many inputs: dw/dt = μ v u, with μ << 1. As v is a single output, it is a scalar.
Averaging inputs: dw/dt = μ ⟨v u⟩, with μ << 1. We can just average over all input patterns and approximate the weight change by this. Remember that this assumes that weight changes are slow.
If we replace v with w · u we can write: dw/dt = μ Q · w, where Q = ⟨u uᵀ⟩ is the input correlation matrix.
Note: the Hebb rule yields an unstable (always growing) weight vector!
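A small numeric sketch of this instability, assuming a discretized version of the averaged rule dw/dt = μ Q w (the input statistics and learning rate are illustrative): the norm of the weight vector keeps growing, fastest along the leading eigenvector of Q.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D inputs u; Q = <u u^T> is the input correlation matrix
U = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=10000)
Q = U.T @ U / len(U)

mu = 0.01                      # small learning rate, mu << 1
w = rng.normal(size=2) * 0.01  # small initial weights

# Averaged Hebb rule, discretized in time: w <- w + mu * Q w
for t in range(1000):
    w = w + mu * Q @ w
    if t % 250 == 0:
        print(t, np.linalg.norm(w))
# |w| grows without bound along the leading eigenvector of Q:
# plain Hebbian learning is unstable unless some normalization is added.
```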

