Maps in the Brain – Introduction

Overview: A few words about Maps; Cortical Maps: Development and (Re-)Structuring; Auditory Maps; Visual Maps; Place Fields

What are Maps I Intuitive Definition: Maps are a (scaled) depiction of a certain area. Location (x,y) is directly mapped to a piece of paper. Additional information, such as topographical, geographical, or political features, can be added as colors or symbols.

What are Maps I Intuitive Definition: Maps are a (scaled) depiction of a certain area. Location (x,y) is directly mapped to a piece of paper. Additional information, such as topographical, geographical, or political features, can be added as colors or symbols. Important: A map is always a reduction in complexity. It is a REDUCED picture of reality that contains IMPORTANT aspects of it. What is important? That is in the eye of the beholder...

What are Maps II Mathematical Definition: Let W be a set, U a subset of W, and A a metric space (i.e. distances are defined). Then we call f a map if it is a one-to-one mapping from U to A: f: U -> A. Example: The surface of the world (W) is a 2D structure embedded in 3D space. It can be mapped to a 2D Euclidean space. In a mathematical sense a map is an equivalent representation of a complex structure (W) in a metric space (A), i.e. it is not a reduction – the entire information is preserved.

Cortical Maps Cortical maps map the environment onto the brain. This includes sensory input as well as motor and mental activity. Example: Map of the sensory and motor representations of the body (homunculus). The more important a region, the larger its map representation. Scaled “remapping” to real space.

Spatial Maps Place Cells

What are place cells? Place cells are the principal neurons found in a special area of the mammalian brain, the hippocampus. They fire strongly when an animal (e.g. a rat) is in a specific location of an environment. Place cells were first described in 1971 by O'Keefe and Dostrovsky during experiments with rats. View-sensitive cells that may be related to the place cells of rats have been found in monkeys (Araujo et al., 2001) and humans (Ekstrom et al., 2003).

The Hippocampus Human hippocampus

The Hippocampus Human hippocampus Rat hippocampus

Two old slides

Hippocampus The hippocampus is involved in learning and memory. All sensory input (visual, olfactory, auditory, taste, somatosensory, self-motion) reaches the hippocampus, so place cells in the hippocampus receive all sensory information. Information processing occurs via the trisynaptic loop. How exactly place cells are used for navigation is unknown.

Place cell recordings 1. An electrode array is inserted into the brain for simultaneous recording of several neurons. Wilson and McNaughton, 1993

Place cell recordings 1. 2. An electrode array is inserted into the brain for simultaneous recording of several neurons. The rat moves around in a known/unknown environment. Wilson and McNaughton, 1993

Place cell recordings 1. 2. 3. An electrode array is inserted into the brain for simultaneous recording of several neurons. The rat moves around in a known/unknown environment. Walking path and firing activity (cyan dots). Wilson and McNaughton, 1993

Place Field Recordings Terrain: 40x40 cm. Single-cell firing activity (figure axes: x, y). Map the firing activity to the position within the terrain. The place cell fires only around a certain position (red area); the cell acts like a “Position Detector”.

Place fields 40x40 cm. Array of cells ordered by position of activity peak (top left to bottom right). O’Keefe, 1999

Place fields 40x40 cm. Array of cells ordered by position of activity peak (top left to bottom right). Different shapes: Circular Islands. O’Keefe, 1999

Place fields 40x40 cm. Array of cells ordered by position of activity peak (top left to bottom right). Different shapes: Circular Islands, Twin Peaks. O’Keefe, 1999

Place fields 40x40 cm. Array of cells ordered by position of activity peak (top left to bottom right). Different shapes: Circular Islands, Twin Peaks, Elongated. O’Keefe, 1999

Place fields 40x40 cm. Array of cells ordered by position of activity peak (top left to bottom right). Different shapes: Circular Islands, Twin Peaks, Elongated, Not Simple (=> not published). O’Keefe, 1999

How do place cells develop? Allothetic (external) sensory input: Visual; Olfactory (around 1000 receptors in the rat, whereas humans have 350); Somatosensory (via whiskers); Auditory (rat range 200 Hz-90 kHz, human range 16 Hz-20 kHz). Idiothetic (internal) sensory input: Self-motion (path integration, mostly used when allothetic information is not available); not very reliable by itself since there is no feedback.

Importance of visual cues Experiment: Environment with a landmark (marked area) => record activity from cells 1 and 2. Observation: Place fields develop. Knierim, 1995

Importance of visual cues Experiment: Environment with a landmark (marked area) => record activity from cells 1 and 2. Observation: Place fields develop. Step 2: Rotate the landmark => place fields rotate accordingly. Conclusion: Visual cues are used for the formation of place fields. Knierim, 1995

Place Cell Remapping A brown plastic square box and a white wooden circular box were used to show the place cell remapping phenomenon: Cells 1-5 show increasing divergence between the square and the circle box; Cells 6-10 show differentiation from the beginning; some cells show a common representation or do not remap at all (not shown). Wills et al., 2005, Science

Importance of olfactory cues Fact: Rats use their urine to mark their environment. Experiment: Two conditions, one in light and one in darkness; remove self-induced olfactory cues and landmarks (S2-S4). Result: Without olfactory cues, stable place fields (control S1) change or, in darkness, even deteriorate. When olfactory cues are allowed again (control S5), place fields reemerge. (Figure panels: Dark/Cleaning, Light/Cleaning.) Save, 2000

Place cell model Use neuronal network to model formation of place cells

Place cell model Use neuronal network to model formation of place cells. Input layer for allothetic sensory input depending on position in simulated world: 4 visual cues (landmarks)

Place cell model Use neuronal network to model formation of place cells. Input layer for allothetic sensory input depending on position in simulated world: 4 visual cues (landmarks), 4 olfactory cues (environmental)

Place cell model Use neuronal network to model formation of place cells. Input layer for allothetic sensory input depending on position in simulated world: 4 visual cues (landmarks), 4 olfactory cues (environmental). Output layer, n x n simulated neurons, each of which is connected to all input neurons (fully connected feed-forward)

Place cell model Use neuronal network to model formation of place cells. Input layer for allothetic sensory input depending on position in simulated world: 4 visual cues (landmarks), 4 olfactory cues (environmental). Output layer, n x n simulated neurons, each of which is connected to all input neurons (fully connected feed-forward). After learning => formation of place fields

Place cell model Use neuronal network to model formation of place cells. Input layer for allothetic sensory input depending on position in simulated world: 4 visual cues (landmarks), 4 olfactory cues (environmental). Output layer, n x n simulated neurons, each of which is connected to all input neurons (fully connected feed-forward). After learning => formation of place fields. The know-how is in the change of the connection weights W ...

Mathematics of the model The firing rate r of place cell i at time t is modeled as a Gaussian function: r_i(t) = exp( -||X(t) - W_i||^2 / (2 σ_f^2) ), where σ_f is the width of the Gaussian function, X and W_i are vectors of length n, and ||·|| is the Euclidean distance.

Mathematics of the model The firing rate r of place cell i at time t is modeled as a Gaussian function: r_i(t) = exp( -||X(t) - W_i||^2 / (2 σ_f^2) ), where σ_f is the width of the Gaussian function, X and W_i are vectors of length n, and ||·|| is the Euclidean distance. At every time step only one weight vector W is changed (Winner-Takes-All), i.e. only the neuron with the strongest response is updated.
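
To make this concrete, here is a minimal Python sketch of the model. The Gaussian firing rate follows the slide; since the slide only names the winner-takes-all update without showing it, the sketch assumes the standard competitive rule dW_c = eta * (X - W_c) for the most active cell only. All parameter values and names are illustrative.

```python
# Minimal sketch of the place-cell model: Gaussian tuning plus an assumed
# winner-takes-all competitive update dW_c = eta * (X - W_c).
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 8          # e.g. 4 visual + 4 olfactory cues
n_cells = 10 * 10     # n x n output neurons
sigma_f = 0.3         # width of the Gaussian tuning function
eta = 0.05            # learning rate

W = rng.uniform(0, 1, size=(n_cells, n_inputs))   # fully connected weights

def firing_rates(x, W, sigma_f):
    """r_i = exp(-||x - W_i||^2 / (2 sigma_f^2))"""
    d2 = np.sum((W - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma_f ** 2))

def learn_step(x, W, eta, sigma_f):
    """Winner-takes-all: only the most active cell moves towards the input."""
    c = np.argmax(firing_rates(x, W, sigma_f))     # winner
    W[c] += eta * (x - W[c])
    return c

# Toy training loop: each input vector stands for the sensory cues sampled at
# one position of the simulated environment.
for _ in range(5000):
    x = rng.uniform(0, 1, size=n_inputs)
    learn_step(x, W, eta, sigma_f)
```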

Place fields Visual input => unique round place fields, because the distances to the walls are unique (no multiple peaks). Olfactory input => place fields not round, because the input is complex (gradients not well structured). Combined input is a mixture of both.

Place field remapping

Maps of More Abstract Spaces

Visual cortex

Visual Cortex Primary visual cortex, also called striate cortex or V1. Well-defined spatial representation of the retina (retinotopy).

Visual Cortex Primary visual cortex, also called striate cortex or V1. Well-defined spatial representation of the retina (retinotopy). The prestriate visual cortical area, or V2, gets strong feedforward connections from V1, but also projects strongly back to V1 (feedback). Extrastriate visual cortical areas V3 – V5: more complex representation of the visual stimulus, with feedback from other cortical areas (e.g. attention).

Receptive fields Cells in the visual cortex have receptive fields (RF). These cells react when a stimulus is presented in a certain area on the retina, i.e. the RF. Simple cells react to an illuminated bar in their RF, but they are sensitive to its orientation (see the classical results of Hubel and Wiesel, 1959). Bars of different length are presented within the RF of a simple cell for a certain time (black bar on top). The cell's response is sensitive to the orientation of the bar.

On-Off responses Experiment: A light bar is flashed within the RF of a simple cell in V1 that is recorded from. Observation: Depending on the position of the bar within the RF the cell responds strongly (ON response) or not at all (OFF response).

On-Off responses Experiment: A light bar is flashed within the RF of a simple cell in V1 that is recorded from. Observation: Depending on the position of the bar within the RF, the cell responds strongly (ON response) or not at all (OFF response). Explanation: The simple cell RF emerges from the overlap of several LGN cells with center-surround RFs.
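
As an illustration of this idea (not taken from the slides), the following Python sketch builds an orientation-selective field by summing three center-surround (difference-of-Gaussians) subfields aligned along the vertical axis; a bar aligned with that axis then drives the summed field much more strongly than an orthogonal bar. Subfield sizes and positions are arbitrary choices.

```python
# Orientation selectivity from aligned center-surround subfields (illustrative).
import numpy as np

def dog(xx, yy, cx, cy, s_center=1.0, s_surround=2.0):
    """Difference-of-Gaussians center-surround field centered at (cx, cy)."""
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2
    center = np.exp(-r2 / (2 * s_center ** 2)) / (2 * np.pi * s_center ** 2)
    surround = np.exp(-r2 / (2 * s_surround ** 2)) / (2 * np.pi * s_surround ** 2)
    return center - surround

x = np.linspace(-10, 10, 101)
xx, yy = np.meshgrid(x, x)

# Three LGN-like subfields stacked vertically -> an elongated, vertical ON region
rf = sum(dog(xx, yy, cx=0.0, cy=cy) for cy in (-4.0, 0.0, 4.0))

# A vertical bar covers the ON centers and drives the cell strongly; a
# horizontal bar covers ON and OFF regions alike and evokes a weaker response.
vertical_bar = (np.abs(xx) < 1.0).astype(float)
horizontal_bar = (np.abs(yy) < 1.0).astype(float)
print(np.sum(rf * vertical_bar), np.sum(rf * horizontal_bar))
```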

Columns Experiment: An electrode is moved through the visual cortex and the preferred orientation is recorded. Observation 1: The preferred orientation changes continuously between neighboring cells.

Columns Experiment: An electrode is moved through the visual cortex and the preferred orientation is recorded. Observation 1: The preferred orientation changes continuously between neighboring cells. Observation 2: There are discontinuities in the preferred orientation.

2D Map Colormap of the preferred orientation in the visual cortex of a cat. One-dimensional experiments like the one on the previous slide correspond to an electrode trace indicated by the black arrow. Small white arrows mark vortices where all orientations meet.

Ocular Dominance Columns The signals from the left and the right eye remain separated in the LGN. From there they are projected to the primary visual cortex where the cells can either be dominated by one eye (ocular dominance L/R) or have equal input (binocular cells).

Ocular Dominance Columns The signals from the left and the right eye remain separated in the LGN. From there they are projected to the primary visual cortex, where the cells can either be dominated by one eye (ocular dominance L/R) or have equal input (binocular cells). White stripes indicate left and black stripes right ocular dominance (labeling with deoxyglucose).

Ice Cube Model Columns with orthogonal directions for ocularity and orientation. Hubel and Wiesel, J. of Comp. Neurol., 1972

Ice Cube Model Columns with orthogonal directions for ocularity and orientation. Problem: Cannot explain the reversal of the preferred orientation changes and areas of smooth transitions are overestimated (see data). Hubel and Wiesel, J. of Comp. Neurol., 1972

Graphical Models Preferred orientations are identical to the tangents of the circles/lines. Both depicted models are equivalent. Vortex: All possible directions meet at one point, the vortex. Problem: In these models vortices are of order 1, i.e. all directions meet in one point, but 0° and 180° are indistinguishable. Braitenberg and Braitenberg, Biol.Cybern., 1979

Graphical Models Preferred orientations are identical to the tangents of the circles/lines. Both depicted models are equivalent. Vortex: All possible directions meet at one point, the vortex. Problem: In these models vortices are of order 1, i.e. all directions meet at one point, but 0° and 180° are indistinguishable. From the data: vortices of order 1/2.

Graphical Models cont'd In this model all vortices are of order 1/2, or more precisely -1/2 (d-blob) and +1/2 (l-blob). Positive values mean that the preferred orientation changes in the same direction as the path around the vortex, and negative values mean that it changes in the opposite direction. Götz, Biol. Cybern., 1988

Developmental Models Start from a uniform orientation distribution and develop a map by way of a developmental algorithm. They are therefore related to learning and self-organization methods.

Model based on differences in On-Off responses KD Miller, J Neurosci. 1994

(Figure: difference correlation function, resulting receptive fields, and resulting orientation map.)

Self Organizing Maps (SOM) for creating orientation maps Unsupervised Learning for creating neighborhood preserving self-organizing structures

Hebbian learning “When an axon of cell A excites cell B and repeatedly or persistently takes part in firing it, some growth processes or metabolic change takes place in one or both cells so that A‘s efficiency, as one of the cells firing B, is increased.” Donald Hebb (1949)

Learning rules
Basic Hebb rule: dw/dt = μ v u
Averaged rule: dw/dt = μ ⟨v u⟩
Covariance rule: dw/dt = μ (v - Θ) u
BCM rule: dw/dt = μ v u (v - Θ)
Subtractive normalization: dw/dt = μ (v u - v (n·u) n / N)
Multiplicative normalization (Oja’s rule): dw/dt = μ (v u - α v² w), α > 0
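
A minimal Python sketch of two of these rules as discrete updates, assuming simple Euler steps of size dt (u is the presynaptic, v the postsynaptic activity; the toy driving loop is illustrative only).

```python
# Hebb and Oja rules as discrete updates (sketch, Euler integration assumed).
import numpy as np

def hebb_step(w, u, v, mu=0.01, dt=1.0):
    """Basic Hebb rule: dw/dt = mu * v * u"""
    return w + dt * mu * v * u

def oja_step(w, u, v, mu=0.01, alpha=1.0, dt=1.0):
    """Oja's rule (multiplicative normalization): dw/dt = mu * (v*u - alpha*v**2*w)"""
    return w + dt * mu * (v * u - alpha * v ** 2 * w)

# Toy usage: a single linear neuron v = w.u driven by correlated inputs.
# Under Oja's rule the weight vector stays bounded and aligns with the
# principal component of the input correlations.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.0], [0.0, 0.0, 0.1]])
w = rng.normal(size=3)
for _ in range(2000):
    u = rng.multivariate_normal(np.zeros(3), cov)
    v = w @ u
    w = oja_step(w, u, v)
print(w, np.linalg.norm(w))   # norm stays close to 1 for alpha = 1
```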

Self Organizing Maps T. Kohonen, Dr. Eng., Emeritus Professor of the Academy of Finland. His research areas are the theory of self-organization, associative memories, neural networks, and pattern recognition, in which he has published over 300 research papers and four monographs. T. Kohonen (1995), Self-Organizing Maps.

SOM – What is it? The most popular ANN algorithm in the unsupervised learning category. Converts relationships between high-dimensional data items into simple geometric relationships on a low-dimensional display. Compresses information while preserving the most important topological and metric relationships of the primary data items. Applications: data visualization, feature extraction, pattern classification, adaptive control of robots, etc.

Vector quantization (VQ) A signal approximation method that forms an approximation to the probability density function p(x) of a stochastic variable x using a finite number of so-called codebook vectors (reference vectors) wi, i = 1, 2, …, k. Finding the closest reference vector wc: c = arg mini {||x - wi||}, where ||x - wi|| is the Euclidean norm. (Figure: reference vectors wi and their Voronoi sets.)

VQ: Optimization Average expected square of the quantization error: E = ∫ ||x - wc||² p(x) dx. For every x, with occurrence probability given via p(x), we calculate how well some wc approximates x and then integrate over all x to get the total error. Gradient-descent method: dwi/dt = α δci (x - wi), where δci is the Kronecker delta (=1 for c=i, 0 otherwise). Gradient descent is used to find those wc for which the error is minimal.
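
A short Python sketch of this online scheme: for every sample, only the closest codebook vector (the winner selected by the Kronecker delta) is moved a step α towards the sample. Codebook size, learning rate and the toy data are illustrative.

```python
# Online vector quantization (sketch): move only the winning codebook vector.
import numpy as np

rng = np.random.default_rng(2)

def vq_train(samples, k=16, alpha=0.05, epochs=10):
    # initialize the codebook with k randomly chosen samples
    w = samples[rng.choice(len(samples), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in samples:
            c = np.argmin(np.linalg.norm(w - x, axis=1))   # closest reference vector
            w[c] += alpha * (x - w[c])                      # dw_c = alpha * (x - w_c)
    return w

samples = rng.normal(size=(1000, 2))
codebook = vq_train(samples)
```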

SOM: Feed-forward network (figure: input vector X fully connected via weights W to the output map).

SOM: Components Inputs: x. Weights: w. X = (R,G,B) is a vector, of which we have six here. We use 16 codebook vectors (you can choose how many!).

SOM: Algorithm
1. Initialize map (weights)
2. Select a sample (input)
3. Determine neighbors
4. Change weights
5. Repeat from 2 for a finite number of steps

SOM: possible weight initialization methods Random initialization Using initial samples Ordering

SOM: determining neighbors Hexagonal grid Rectangular grid

SOM: Gaussian neighborhood function h_ci = exp( -||r_c - r_i||² / (2 σ_t²) )

SOM: Learning rule Gradient-descent method for VQ: dw_i/dt = α δ_ci (x - w_i), δ_ci – Kronecker delta (=1 for c=i, 0 otherwise). SOM learning rule: dw_i/dt = α_t h_ci (x - w_i), 0 < α_t < 1, h_ci – neighbourhood function.

SOM: Learning rate function Linear: α_t = α_0 (1 - t/T). Inverse-of-time: α_t = a/(t + b). Power series: α_t = α_0 (α_T/α_0)^(t/T). Here α_0 is the initial learning rate, α_T the final learning rate, and a, b are constants. (Figure: α_t versus time in steps.)
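
The pieces from the last few slides, best-matching unit, Gaussian neighborhood h_ci, weight update dw_i = α_t h_ci (x - w_i), and a linear learning-rate decay, combine into the following Python sketch of the full algorithm (steps 1-5 of the algorithm slide). Grid size, the shrinking neighborhood width and the toy data are illustrative choices, not values from the lecture.

```python
# Self-organizing map training loop (sketch).
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, grid=(10, 10), T=5000, a0=0.5, sigma0=3.0):
    gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    positions = np.stack([gx.ravel(), gy.ravel()], axis=1)      # map coordinates r_i
    W = rng.uniform(0, 1, size=(len(positions), data.shape[1]))  # 1. initialize weights
    for t in range(T):
        a_t = a0 * (1 - t / T)                        # linear learning-rate schedule
        sigma_t = sigma0 * (1 - t / T) + 0.5          # shrinking neighborhood width
        x = data[rng.integers(len(data))]             # 2. select a sample
        c = np.argmin(np.sum((W - x) ** 2, axis=1))   # 3. best-matching unit
        d2 = np.sum((positions - positions[c]) ** 2, axis=1)   # grid distances
        h = np.exp(-d2 / (2 * sigma_t ** 2))          #    Gaussian neighborhood h_ci
        W += a_t * h[:, None] * (x - W)               # 4. change weights
    return W, positions

data = rng.uniform(0, 1, size=(500, 3))               # e.g. random RGB colours
W, positions = train_som(data)
```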

SOM: Weight development ex.1 Eight Inputs 40x40 codebook vectors W Neighborhood relationships are usually preserved (+) Absolute structure depends on initial condition and cannot be predicted (-)

SOM: Weight development (figure: weight components w_i versus time in steps).

SOM: Weight development ex.2 100 Inputs 40x40 codebook vectors W

SOM: Examples of maps Good: all neighbors meet! Bad: some neighbors stay apart! Bad cases could be avoided by non-random initialization!

SOM: weight development ex.3

SOM: Calculating goodness of fit Average distance to neighboring cells: d_j = (1/r) Σ_{i=1..r} ||w_j - w_i||, where r is the number of neighboring cells and j = 1…N, with N the number of reference vectors w. The „amount of grey“ measures how well neighbors meet. The less grey the better!
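
A small Python sketch of this grey-level measure: for each map unit j it averages the weight-space distance to its grid neighbours, here assuming a rectangular grid and the W and positions arrays returned by the SOM sketch above.

```python
# Grey-level goodness-of-fit map for a trained SOM (sketch).
import numpy as np

def grey_level_map(W, positions):
    d = np.zeros(len(W))
    for j, pj in enumerate(positions):
        neigh = [i for i, pi in enumerate(positions)
                 if np.sum(np.abs(pi - pj)) == 1]          # adjacent grid units
        d[j] = np.mean([np.linalg.norm(W[j] - W[i]) for i in neigh])
    return d   # small values = neighbours meet well (light grey)
```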

SOM: Examples of grey-level maps Worse Better

SOM: Classification 1) Use the input vectors X. 2) Train the SOM (weights W). 3) Take an example. 4) Look in the SOM map which unit is closest to the example. 5) That unit gives you the cluster for classification. Some more examples.
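
Classification with a trained map then reduces to finding the closest codebook vector; a minimal sketch (the function name is illustrative):

```python
# Assign a new sample to the map unit with the closest codebook vector.
import numpy as np

def classify(x, W):
    return int(np.argmin(np.sum((W - x) ** 2, axis=1)))   # index of the winning unit
```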

Biological SOM model Biological SOM equation: dw_i/dt = α_t h_ci (x - w_i w_i^T x), 0 < α ≤ 1, h_ci – neighbourhood function. Riccati-type learning: dw/dt = α (x - β w (w^T x)). Oja’s rule: dw/dt = μ (v x - α v² w), α > 0.

Orientation columns in the visual cortex

Auditory Maps Auditory information (air pressure fluctuations) undergoes a complex cascade of transformations before it reaches the brain. How is the temporal structure of a signal represented in the brain? The cochlea breaks the signal down into frequency components.

Short Excursion: The Spectrum Every temporal signal can be characterized by its spectrum. The spectrum contains frequency components. Important mathematical tool: Fourier Transform! - Pure tone => only one frequency. (Figure: time signal A·sin(2π w t) and its spectrum, a single line of amplitude A at frequency w.)

Short Excursion: The Spectrum Every temporal signal can be characterized by its spectrum. The spectrum contains frequency components. Important mathematical tool: Fourier Transform! - Pure tone => only one frequency - Superposition of pure tones => all pure tone frequencies - Square wave => infinite discrete frequencies with decreasing amplitudes - Non-periodic signals => continuous spectrum
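
These cases are easy to reproduce numerically. The following Python sketch (sampling rate and frequencies are arbitrary choices) uses NumPy's FFT to show that a pure tone has a single spectral line while a square wave of the same period contains a series of harmonics with decreasing amplitude.

```python
# Spectra of a pure tone and a square wave via the FFT (illustrative sketch).
import numpy as np

fs = 8000                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)               # 1 s of signal
f0 = 100.0                                  # fundamental frequency (Hz)

pure_tone = np.sin(2 * np.pi * f0 * t)
square = np.sign(np.sin(2 * np.pi * f0 * t))

for name, sig in [("pure tone", pure_tone), ("square wave", square)]:
    spectrum = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    peaks = freqs[spectrum > 0.01]          # frequencies with noticeable amplitude
    print(name, "-> first components (Hz):", peaks[:6])
```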

Difference between pitch and frequency (Figure: amplitude-versus-frequency spectra.) Both signals have different spectra but the same period (black arrow). The higher frequency components in the lower spectrum are called harmonics.

Difference between pitch and frequency All four signals have different frequency spectra and therefore sound different. The first three signals have the same period and therefore the same perceived pitch. The pitch of the fourth signal is higher than the rest, but the sound is similar to the sound of the third signal, since the harmonics are similar. Note: The pitch of signals 3 and 4 corresponds to the dashed red line. This frequency is not contained in the spectrum.
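
This "missing fundamental" effect can be illustrated with a short Python sketch (my construction, not from the lecture): a signal built only from harmonics 3-6 of 100 Hz contains no energy at 100 Hz, yet a simple autocorrelation still recovers the 10 ms period that determines the perceived pitch.

```python
# Missing fundamental: the period (pitch) is present even without a 100 Hz line.
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f0 = 100.0
signal = sum(np.sin(2 * np.pi * n * f0 * t) for n in range(3, 7))  # harmonics 3..6 only

ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]   # autocorrelation
lag = np.argmax(ac[20:]) + 20                                      # skip the zero-lag peak
print("estimated period:", lag / fs, "s -> pitch ~", fs / lag, "Hz")
```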

Steps of signal transduction (simplified)
1. Cochlea: spectral and temporal information is transmitted via the auditory nerve to the cochlear nucleus.
2. Cochlear Nucleus: temporal structure of the signal (coincidence detectors – temporal differences between left and right ear < 10 μs).
3. Inferior Colliculus (IC): two types of cells – cells with narrow frequency bandwidth and cells with high temporal resolution => spatial map of spectral-temporal information.
4. Cortex: orthogonal map of frequency content (tonotopy) and pitch (periodotopy).

Neuronal Analysis of Periodicity The coincidence neuron (red) receives two inputs: 1. from stellate cells (orange, oscillator neurons) that are locked to the signal and 2. from fusiform cells (blue, integrator neurons) that respond with a delay. Both types are triggered by trigger neurons (on-cells, greenish). Remember the lecture on correlations, where we also used a delay line (there for azimuth estimation).

Neuronal Analysis of Periodicity The coincidence neuron (red) receives two inputs: 1. from stellate cells (orange, oscillator neurons) that are locked to the signal and 2. from fusiform cells (blue, integrator neurons) that respond with a delay. Both types are triggered by trigger neurons (on-cells, greenish). When the delay corresponds to the signal period, the delayed and non-delayed responses coincide (red bar). This network explains the pitch selectivity of neurons in the inferior colliculus. The neuron also responds to harmonics, if it is not inhibited by the VNLL (purple).
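
The delay-and-coincide idea can be sketched in a few lines of Python (a toy abstraction, not a model of the actual circuit): a coincidence unit tuned to delay d multiplies the rectified signal with a copy delayed by d and responds most strongly when d matches the signal period.

```python
# Toy delay-and-coincide periodicity detector.
import numpy as np

fs = 10000
t = np.arange(0, 0.5, 1 / fs)

def coincidence_response(signal, delay_s, fs):
    d = int(round(delay_s * fs))
    direct = np.clip(signal, 0, None)        # half-wave rectified "oscillator" drive
    delayed = np.roll(direct, d)             # "integrator" pathway delayed by d
    delayed[:d] = 0
    return np.mean(direct * delayed)         # multiplicative coincidence

signal = np.sin(2 * np.pi * 125 * t)         # period of 8 ms
for delay_ms in (4, 6, 8, 10):
    r = coincidence_response(signal, delay_ms / 1000, fs)
    print(f"delay {delay_ms} ms -> response {r:.3f}")   # strongest at 8 ms
```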

Spatial representation of timbre and pitch (figure blocks: Bandpass, Rectifier). The cochlea acts like a filter bank with parallel channels (blue). Hair cells rectify the signal.

Spatial representation of timbre and pitch (figure blocks: Bandpass, Rectifier, DCN Integrator). The cochlea acts like a filter bank with parallel channels (blue). Hair cells rectify the signal. The dorsal cochlear nucleus (green, DCN) transfers periodic signals with different delays.

Spatial representation of timbre and pitch (figure blocks: Bandpass, Rectifier, DCN Integrator, VCN Oscillator). The cochlea acts like a filter bank with parallel channels (blue). Hair cells rectify the signal. The dorsal cochlear nucleus (green, DCN) transfers periodic signals with different delays. The ventral cochlear nucleus (green, VCN) transfers periodic signals without delays.

Spatial representation of timbre and pitch (figure blocks: Bandpass, Rectifier, DCN Integrator, VCN Oscillator, IC Coincidence Detection). The cochlea acts like a filter bank with parallel channels (blue). Hair cells rectify the signal. The dorsal cochlear nucleus (green, DCN) transfers periodic signals with different delays. The ventral cochlear nucleus (green, VCN) transfers periodic signals without delays. Coincidence neurons in the inferior colliculus (yellow, IC) respond best whenever the delay in their DCN input is compensated by the signal period.

Layer model of orthogonal representation of pitch and frequency in the IC. Each of the 5 depicted layers (total ~30) is tuned to a narrow frequency band and a large periodicity range (values on the left are from cats). Each lamina has a frequency gradient for tonotopic fine structure, orthogonal to pitch. Response to a signal with three formants (three different frequency components). Orthogonal connections between layers (integration neuron) are assumed to integrate pitch information (red arrow).

Layer model of orthogonal representation of pitch and frequency in the IC. Each of the 5 depicted layers (total ~30) is tuned to a narrow frequency band and a large periodicity range (values on the left are from cats). Each lamina has a frequency gradient for tonotopic fine structure, orthogonal to pitch. Response to a signal with three formants (three different frequency components). Orthogonal connections between layers (integration neuron) are assumed to integrate pitch information (red arrow). Response of a brain slice to pure tones from 1 kHz to 8 kHz.

Layer model of orthogonal representation of pitch and frequency in the IC. Each of the 5 depicted layers (total ~30) is tuned to a narrow frequency band and a large periodicity range (values on the left are from cats). Each lamina has a frequency gradient for tonotopic fine structure, orthogonal to pitch. Response to a signal with three formants (three different frequency components). Orthogonal connections between layers (integration neuron) are assumed to integrate pitch information (red arrow). Response of a brain slice to pure tones from 1 kHz to 8 kHz. Response to 3 harmonic signals with pitches of (50, 400, 800) Hz and frequency ranges of (0.4-5, 2-5, 3.2-8) kHz (white rectangles). Vertical bands correspond to the logarithmic arrangement of the fundamental frequencies (figure axis: LOG(pitch)).

Orthogonality of frequency and pitch in humans MEG investigation in humans using stimuli with pitch ranging from 50 – 400 Hz (red and purple diamonds) and frequencies ranging from 200 – 1600 Hz (black points). Each point marks the position of maximum cortical activity in a 2ms window (5 points = 10ms), 100ms after the signal is switched on. Tonotopical and periodotopical axes can be defined which are orthogonal to each other. Position of the response along the tonotopic axis corresponds to the lower cut-off frequency of the broadband harmonic sounds (red 400Hz, purple 800Hz).

Orthogonality of frequency and pitch in humans MEG investigation in humans using stimuli with pitch ranging from 50 – 400 Hz (red and purple diamonds) and frequencies ranging from 200 – 1600 Hz (black points). Each point marks the position of maximum cortical activity in a 2ms window (5 points = 10ms), 100ms after the signal is switched on. Tonotopical and periodotopical axes can be defined which are orthogonal to each other. Position of the response along the tonotopic axis corresponds to the lower cut-off frequency of the broadband harmonic sounds (red 400Hz, purple 800Hz). Our ability to differentiate spoken and musical sounds is based on the fact that our hearing splits up signals into frequencies, pitches and harmonics in such a way that spectral and temporal information can be mapped to the cortex very reliably.