The causality here is pretty straightforward: Compression at the stapes footplate deforms the BM downward, rarefaction at the stapes footplate deforms the BM upward. (This is a closed system, so these pressure pulses are resolved by inward and outward movements at the round window, which is covered by the secondary tympanic membrane, i.e., the round window membrane.)

Durrant & Lovrinic (1995)

Traveling-wave patterns for f = 300 Hz, f = 1,000 Hz, and f = 3,000 Hz. Note that differences in frequency can be seen in two ways: (1) by differences in the place along the BM with the greatest motion amplitude (this will become the basis of place theory/the place principle/tonotopic theory); (2) by differences in the rate at which the BM vibrates (this will become the basis of synchrony theory).
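
As a rough numerical illustration of the place principle, a Greenwood-style map from position along the BM to best frequency is sketched below in Python. The constants are the commonly cited human values from Greenwood (1990); they are an assumption added here, not something taken from these slides.

def greenwood_frequency(x):
    """Approximate best frequency (Hz) at relative BM position x.

    x = 0 at the apex, x = 1 at the base (Greenwood, 1990, human constants).
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map near the apex, high frequencies near the base.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print("x = %.2f  ->  ~%.0f Hz" % (x, greenwood_frequency(x)))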

Suppose we were to mix two sinusoids together – one low frequency (e.g., 200 Hz) and the other of much higher frequency (e.g., 3000 Hz). What kind of traveling wave envelope pattern would we expect to see on the basilar membrane? (Panels show 200 Hz alone, 3000 Hz alone, and the 200 Hz + 3000 Hz mixture.)
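
A minimal sketch of that stimulus, assuming NumPy and an arbitrary sample rate and duration (neither is specified on the slide):

import numpy as np

fs = 44100                       # sample rate (Hz), chosen arbitrarily
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal

low = np.sin(2 * np.pi * 200 * t)     # 200 Hz component
high = np.sin(2 * np.pi * 3000 * t)   # 3000 Hz component
mixed = low + high                    # the stimulus delivered to the ear

# The traveling-wave envelope for `mixed` should show two regions of maximum
# displacement: one near the apex (200 Hz) and one near the base (3000 Hz).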

Figure 4-15 (traveling-wave envelopes for 3000 Hz and 200 Hz). The basilar membrane varies continuously in stiffness from base to apex. The greater stiffness of the membrane at the base makes the basal end respond better to high frequencies than low frequencies, while the opposite is true of the apical end. After von Békésy (1960), Rhode (1973), and Durrant & Lovrinic (1995).

Input: Complex acoustic signal with many frequencies mixed together. Output: Individual frequency components have been “unmixed” – low frequencies directed to one spatial location, high frequencies directed to a different spatial location, mid frequencies directed to the appropriate place in between.
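
One way to caricature this “unmixing” in code is a bank of bandpass filters whose center frequencies are laid out from low to high. The sketch below uses ordinary Butterworth filters from SciPy; it is a signal-processing analogy for the cochlear filter bank, not a physiological model, and the filter order and bandwidths are arbitrary choices.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_filterbank(center_freqs, fs, bw_octaves=0.5):
    """Return one bandpass filter (SOS form) per center frequency."""
    bank = []
    for cf in center_freqs:
        lo = cf * 2 ** (-bw_octaves / 2)
        hi = cf * 2 ** (bw_octaves / 2)
        bank.append(butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos"))
    return bank

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
mixed = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)

# A handful of "places" along the membrane, low CF (apex) to high CF (base).
center_freqs = [200, 500, 1000, 2000, 3000]
for cf, sos in zip(center_freqs, make_filterbank(center_freqs, fs)):
    out = sosfiltfilt(sos, mixed)
    print("CF %4d Hz: RMS output = %.3f" % (cf, np.sqrt(np.mean(out ** 2))))
# Only the 200 Hz and 3000 Hz channels respond strongly to this mixture.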

What should be happening at the 8th N for two signals that differ in frequency?

FRCs for 8th N fibers at two different locations along the basilar membrane. These two nerve fibers are said to have two different “characteristic frequencies” (CFs; sometimes “best frequencies” – BFs). One fiber responds best (has the highest firing rate) at 900 Hz, the other at 1700 Hz. This has nothing to do with the fiber itself – the differences in CF occur because the 900 Hz fiber is connected to a hair cell that is closer to the apex than the 1700 Hz fiber. The differences in CF are due to the basilar membrane, not the fiber. (Note: normalized firing rate means that the spontaneous discharge rate of the fiber has been subtracted out.)
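
A tiny sketch of the normalization mentioned in that note, with made-up firing rates (none of these numbers come from the figure):

# Hypothetical driven rates (spikes/s) for one fiber at several frequencies.
spontaneous_rate = 20.0
driven_rates = {300: 25.0, 900: 140.0, 1700: 60.0}   # frequency (Hz) -> rate

# Normalized rate = driven rate minus the spontaneous discharge rate.
normalized = {f: rate - spontaneous_rate for f, rate in driven_rates.items()}

# The CF is just the frequency with the largest normalized rate.
cf = max(normalized, key=normalized.get)
print(normalized, "CF =", cf, "Hz")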

Another way to measure cochlear frequency analysis: 8th N tuning curves. The threshold for the nerve fiber is measured for pure tones at different frequencies. Thresholds are lowest at the fiber’s CF. (2060: Skip) Figure 4-29. Neural tuning curves for auditory nerve fibers with three different characteristic frequencies (CFs). The threshold of the fiber is the lowest (i.e., sensitivity is greatest) at the characteristic frequency of the fiber. Data from Kiang and Moxon (1974).

What controls the CF of a fiber? Consider this thought experiment. Question: Suppose an afferent 8th N fiber with a CF of 125 Hz were disconnected from the apical end of the BM and reconnected at the basal end, and we then re-measured the CF of the fiber. What would happen? (a) The CF remains 125 Hz. Why should it matter where it’s located? If the CF is 125 Hz at the apical end, and it’s really the same fiber (which it is), the CF will remain 125 Hz. (b) The CF has to change to something much higher, because the hair cells that stimulate the nerve to fire are now connected to a section of the BM that is much stiffer than before (when it was connected at the apex) and will therefore respond best to high-frequency vibration. It’s not the fiber that controls CF, it’s the stiffness of the section of the BM that it’s attached to. (c) Not enough information. We need to know the sound level as well as the frequency.

Moral of the Story: Although we refer to the CF of the fiber, CF is not determined by the fiber at all. CF is controlled by the stiffness of that section of the BM that is closest to the IHC that stimulates the 8th N fiber. An 8th N fiber that innervates an IHC connected near the base of the cochlea will have a high CF. An 8th N fiber that innervates an IHC connected near the apex of the cochlea will have a low CF. That’s all there is to the story: CF is controlled by the BM, not the fiber. The fibers are more-or-less interchangeable.

These are the FRCs we saw earlier. They present the view of the cochlea as a bank of BP filters. For place theory to work, the filters must be (a) numerous (closely spaced), and (b) narrow. We only see two in this figure, but there are ~3,000 of them, so we’re ok on numerous/closely spaced. Are the filters narrow enough? They look ok here, but notice that the presentation level of the input sinusoids for the data shown here is 45 dB SPL, which is quite low. Will these BP filters remain narrow as the presentation level is increased? The great Jerzy Rose and his buddies at Wisconsin addressed this question as well.

The previous figure showed FR curves for two fibers with different CFs, but for a very soft signal – 45 dB SPL. What happens to the shapes of FR curves at higher sound levels? Shown here are FR curves – measured at the 8th N – for a single fiber (CF = 1700 Hz) at 8 different signal intensities from 25 to 95 dB SPL. This is the FRC we saw earlier – at 45 dB SPL. How do the shapes of these bandpass filters change with increases in sound level? Frequency response curves for a single auditory nerve fiber (CF = 1700 Hz) at eight different signal intensities. Note that the frequency response curves are relatively narrow at low presentation levels but become very broad at higher intensities. Data and figure from Rose et al. (1971).

This question about the BW of the filters comprising the bandpass filter bank is a big deal. It lies at the heart of place theory/rate-place theory/tonotopic theory. Are the filters narrow enough to explain the exquisite frequency discrimination shown by listeners? They do not appear to be, especially at higher sound levels. The problem is unresolved. The problem is especially relevant in audiology because the cochlear implant is based on place theory. CIs definitely work, but why? It’s not obvious. Rose et al. (1971)

Synchrony coding (simplified version): The period of the 8th N pulse train matches the period of the original signal; e.g., if the signal has a freq of 100 Hz, the most common interspike interval on the 8th N will be 10 ms. High frequency = short interspike interval, low frequency = long interspike interval. What could be more straightforward?

So: (a) if the signal period is 10 ms (100 Hz), the most common interspike interval will be 10 ms; (b) if the signal period is 5 ms (200 Hz), the most common interspike interval will be 5 ms; (c) if the signal period is 1 ms (1000 Hz), the most common interspike interval will be 1 ms; (d) if the signal period is 0.5 ms (2000 Hz), the most common interspike interval will be 0.5 ms. Right? What’s wrong with this? Can the interspike interval be 0.5 ms?
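
The arithmetic behind that question, in a few lines. The ~1 ms absolute refractory period used here is a typical textbook figure, not a value given on the slide:

ABSOLUTE_REFRACTORY_MS = 1.0   # assumed refractory period for an 8th N fiber

def interspike_interval_ms(freq_hz):
    """Interval required to fire once per cycle of a freq_hz tone."""
    return 1000.0 / freq_hz

for f in (100, 200, 1000, 2000):
    isi = interspike_interval_ms(f)
    ok = isi >= ABSOLUTE_REFRACTORY_MS
    print("%5d Hz -> period %4.1f ms; one spike per cycle possible? %s" % (f, isi, ok))
# At 2000 Hz the required 0.5 ms interval is shorter than the refractory
# period, so no single fiber can fire on every cycle. Hence volley theory.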

Volley Theory (an elaboration of synchrony coding) Basic idea is pretty straightforward: because of refractory intervals, no individual fiber can fire with a period equal to that of the input signal when the signal is, say, 2 kHz. Individual fibers catch a cycle, miss one or more, catch another one, miss a few, etc. The period of the input signal is not preserved on any individual fiber, but it is reflected in the most common interspike interval of a population of fibers.

Volley Theory Bus Analogy (from Peter Dallos, I think)

8th N voltage. Input signal: f = 217 Hz, T ≈ 4.6 ms. Aside: Note the differences in spike amplitude, which might seem contrary to the all-or-none principle. They don’t contradict all-or-none. The amplitude differences are real enough, but they have no information value. Post-stimulus time (PST) histograms for sinusoids at 599 and 217 Hz.

8th N voltage. Input signal: f = 217 Hz, T ≈ 4.6 ms. Aside (cont’d): All of the information in neural pulse trains is conveyed by (a) the rate of occurrence (spikes per second) and (b) the time of occurrence. Neural pulses are treated by the CNS like switches that are either on or off. Post-stimulus time (PST) histograms for sinusoids at 599 and 217 Hz.

Another View of the Volley Principle Both of these pictures of volley theory are oversimplified since they show the neuron always firing at the time when the signal amplitude reaches a peak. This is not the case since the entire hair cell-nerve fiber relationship is probabilistic rather than deterministic. The point of maximum amplitude is the time when the probability of a pulse is greatest (though not guaranteed). However, if the fiber is most likely to fire at the amplitude peak, the most common interspike interval (of a population of fibers) will equal the period of the input signal.

Best Picture Yet of the Volley Principle Note that this figure accurately shows that the 8th N pulses do not all occur at the waveform peak; they are just more likely there. Because of this, the most frequent interspike interval will correspond to the period of the input signal.

Again with the Volley Principle This figure shows the output of just 4 nerve fibers. There are ~30,000 8th N fibers in humans. So, even though there is a random element to 8th N firing, you can do a lot of ensemble averaging to find the most common interspike interval. The behavior of any individual fiber cannot be predicted, but the behavior of a large population of fibers IS predictable.
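
A toy simulation of this ensemble-averaging idea, under deliberately simplified assumptions: each fiber can fire only at a waveform peak, does so with a fixed probability, and never fires within an assumed 1 ms refractory period. All parameters are invented for illustration; nothing here comes from the slides.

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

freq_hz = 2000.0
period_ms = 1000.0 / freq_hz        # 0.5 ms between waveform peaks
refractory_ms = 1.0                 # assumed absolute refractory period
n_fibers, n_cycles, p_fire = 50, 2000, 0.3

pooled_intervals = []
for _ in range(n_fibers):
    last_spike_ms = -np.inf
    spike_times = []
    for cycle in range(n_cycles):
        peak_ms = cycle * period_ms
        # A fiber may fire at a peak, but only probabilistically and only
        # if it is out of its refractory period.
        if peak_ms - last_spike_ms >= refractory_ms and rng.random() < p_fire:
            spike_times.append(peak_ms)
            last_spike_ms = peak_ms
    pooled_intervals.extend(np.diff(spike_times))

# Interspike intervals pooled over the whole population: every interval is an
# integer multiple of the 0.5 ms stimulus period, so the period is recoverable
# from the ensemble even though no single fiber fires on every cycle.
hist = Counter(round(float(iv), 1) for iv in pooled_intervals)
for interval_ms, count in sorted(hist.items())[:6]:
    print("%4.1f ms : %d" % (interval_ms, count))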

Another Thought Experiment Cochlea is unrolled and the auditory nerve is exposed. Patient is awake and able to tell us what he hears. We have a stimulating electrode and can deliver any pattern of electrical current at any rate to any of these nerve fibers – alone or in combination. According to place theory, how would we artificially induce a sensation of high pitch; i.e., fool the listener into thinking he’d heard a high frequency sound? How about low pitch? According to synchrony theory, how would we artificially induce a sensation of high pitch; i.e., fool the listener into thinking he’d heard a high frequency sound? cochlear branch of 8th N: ~30,000 fibers to brainstem

Place Theory & Synchrony Theory So, we have a place or tonotopic code: frequency is coded by the place along the BM where 8th N electrical activity is greatest. We also have a synchrony code (with the volley principle tacked on to make it work even above the limits imposed by refractory intervals) based on the timing of 8th N pulses: frequency is coded by the interspike interval of a population of fibers (short interspike interval = high freq; long interspike interval = low freq). Is one of these theories right and the other one wrong? Probably not. Commonly held view: ~15 to ~400 Hz: mainly synchrony; ~400 to ~4000-5000 Hz: both place & synchrony; above ~4000-5000 Hz: only place.
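
The “commonly held view” written out as a trivial lookup; the boundary values are the approximate ones given on the slide:

def frequency_code(freq_hz):
    """Which code(s) plausibly carry frequency information, per the slide."""
    if freq_hz < 400:
        return "mainly synchrony"
    elif freq_hz <= 5000:
        return "both place and synchrony"
    else:
        return "place only"

for f in (100, 1000, 8000):
    print(f, "Hz ->", frequency_code(f))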

Pulse Coding on the Auditory Nerve Rose et al. (1971)

Pulse Coding on the Auditory Nerve Rose et al. (1971) FIG. 11. Period histograms for a fiber responding to a complex periodic sound when the sound pressure level of both primaries is successively raised in 10-dB steps. Tones are locked in a ratio of 3:4. Period of complex sound = 4,998 μsec. Each bin = 100 μsec. Phase angle and amplitude of each primary used in construction of the fitted waveform are specified for each graph. Ra = amplitude ratio. Each histogram based on two stimulus presentations. Stimulus duration: 10 sec.
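
For readers unfamiliar with the term, a period histogram folds each spike time modulo the stimulus period and counts spikes per phase bin. A minimal sketch with synthetic, phase-locked spike times (not Rose et al.’s data):

import numpy as np

def period_histogram(spike_times_ms, period_ms, n_bins=50):
    """Count spikes per phase bin after folding times modulo the period."""
    phases = np.mod(spike_times_ms, period_ms)
    counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, period_ms))
    return counts, edges

# Synthetic spike train locked near one phase of a 4,998-microsecond period
# (the complex-tone period quoted in the figure caption above).
rng = np.random.default_rng(1)
period = 4.998                                         # ms
cycle_starts = rng.choice(np.arange(10000) * period, size=3000)
spikes = np.sort(cycle_starts + rng.normal(1.0, 0.3, size=3000))
counts, _ = period_histogram(spikes, period)
print("Spikes cluster in phase bin", int(counts.argmax()), "of 50")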

YouTube video http://www.youtube.com/watch?v=PeTriGTENoc (Clicking on the link should work. If not: (a) copy/paste the link into your browser, or (b) search YouTube for “auditory transduction”.)