
Analysis of spectro-temporal receptive fields in an auditory neural network
Madhav Nandipati

Purpose

The aim of this project was to investigate receptive fields in a neural network, comparing a computational model to actual cortical-level auditory processing. The receptive fields are also analyzed against traditional methods of characterizing neural models, such as tuning curves.

Background

Layout of the ear

The ear is the earliest stage of auditory processing. It is divided into three main areas: the outer ear, the middle ear, and the inner ear. Transduction, the process of converting mechanical signals into electrical potentials, takes place in the inner ear. Vibrations in the inner ear selectively cause hair cells along the basilar membrane of the cochlea to move, and the motion of the hair cells generates electrical potentials that travel along the auditory nerve to be processed by the brain. Hair cells are theorized to be frequency-selective: specific pitches excite specific areas of the basilar membrane. This layout of the ear makes it convenient to represent sound as a function of frequency and time rather than as a function of pressure and time.

Spectro-temporal receptive fields (STRFs)

STRFs represent the linear response properties of primary auditory processing neurons, depicting each neuron's impulse-response characteristics in the frequency-time domain. STRFs are generated by collecting a neuron's responses to different moving ripple stimuli. Because these stimuli approximate the components of complex sounds, the resulting STRFs characterize the neuron's response to spectro-temporally rich sound stimuli. Since STRFs describe neuronal responses in both the spectral and temporal dimensions, they are hypothesized to be more useful than traditional methods of describing neurons, such as tuning curves.

Tuning curves

Tuning curves have been used extensively in both biological and computational applications because they allow researchers to quantitatively analyze the frequencies at which a specific auditory neuron responds best.
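As a concrete sketch of this idea, the toy example below sweeps pure tones across a hypothetical frequency-tuned model neuron and reads off its best frequency. This is not the project's code: the Gaussian tuning function and every parameter are invented for illustration.

```python
import numpy as np

def neuron_response(freq_hz, best_freq=4000.0, bandwidth_oct=0.5):
    """Toy frequency-tuned neuron: Gaussian tuning on a log-frequency axis."""
    octaves = np.log2(freq_hz / best_freq)   # distance from BF in octaves
    return np.exp(-0.5 * (octaves / bandwidth_oct) ** 2)

# Sweep pure tones across the frequency domain and record the peak
# response to each tone, giving an intensity-vs-frequency tuning curve.
tone_freqs = np.logspace(np.log10(500.0), np.log10(16000.0), 100)
tuning_curve = neuron_response(tone_freqs)

# The peak of the curve denotes the best frequency (BF) of the neuron.
bf = tone_freqs[np.argmax(tuning_curve)]
```

The curve falls off on both sides of the peak, matching the description below: the neuron responds most strongly at its BF and progressively less to tones farther away.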
To generate these curves, the neuron's responses to pure tones varied across the frequency domain are collected. The maximum response to each tone is plotted in an intensity vs. frequency plot, and the peak of the plotted curve denotes the best frequency (BF) of the artificial neuron. Neurons respond with the greatest intensity to tones that match their BF and with decreasing intensity to tones farther from their BF.

Oja's rule

Unsupervised learning paradigms allow neural network models to dynamically modify their own weighted connections between nodes, analogous to the changes in synaptic plasticity between biological neurons. Oja's rule, one type of unsupervised learning algorithm, can be written as:

    Δw = η · y · (x − y · w)

where Δw represents the weight change between two units, w is the current weight, η is the learning rate, and x and y are the activation values of the pre-synaptic and post-synaptic neurons, respectively.

Methods

Neural network

Neural networks mimic biological processing by joining layers of artificial neurons in a meaningful way. The neural network employed in this project is a two-layer model that responds to spectrograms, i.e., frequency vs. time distributions of sound. The model uses time-delayed inputs to approximate temporal processing: the first layer contains the information from three consecutive timesteps, and this auditory information is relayed to the second layer through a network of weights. The value of a second-layer artificial neuron is:

    y = W · [x(t), x(t − 12 ms), x(t − 24 ms)]

where W is the weight matrix, x(t) is the input from the current timestep, x(t − 12 ms) is the timestep input that appeared 12 ms ago, and x(t − 24 ms) is the timestep input that appeared 24 ms ago. Animals can hear only a very limited range of frequencies; in this project, the neural network artificially simulates a limited range of hearing between and kHz to more accurately match the real world.

Unsupervised training

The neural network was trained on a simple set of frequency-modulated (FM) sweeps and pure tones.
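The Oja weight update can be sketched in a few lines. This is a minimal illustration rather than the project's actual code; the input dimension, learning rate, and synthetic data are all assumptions.

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    """One Oja's-rule step for a single output unit: dw = eta * y * (x - y * w)."""
    y = np.dot(w, x)                   # post-synaptic activation
    return w + eta * y * (x - y * w)   # self-normalizing Hebbian update

# Repeated presentations drive the weight vector toward the first
# principal component of the inputs, with its norm bounded near 1.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
w /= np.linalg.norm(w)
for _ in range(2000):
    x = rng.normal(size=3) * np.array([3.0, 1.0, 0.3])  # variance dominated by axis 0
    w = oja_update(w, x)
```

Unlike a plain Hebbian update (Δw = η·y·x), the subtracted y²·w term keeps the weights from growing without bound, so no explicit normalization step is needed during training.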
The model modified its own weights according to Oja's rule after each presentation of a timestep in the training set. Only one or two artificial neurons were trained at a time, depending on the initial frequency of the training stimulus.

Moving ripples

The moving ripple stimuli are complex, broadband noises used to determine the STRFs of artificial neurons. They are composed of hundreds of densely packed, log-spaced pure tones that are sinusoidally modulated in the spectral and temporal domains. The ripple equation is given as:

    S(x, t) = 1 + ΔA · sin[2π(w·t + Ω·x) + φ]

where S(x, t) is the intensity at frequency-time point (x, t), ΔA is the modulation depth, w is the ripple velocity (Hz), Ω is the ripple frequency (cycles/octave), and φ is the phase shift (radians). These ripple stimuli were varied separately across two parameters: the ripple velocity w (Hz) and the ripple frequency Ω (cycles/octave). The transfer function (TF) is a broad characterization of an artificial neuron's response to the various ripple stimuli and is defined by:

    T(w, Ω) = M · e^(iΦ)

where Φ is the response phase (radians) and M is the response magnitude. A two-dimensional inverse Fourier transform was applied to the transfer function to generate the desired STRF.

Figure 1. The basilar membrane and frequency-selectivity.
Figure 2. STRFs from Mexican free-tailed bats. The image on the left shows the STRF from a neuron with blocked inhibition, and the image on the right shows the STRF from a neuron with inhibition. Image taken from "Spectrotemporal Receptive Fields in the Inferior Colliculus Revealing Selectivity for Spectral Motion in Conspecific Vocalizations" by Andoni et al.
Figure 3. Schematic of the neural network.
Figure 4. Generation of STRFs through ripples.
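The ripple envelope can be sketched directly from its equation. The parameter values below are illustrative, not the ones used in the project:

```python
import numpy as np

def ripple(t, x, dA=0.9, w=8.0, Omega=1.0, phi=0.0):
    """Moving-ripple envelope S(x, t) = 1 + dA*sin(2*pi*(w*t + Omega*x) + phi).

    t     : time (s)
    x     : position on the log-frequency axis (octaves above the lowest tone)
    dA    : modulation depth
    w     : ripple velocity (Hz)
    Omega : ripple frequency (cycles/octave)
    phi   : phase shift (radians)
    """
    return 1.0 + dA * np.sin(2 * np.pi * (w * t + Omega * x) + phi)

# Evaluate the envelope on a 250 ms x 5 octave grid; in the full stimulus
# this envelope would modulate hundreds of log-spaced pure-tone carriers.
t = np.linspace(0.0, 0.25, 250)
x = np.linspace(0.0, 5.0, 100)
X, T = np.meshgrid(x, t)        # shapes (250, 100): time rows, frequency columns
S = ripple(T, X)
```

Measuring a neuron's response magnitude and phase over a grid of (w, Ω) values fills in the transfer function, and a two-dimensional inverse Fourier transform of that table then yields the STRF.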