
Complex Sound Discrimination Abilities in Rats and the Effects of Multiple Training Manipulations
A.C. Puckett, C.T. Novitski, N.D. Engineer, A.L. McMenamy, M.S. Perry, C.A. Perez, P. Kan, Y.H. Chen, V. Jakkamsetti, C.L. Heydrick, M.P. Kilgard, The University of Texas at Dallas, Richardson, TX

Introduction
The current experiments examined the ability of rats to discriminate among complex sounds, including tone-noise sequences and speech stimuli.

Methods
Behavioral data were collected from 46 rats over 4,661 total daily training sessions. All animals performed Go/No-Go discrimination tasks in the same operant training booths. The stimuli used as the CS+ and CS- varied among training tasks, but the general timeline of training and the procedures were the same for all tasks.

Stages of Training
Shaping
Goal: Animals learn to press a lever to receive food rewards.
Time course: ~6 sessions (3 days). Continued until animals completed 3 sessions in which they retrieved 50 pellets independently.

Detection Training
Goal: Animals learn to press to receive rewards only after they hear a sound, and must learn to avoid pressing during silent periods.
Time course: ~30 sessions (3 weeks). Continued until animals responded significantly more on CS+ trials than on catch trials for 10 sessions (i.e., d' >= 1.5 for 10 sessions).

Discrimination Training
Goal: Animals learn to press to receive rewards only after they hear a CS+, and must avoid pressing after CS- stimuli or during silent periods.
Time course: Variable (20-300 sessions).
Sequence discrimination: Animals were trained for up to 300 sessions or until behavioral criteria were met.
Speech discrimination: Animals progressed through a variety of CS- conditions after 2 weeks of training on each task, regardless of behavioral performance.
Understanding how normal animals discriminate complex sounds is necessary to fully understand the auditory cortex and how injury, learning, or plasticity can change perception and cortical functioning.

SEQUENCE DISCRIMINATION
Conclusions
- Frequency discriminations are easy, but other sequence discriminations are difficult.
- Onsets are the most salient elements of sequences.
  - Sequences beginning with the same element are difficult to discriminate.
  - Reversed-sequence discrimination was impossible, despite different initial elements.
- Sequences may normally be processed as a 'unit' rather than as discrete elements.
- Discrimination strategies may be changed by training.
  - If rats learn a poor strategy early in training, they will not learn to discriminate effectively.
  - Intermediate exemplars allowed animals to adopt effective strategies.

SPEECH DISCRIMINATION
Conclusions
- Large spectral differences are easy for rats to discriminate, even if the differences are not at the onset of the speech sound.
  - Onsets/consonants: /dad/ vs. /bad/ and /gad/
  - Middle of stimulus/vowels: /dad/ vs. /deed/ and /dood/
- Subtle spectral differences are difficult for rats to discriminate (and are difficult for humans as well).
  - Onsets/consonants: /rad/ vs. /lad/
  - Middle of stimulus/vowels: /dad/ vs. /dead/ and /dud/
- Rats can generalize among several variants of a speech stimulus.
  - Rats could generalize across several compressed exemplars, indicating that VOT was not the only cue.
  - Rats could generalize across several different speakers.
  - Rats were able to generalize across many different fundamental frequencies, temporal patterns of enunciation, and other idiosyncrasies.
- Speech sounds seem to be more easily discriminated than tone-noise sequences.

Speech Stimulus Creation
Recording: Monosyllabic words spoken by a female native English speaker in a sound-proof chamber. The words 'dad' and 'tad' were also recorded from 5 other native English speakers (3 male, 2 female) to assess speaker generalization.
Frequency shifting: The fundamental frequency and all other formants were shifted into the rat's hearing range by doubling their frequencies. Compressed versions of 'dad' and 'tad' were generated to assess temporal generalization.
Noise reduction: Background noise was subtracted from each signal.
Filtering: Each signal was filtered to correct for the frequency-response curve of the booth speaker.
Intensity adjustment: The RMS values of the signals were adjusted so that the loudest 100 ms of each vowel was 60 dB SPL.

Apparatus
Sounds are delivered from a speaker placed outside the cage, positioned so that sounds are usually delivered to the animal's left ear.
L - Lever: The rat responds to sound stimuli by pressing the lever. Only presses within 3 seconds of CS+ stimuli are rewarded.
P - Pellet dispenser: A dispenser (MED Associates) located outside the sound-proof chamber delivers a 45 mg food reward (Bio-Serv) to the rat after correct responses.
H - House light: The house light is extinguished after false alarms or late responses.

Future Directions
- Assessment of changes in perceptual abilities after NB-stimulation pairing (Amanda Puckett)
- Measurement of changes in auditory cortex after long-term sequence training (Dr. Navzer Engineer)
- Measurement of auditory cortex responses after long-term speech training (Crystal Novitski)
- Assessment of cortical processing of speech sounds after environmental enrichment (Vikram Jakkamsetti)
- Assessment of loss of perceptual abilities after cortical injury (Dr. Owen Floody)
- Information theory analysis of cortical responses to speech sounds (Helen Chen)

In Loving Memory of Matt Perry (March September 2005)
In honor of the neuroscientist who delved into the mysteries and whims of life with wholehearted delight. Thank you to one who touched the minds he sought to understand. As a true inquirer into philosophy, pharmacology, and literature, Matt enriched rats as well as people. It is an honor to have shared a common path with you.
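The intensity-adjustment step in Speech Stimulus Creation amounts to finding the loudest 100 ms window of each signal and scaling the whole signal so that window's RMS hits a fixed target. A minimal sketch (the linear target value, sampling rate, and test tone are illustrative; mapping to the poster's 60 dB SPL would require the booth speaker's calibrated reference):

```python
import numpy as np

def scale_to_peak_rms(signal, fs, target_rms, window_ms=100):
    """Scale `signal` so the RMS of its loudest `window_ms` window
    equals `target_rms` (linear units; converting to dB SPL needs a
    calibrated reference for the playback system)."""
    win = int(fs * window_ms / 1000)
    # Running mean of squared samples over the window length,
    # i.e. the mean power of every 100 ms window.
    power = np.convolve(signal ** 2, np.ones(win) / win, mode="valid")
    loudest_rms = np.sqrt(power.max())
    return signal * (target_rms / loudest_rms)

# Illustrative use on a synthetic 1 s, 440 Hz tone.
fs = 16000
t = np.arange(fs) / fs
tone = 0.3 * np.sin(2 * np.pi * 440 * t)
scaled = scale_to_peak_rms(tone, fs, target_rms=0.1)
```

Normalizing to the loudest window rather than the whole-signal RMS keeps the vowel level constant across tokens even when the surrounding consonants and silences differ in duration.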
Acknowledgements
We wish to thank all the members of the Kilgard Lab behavioral team. Research supported by NIH R21#1R15DC.