Presentation on theme: "Auditory Cortex 2 Sept 27, 2017 – DAY 13" — Presentation transcript:

1 Auditory Cortex 2 Sept 27, 2017 – DAY 13
Brain & Language (LING NSCI), Harry Howard, Tulane University

2 Course organization
I am still working on grading.

3 Review

4 Anatomy

5 Tonotopy from cochlea to core

6 Where is auditory cortex?

7 BA 41 actually curves down & onto the medial surface

8 A1 = core, A2 = belt, A3 = parabelt

9 It is the core that preserves tonotopy

10 Speech is different

11 Speech is different!
Speech perception is different from other forms of auditory perception because its targets are linked to a specialized system for their production … which we might engage when we listen to speech. This is the speech mode hypothesis.

12 Motor theory of speech perception, or Speech module (old speech mode)
Speech perception was assumed to link to:
1. the invariant movements of the speech articulators, then
2. the invariant motor commands sent to muscles to move the vocal tract articulators, then
3. a more general phonetic 'gesture'.

13 The big picture

14 Test: Voice-onset time (VOT)

15 We tend to perceive things on a continuum

16 VOT is perceived categorically
[Figure: perception of speech along the VOT (voice onset time) continuum, from Lisker & Abramson (1970).]
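To make the categorical pattern concrete, here is a toy sketch in Python of an identification function over the VOT continuum: responses follow a steep sigmoid rather than a gradual ramp, so the percept flips abruptly near the category boundary. The boundary and steepness values are invented for illustration and are not Lisker & Abramson's measurements.

```python
# Toy categorical-perception curve: P(listener reports /p/) as a
# function of VOT. Boundary (~25 ms) and steepness are illustrative.
import numpy as np

def prob_voiceless(vot_ms, boundary=25.0, steepness=0.5):
    """Probability of a /p/ (voiceless) response at a given VOT."""
    return 1.0 / (1.0 + np.exp(-steepness * (vot_ms - boundary)))

for vot in range(0, 61, 10):
    print(f"VOT {vot:3d} ms -> P(/p/) = {prob_voiceless(vot):.2f}")
# Near 0 below the boundary, near 1 above it: a sharp category
# boundary rather than a smooth continuum.
```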

17 By the way …
Chinchillas do this too!

18 Weak vs. strong versions of SMH
The speech mode hypothesis (SMH):
Strong: when we listen to speech, we engage perceptual mechanisms specialized for speech.
Weak: when we listen to speech, we engage our knowledge of language.
More recent: speech production can be engaged for 'hard' tasks, like dealing with bits of words such as single syllables (e.g. VOT); for other, 'easier' tasks, it may not be necessary.

19 Auditory cortex 2

20 Dichotic listening
Dichotic listening is a procedure commonly used for investigating selective attention in the auditory domain. Two different messages are presented simultaneously, one to each ear, normally over a set of headphones. Participants are asked to pay attention to one of the messages or to both (the divided-attention condition), and may later be asked about the content of both (Ivry and Robertson, 1998, p. 25).
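As a minimal sketch of how such a stimulus can be constructed digitally, the snippet below writes two different signals into the left and right channels of one stereo buffer. Two pure tones stand in for the two spoken messages; numpy is assumed, and sounddevice is only a suggested playback route.

```python
# Dichotic stimulus sketch: each ear receives a different signal.
import numpy as np

fs = 44100                                   # sample rate (Hz)
t = np.linspace(0, 1.0, fs, endpoint=False)  # one second of time points
left_msg = np.sin(2 * np.pi * 440 * t)       # stand-in "message", left ear
right_msg = np.sin(2 * np.pi * 550 * t)      # different "message", right ear
stereo = np.column_stack([left_msg, right_msg]).astype(np.float32)

# Played over headphones (never loudspeakers, which mix at both ears),
# e.g.: import sounddevice as sd; sd.play(stereo, fs)
```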

21 Dichotic listening to speech
Strong right-ear advantage: stops (p, b, t, d, k, g); short duration, fast change
Weak right-ear advantage: liquids (l, r), glides (y, w), fricatives (f, v, θ, ð, s, z, ʃ, ʒ); medium duration
No right-ear advantage: vowels; long duration

22 Magnetic Resonance Imaging (MRI)
In 1977, a team led by Raymond Damadian produced the first image of the interior of the human body with a prototype device using nuclear magnetic resonance. Damadian's device uses liquid helium to supercool magnets in the walls of a cylindrical chamber. A subject is introduced into the chamber and thereby exposed to a powerful magnetic field. This field has a particular effect on the nuclei of the hydrogen atoms in the water that all cells contain, and that effect forms the basis of the imaging technique.

23 Magnetic Resonance Imaging (MRI)
All atomic nuclei spin on their axes. Nuclei have a positive electric charge, and any spinning charged particle acts as a magnet with north and south poles located on the axis of spin. The spin axes of the nuclei in the subject line up with the chamber's field, with the north poles of the nuclei pointing in the 'southward' direction of the field. Then a radio pulse is broadcast toward the subject. The pulse causes the axes of the nuclei to tilt with respect to the chamber's magnetic field, and as it wears off, the axes gradually return to their resting position within the field. As they do so, each nucleus becomes a miniature radio transmitter, giving out a characteristic pulse that changes over time, depending on the microenvironment surrounding it. For example, hydrogen nuclei in fats have a different microenvironment than those in water, and thus transmit different pulses. Due to such contrasts, different tissues transmit different radio signals, which a computer can coordinate into an image. This method is known as magnetic resonance imaging (MRI), and it can be used to scan the human body safely and accurately (Gregg 2002, based on Horowitz 1995).
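The 'radio pulse' step works because nuclei precess at the Larmor frequency, f = (γ/2π) · B0, and only a pulse at that frequency tips them. A quick computation for hydrogen nuclei follows; the gyromagnetic ratio is the standard value for ¹H, while the field strengths are illustrative assumptions.

```python
# Larmor frequency for protons: f = (gamma / 2*pi) * B0.
GAMMA_OVER_2PI_MHZ_PER_T = 42.577   # 1H gyromagnetic ratio, MHz per tesla

def larmor_mhz(b0_tesla):
    """Resonant (Larmor) frequency in MHz for hydrogen at field b0_tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b0_tesla

for b0 in (1.5, 3.0):               # typical clinical field strengths
    print(f"B0 = {b0} T -> {larmor_mhz(b0):.1f} MHz")
# 1.5 T -> 63.9 MHz; 3 T -> 127.7 MHz: radio frequencies, as the text says.
```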

24 functional Magnetic Resonance Imaging (fMRI)
An elaboration of MRI called functional MRI (fMRI) has become the dominant technique for the study of the functional organization of the human brain during cognitive, perceptual, sensory, and motor tasks. As Gregg (2002) explains it, the principle of fMRI is to take a series of images in quick succession and then analyze them statistically for differences. For example, the blood-oxygen-level dependent (BOLD) method introduced by Ogawa et al. (1990) exploits the fact that oxygenated hemoglobin and deoxyhemoglobin are magnetically different: oxygenated blood shows up better on MRI images than blood whose oxygen has been depleted by neural metabolism. This is exploited in the following type of procedure: a series of baseline images is taken of the brain region of interest while the subject is at rest. The subject then performs a task, and a second series is taken. The first set of images is subtracted from the second, and the areas that are most visible in the resulting image are presumed to have been activated by the task.
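A minimal numerical sketch of that subtraction logic, with random arrays standing in for registered image volumes; all shapes, values, and the threshold are invented for illustration, and the real statistics (motion correction, noise modeling, multiple-comparison control) are omitted.

```python
# Block-subtraction sketch: mean(task volumes) - mean(rest volumes).
import numpy as np

rng = np.random.default_rng(0)
rest = rng.normal(100.0, 5.0, size=(20, 64, 64))  # 20 baseline slices
task = rest + 0.0                                  # copy of the baseline
task[:, 30:34, 30:34] += 3.0                       # inject a "task" signal

diff = task.mean(axis=0) - rest.mean(axis=0)       # subtraction image
active = diff > 2.0                                # arbitrary cutoff
print("voxels flagged active:", int(active.sum())) # 16: the injected patch
```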

25 Obleser et al. (2010), Fig. 2
Random effects (N = 16) of the univariate analysis for speech > band-passed noise (red) and sound > silence (blue; overlay purple), based on smoothed and normalized individual contrast maps, thresholded at p < … at the voxel level with a cluster extent of at least 30 voxels; displayed on an average of all 16 subjects' T1-weighted images (normalized in MNI space).
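The caption describes a two-step threshold: a voxelwise statistical cutoff followed by a minimum cluster size of 30 voxels. Here is a hedged sketch of that logic on a synthetic statistic map; only the 30-voxel extent comes from the caption, while the voxel-level cutoff of 3.1 is an assumption (the p-value did not survive in this transcript).

```python
# Voxelwise threshold, then drop clusters smaller than 30 voxels.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
stat_map = rng.normal(0.0, 1.0, size=(40, 48, 40))
stat_map[10:14, 10:14, 10:14] += 5.0          # one synthetic "activation"

voxel_mask = stat_map > 3.1                   # assumed voxelwise cutoff
labels, n = ndimage.label(voxel_mask)         # connected components
sizes = ndimage.sum(voxel_mask, labels, index=range(1, n + 1))
big = [i + 1 for i, s in enumerate(sizes) if s >= 30]
surviving = np.isin(labels, big)              # only the big blob survives
print("surviving voxels:", int(surviving.sum()))
```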

26 Obleser et al. (2010), Fig. 3
Maps of correct classification for vowel and stop category: significantly above-chance classification voxels from the leave-one-out across-subjects classification. Axial (top panels) and sagittal (bottom panels) slices are shown, arranged for left- and right-hemisphere views and displayed on a standard T1 template brain. Note the sparse overlap between voxels significantly classifying vowel information (red) and stop information (blue); voxels allowing correct classification of both are marked in purple.
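The 'leave-one-out across-subjects classification' in this figure trains a classifier on 15 subjects' response patterns and tests it on the held-out 16th, rotating through all subjects. A sketch of that scheme with random stand-in data (scikit-learn is assumed; the paper's actual features were voxel activation patterns, so the accuracy below sits at chance):

```python
# Leave-one-subject-out cross-validation with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_subj, trials, voxels = 16, 12, 50
X = rng.normal(size=(n_subj * trials, voxels))  # stand-in voxel patterns
y = np.tile([0, 1], n_subj * trials // 2)       # e.g. vowel vs. stop labels
groups = np.repeat(np.arange(n_subj), trials)   # subject ID per trial

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"mean across-subject accuracy: {scores.mean():.2f}")  # ~0.50 here
```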

27 NEXT TIME
More on auditory cortex

28 Leftovers

29 Altmann et al. (2014), 'Categorical speech perception during active discrimination of consonants and vowels'
This bar graph depicts the mean (n = 15) response amplitudes from 430 to 500 ms after the onset of the second stimulus, within the left temporal cortex cluster as a region of interest.

30 Hickok & Poeppel's (2004) model superimposed on the brain
[Figure labels: dorsal stream, ventral stream.]

31 Old vs. new

32 Category boundary shifts
The shift in VOT is from 'bin' to 'pin': thus the phonetic feature detectors must compensate for the context, because they know how speech is produced? But Japanese quail do this too. A student pointed out the phonotactic constraint: spin vs. *sbin.
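In terms of the sigmoid sketch from the VOT slide earlier, such a context-driven boundary shift is just a move of the curve's midpoint: the same acoustic token lands on the other side of the boundary. The VOT value and boundary positions below are invented for illustration.

```python
# Boundary-shift sketch: same stimulus, different category boundary.
import numpy as np

def prob_voiceless(vot_ms, boundary, steepness=0.5):
    """Probability of a /p/ (voiceless) response at a given VOT."""
    return 1.0 / (1.0 + np.exp(-steepness * (vot_ms - boundary)))

vot = 22.0  # an ambiguous token
print("neutral context: P(/p/) =", round(float(prob_voiceless(vot, 25.0)), 2))
print("shifted context: P(/p/) =", round(float(prob_voiceless(vot, 15.0)), 2))
# ~0.18 vs. ~0.97: the identical sound is heard as /b/ in one context
# and /p/ in the other.
```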

33 Dichotic listening: tendencies of right-ear advantage by speech sound
No advantage: vowels (their acoustic cues do not depend on context)
Weak right-ear advantage: liquids (l, r), glides (j, w), fricatives
Strong right-ear advantage: stops
The acoustic cues for consonants depend on context [see p. 116] > special machinery?

34 Divided visual-field (hemifield) tachistoscopy
A tachistoscope is a device that displays an image for a specific amount of time. It can be used to increase recognition speed, to show something too fast to be consciously recognized, or to test which elements of an image are memorable.

