Auditory Cortex 3 Sept 29, 2017 – DAY 14

1 Auditory Cortex 3 Sept 29, 2017 – DAY 14
Brain & Language (LING/NSCI), Harry Howard, Tulane University

2 Course organization
Fun with …
I am still working on grading.

3 Review

4 It is the core that preserves tonotopy

5 Weak vs. strong versions of SMH
The speech mode hypothesis (SMH):
Strong: when we listen to speech, we engage perceptual mechanisms specialized for speech.
Weak: when we listen to speech, we engage our knowledge of language.
More recent: speech production can be engaged for 'hard' tasks, such as dealing with bits of words like single syllables (e.g., VOT judgments); for other, 'easier' tasks it may not be necessary.

6 Key acoustic cues

7 A clearer image

8 Dichotic listening to speech
Strong right-ear advantage: stops (p, b, t, d, k, g); short duration, fast change.
Weak right-ear advantage: liquids (l, r), glides (y, w), fricatives (f, v, θ, ð, s, z, ʃ, ʒ); medium duration.
No right-ear advantage: vowels; long duration.
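As an aside not on the original slide, the strength of a right-ear advantage in a dichotic-listening task is often summarized as a laterality index. The sketch below is a minimal illustration; the correct-report counts are made-up numbers, not course data.

```python
# Minimal illustration of a dichotic-listening laterality index.
# The correct-report counts below are made-up numbers, not course data.
def ear_advantage(right_correct, left_correct):
    """(R - L) / (R + L): positive values indicate a right-ear (left-hemisphere) advantage."""
    return (right_correct - left_correct) / (right_correct + left_correct)

print(ear_advantage(42, 26))  # e.g. stops: strong right-ear advantage
print(ear_advantage(36, 32))  # e.g. liquids/glides/fricatives: weak advantage
print(ear_advantage(34, 34))  # e.g. vowels: no advantage
```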

9 Obleser et al. (2010), Fig. 2
Random effects (N = 16) of the univariate analysis for speech > band-passed noise (red) and sound > silence (blue; overlay purple), based on smoothed and normalized individual contrast maps, thresholded at p < … at the voxel level with a cluster extent of at least 30 voxels; displayed on an average of all 16 subjects' T1-weighted images (normalized in MNI space).
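To make the thresholding recipe in this caption concrete, here is a hedged sketch of a voxel-level cutoff followed by a 30-voxel cluster-extent criterion. It is not the authors' SPM pipeline; stat_map and thresh are hypothetical inputs.

```python
# Hedged sketch of voxel-level + cluster-extent thresholding (not the authors' pipeline).
# stat_map: hypothetical 3-D array of voxel-wise statistics; thresh: voxel-level cutoff.
import numpy as np
from scipy import ndimage

def cluster_threshold(stat_map, thresh, min_extent=30):
    supra = stat_map > thresh              # voxel-level threshold
    labels, n_clusters = ndimage.label(supra)  # connected suprathreshold clusters
    keep = np.zeros_like(supra)
    for i in range(1, n_clusters + 1):
        cluster = labels == i
        if cluster.sum() >= min_extent:    # keep only clusters of >= 30 voxels
            keep |= cluster
    return keep
```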

10 Obleser et al. (2010), Fig. 3
Maps of correct classification for vowel and stop category: voxels classifying significantly above chance in the leave-one-out across-subjects classification. Axial (top panels) and sagittal (bottom panels) slices are shown, arranged for left- and right-hemisphere views and displayed on a standard T1 template brain. Note the sparse overlap between voxels significantly classifying vowel information (red) and stop information (blue); voxels allowing correct classification of both are marked in purple.
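The leave-one-out across-subjects classification mentioned here can be sketched generically as follows. This is an illustration with scikit-learn, assuming hypothetical voxel patterns X, category labels y, and subject IDs groups; it is not Obleser et al.'s actual analysis code.

```python
# Generic leave-one-subject-out classification sketch (not the authors' code).
# X: (n_samples, n_voxels) voxel patterns, y: vowel/stop category labels,
# groups: subject ID per sample; all hypothetical inputs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

def leave_one_subject_out_accuracy(X, y, groups):
    """Train on all subjects but one, test on the held-out subject."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = LinearSVC().fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return np.mean(scores)  # compare with chance (1 / number of categories)
```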

11 Auditory cortex 3

12 Obleser et al. (2010), fig. 2
(A) Exemplary single-subject activations in the sound > silence contrast, thresholded at p < … at the voxel level, ….
(B) Examples of single-subject vowel (red) and stop (blue) classification performance. Voxels in the respective participants shown (…) could correctly classify vowel or stop category above chance (only voxels with performance > 50% are shown) in a given participant's data, when trained on the 15 remaining participants.

13 Obleser et al. (2010)
(A) Overlay of the classification results on a probability map of primary auditory cortex, using the labels of Morosan et al., 2001 (TE 1.2–TE 1.0). Notably, TE 1.2 appears relatively spared by voxels that allow significant across-subjects classification of vowel or consonant category.
(B) Illustration of the regions of interest used. A simple selection procedure was applied: voxels that fell within the probabilistic bounds of primary auditory cortex (following Rademacher et al., 2001) defined lPAC and rPAC; voxels anterior to these defined lANT and rANT, voxels posterior to them defined lPOST and rPOST, and voxels within the posterior-anterior bounds of PAC but more lateral defined lMID and rMID.

14 Lesions
A lesion is a non-specific term for abnormal tissue in the body. A lesion can be caused by any disease process, including trauma (physical, chemical, electrical), infection, neoplasm, and metabolic or autoimmune disease.

15 Review of prosody
Prosody is the quality of spoken language that provides its melodic contour and rhythm, features which help the hearer decode syntactic and lexical meaning as well as emotional content. Prosody differentiates, say, the neutral statement of fact "It's my fault." from the sarcastic, question-like rejoinder "It's MY fault?". Such distinctions are produced by variation in three parameters, each borne by a corresponding quality of the sound wave:
sound waves → prosody
fundamental frequency → pitch
intensity → stress
timing → duration
Hypothesis?

16 Prosody and the RH
It has been known since the 1970s that the right hemisphere dominates in the perception of prosody. The initial evidence came from descriptions of right-hemisphere lesions resulting in a pattern of aprosodias (deficits in either the expression or the understanding of prosody), analogous to the well-documented pattern of left-hemisphere lesions resulting in the various aphasias.
With respect to production, the speech of patients with right-hemisphere lesions has been characterized as monotonous and unmodulated. For instance, Ross & Mesulam (1979) report a patient who had difficulty disciplining her children because they could not detect when she was upset or angry; she eventually learned to emphasize her speech by adding "I mean it!" to the end of her sentences.
With respect to perception, studies such as that of Tucker, Watson & Heilman (1977) asked people to identify semantically neutral sentences intoned to convey happiness, sadness, anger, or indifference. Patients with RHD (right-hemisphere damage) were impaired on both identification and discrimination of such affective meanings, in comparison to both healthy controls (NBD) and patients with LHD (left-hemisphere damage).
On aprosodia, see Ross (1981).

17 Kinds of linguistic prosody
Lexical and phrasal prosody (see the next two slides).
Sentence type and prosodic contour.
Contrastive (or emphatic or focal) stress.
Determining whether two sentences are identical based on any of these stress patterns.

18 Lexical prosody (CAPS mark stressed syllable)
Noun vs. verb in English (±15): CONvert vs. conVERT.
Thai, a tone language: naa with a rising pitch tone means "thick"; naa with a falling pitch tone means "face".
LHD (but not RHD) affects both of these rules.

19 Stimuli for next experiment

20 Luo (2006): Opposite patterns of hemisphere dominance for early auditory processing of lexical tones and consonants
We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and, using an odd-ball paradigm, infrequently varied either its lexical tone or its initial consonant to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger pre-attentive response, as revealed by whole-head electric recordings of the mismatch negativity [HH: an EEG evoked response], in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced the opposite pattern.
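For concreteness, the mismatch-negativity contrast behind this result is simply a deviant-minus-standard difference wave. The sketch below is a generic NumPy illustration with hypothetical epoched EEG arrays, not Luo (2006)'s analysis.

```python
# Generic sketch of a mismatch-negativity (MMN) contrast in an oddball design.
# epochs: hypothetical array of shape (n_trials, n_channels, n_times);
# is_deviant: boolean array marking deviant trials.
import numpy as np

def mmn_difference_wave(epochs, is_deviant):
    deviant_erp = epochs[is_deviant].mean(axis=0)    # average over deviant trials
    standard_erp = epochs[~is_deviant].mean(axis=0)  # average over standard trials
    return deviant_erp - standard_erp                # MMN: deviant minus standard

# A hemisphere comparison could then average this difference wave over
# right- vs. left-hemisphere channel groups in the typical MMN time window.
```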

21 Kwok et al. (2016): Neural systems for auditory perception of lexical tones
Fig. 1. Brain regions with significant activity during auditory perception of lexical tones. Fig. 1a: axial sections; Fig. 1b: lateral view. The significance threshold is p < … (uncorrected) at the voxel level and p < 0.05 (FWE-corrected) at the cluster level. The functional maps (in color) are overlaid on the corresponding T1 images (in gray scale). Planes are axial sections, labeled with their height (mm) relative to the bicommissural line. L = left hemisphere; R = right hemisphere.

22 Phrasal prosody
Compound noun rule:
noun phrase: hot DOG (a dog that is hot)
adjective + noun: HOTdog (a frankfurter)
noun + noun: SHEEPdog (a breed of dog)
Stress retraction:
After eating fourTEEN, CAKES did not tempt him.
After eating FOURteen CAKES, he threw up.
LHD (but not RHD) affects both of these rules.

23 Contrastive (or emphatic or focal) stress [clausal prosody]
Examples:
The horses were racing from the BARN.
The HORSES were racing from the barn.
LHD (but not RHD) affects this.

24 Sentence type and prosodic contour
Types:
declarative: fall in pitch at the end. I eat chocolate.
interrogative: rise for a yes-no question (a); fall for an interrogative pronoun (b). (a) Do you eat chocolate? (b) What do you eat?
imperative: even pitch throughout; rise in intensity at the end. Eat chocolate!
RHD (but not LHD) reduces accuracy and variation in pitch.

25 Summary
LH (preserved in RHD):
lexical stress: CONvert ~ conVERT; tone languages
phrasal stress: noun compounding, stress retraction
clausal stress: contrastive stress
RH (preserved in LHD):
emotional prosody
sentence type: declarative, interrogative, imperative

26 Cortical deafness
Cortical deafness is an auditory disorder in which the patient is unable to hear sounds despite having no apparent damage to the anatomy of the ear; it can be thought of as the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds; that is, they are not aware of sounds, including non-speech sounds, voices, and speech. Cortical deafness is caused by bilateral lesions of the primary auditory cortex in the temporal lobes. It is extremely rare, with only twelve reported cases.

27 NEXT TIME
P4
On to STG & STS

28 Leftovers

29 Altmann (2014): Categorical speech perception during active discrimination of consonants and vowels
This bar graph depicts the mean (n = 15) response amplitudes from 430 to 500 ms after second-stimulus onset within the left temporal cortex cluster, used as a region of interest.

30 Hickok & Poeppel (2004)’s model superimposed on the brain
Dorsal and ventral streams (figure labels).

31 Old vs. new

32 Category boundary shifts
The shift in VOT is from 'bin' to 'pin'. Thus the phonetic feature detectors must compensate for the context; is that because they know how speech is produced? But Japanese quail do this too.
A student pointed out the phonotactic constraint: spin vs. *sbin.
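The boundary shift can be quantified by fitting a psychometric function along the VOT continuum. The sketch below fits a logistic curve to made-up 'pin' response proportions; the numbers are illustrative only, not data from the course.

```python
# Hedged sketch: estimating a /b/-/p/ category boundary along a VOT continuum
# by fitting a logistic psychometric function. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_ms = np.array([0, 10, 20, 30, 40, 50, 60])                  # VOT continuum (ms)
p_pin = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99])    # proportion of "pin" responses

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_pin, p0=[30.0, 0.2])
print(f"estimated category boundary: {boundary:.1f} ms VOT")
# A context that shifts listeners' responses (e.g., a preceding /s/) would shift this boundary.
```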

33 Tendencies of right-ear advantage by speech sound
Dichotic listening:
No advantage: vowels
Weak right-ear advantage: liquids (l, r), glides (j, w), fricatives
Strong right-ear advantage: stops
The acoustic cues for vowels do not depend on context, while the acoustic cues for consonants do depend on context [see p. 116]: special machinery?

34 Ethofer et al. (2006): Effects of prosodic emotional intensity on activation of associative auditory cortex
Passive listening to adjectives and substantives with neutral word content, spoken in five different emotional intonations (happy, neutral, fearful, angry, and alluring).
All four emotional categories induced stronger responses within the right mid-STG than neutral stimuli did.
These responses were significantly correlated with several acoustic parameters (stimulus duration, mean intensity, mean pitch, and pitch variability).
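As an illustration only, the reported brain-acoustics relationships amount to correlating a per-stimulus response measure with each acoustic parameter. The sketch below uses hypothetical arrays, not Ethofer et al.'s data.

```python
# Generic sketch of correlating a response measure with acoustic parameters.
# 'responses' and 'params' are hypothetical inputs, not data from the study.
import numpy as np

def correlate_with_acoustics(responses, params):
    """params: dict mapping a parameter name (e.g. 'mean pitch') to per-stimulus values."""
    return {name: float(np.corrcoef(responses, values)[0, 1])
            for name, values in params.items()}
```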

