2 Language and the brain
Rajeev Raizada, Dept. of Brain & Cognitive Sciences
rajeev.raizada@gmail.com, raizadalab.org

3 Language and the brain
- Why bother with brain stuff in the first place?
- Key language areas, and lesion deficits
- Lots of interacting brain areas: it's not just a couple of areas on the left
- Interpreting brain activation: who cares which bit of the brain lights up? We want brain imaging to tell us about linguistic information processing, or linguistic representations

4 Language areas in the brain
- Some brain areas are specialised for language:
  - Broca's area: speech production
  - Wernicke's area: speech perception
  - On the left side of the brain (in ~95% of people); this is pretty much the only left-brain / right-brain saying that is actually true
- What does "specialised for language" actually mean?
  - If you lose these areas, you lose language
  - When you use language, you use those areas
  - BUT: that does not mean that they only do language; e.g. Broca's area may be involved in music perception

5 Broca's area: crucial for speech production
- Paul Broca (1861): patient "Tan"
- Tan's brain: lesion (injury) in left frontal cortex
- Severe deficit in speech production: could only say "tan"
- Good language comprehension

6 Auditory cortex and Wernicke's area
- Auditory cortex: all sounds pass into here
  - Mostly specialised for low-level features, e.g. raw frequency
  - Bilateral (on both the left and right sides of the brain)
- Wernicke's area (Carl Wernicke, 1874)
  - Patient with very poor speech comprehension but good speech production
  - Lesion on the left side, just behind auditory cortex
  - Specialised for processing "higher-level" sounds: speech

7 Auditory cortex and Wernicke's area (figure from http://www.physiology.wisc.edu/neuro524/)

8 Language areas in the brain (figure from the University of Washington's Digital Anatomist project)

9 There's more to language in the brain than just Broca's and Wernicke's areas
Friederici, A. D. (2012). The cortical language circuit: from auditory perception to sentence comprehension. Trends in Cognitive Sciences, 16(5), 262-268.

10 The claim "language is on the left" is a total over-simplification
Specht, K. (2013). Mapping a lateralization gradient within the ventral stream for auditory speech perception. Frontiers in Human Neuroscience, 7.

11 The claim "language is on the left" is a total over-simplification (continued)
Peelle, J. E. (2012). The hemispheric lateralization of speech processing depends on what "speech" is: a hierarchical perspective. Frontiers in Human Neuroscience, 6.

12 Speech: Not what you might expect
- What makes speech sound the way it does?
- Sine-wave speech demo page: http://www.lifesci.sussex.ac.uk/home/Chris_Darwin/SWS/
- How on earth can these weird whistles sound like spoken words? (A sketch of the synthesis idea is below.)
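The trick behind the demo: sine-wave speech throws away everything in the signal except the formant tracks, and replaces each track with a single time-varying sinusoid. Here is a minimal Python sketch of that synthesis idea, using made-up formant trajectories rather than tracks measured from real speech:

```python
import numpy as np
from scipy.io import wavfile

# Minimal sketch of sine-wave speech synthesis: each formant track is
# replaced by one time-varying sinusoid. The formant trajectories below
# are invented for illustration, not measured from a real utterance.
fs = 16000
t = np.linspace(0, 0.5, int(fs * 0.5), endpoint=False)

# Hypothetical formant tracks (Hz): F1 rises, F2 falls, F3 stays flat
f1 = np.linspace(300, 700, t.size)
f2 = np.linspace(2200, 1200, t.size)
f3 = np.full(t.size, 2600.0)

def sweep(freqs):
    # Integrate instantaneous frequency to get the sinusoid's phase
    phase = 2 * np.pi * np.cumsum(freqs) / fs
    return np.sin(phase)

signal = sweep(f1) + 0.5 * sweep(f2) + 0.25 * sweep(f3)
signal /= np.abs(signal).max()
wavfile.write("sws_demo.wav", fs, (signal * 32767).astype(np.int16))
```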

13-15 Spectrograms and formants: Sound frequencies along time
- Formant = peak in a speech sound's frequency spectrum
- F0 = fundamental frequency, F1 = 1st formant peak, F2 = 2nd peak, etc.
McGettigan, C., & Scott, S. K. (2012). Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends in Cognitive Sciences, 16(5), 269-276.
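A spectrogram is the energy in short, overlapping time windows across frequencies: a short-time Fourier transform. A minimal sketch, assuming a WAV file ("speech.wav" is a placeholder name); the crude F1 estimate at the end is only to show where formants live in the output:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Minimal sketch of how a spectrogram is computed: FFTs of short,
# overlapping windows. "speech.wav" is a placeholder filename.
fs, audio = wavfile.read("speech.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold stereo down to mono

freqs, times, power = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)

# Formants show up as horizontal bands of high energy. A crude estimate
# of F1 in one time slice: the highest-energy frequency bin below 1 kHz.
slice_idx = times.size // 2
low = freqs < 1000
f1_estimate = freqs[low][np.argmax(power[low, slice_idx])]
print(f"Rough F1 estimate at t={times[slice_idx]:.2f}s: {f1_estimate:.0f} Hz")
```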

16 Place of articulation
McGettigan, C., & Scott, S. K. (2012). Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends in Cognitive Sciences, 16(5), 269-276.

17 Some example formant transitions: http://www2.psychology.uiowa.edu/faculty/mcmurray/speechglossary/

18 Categorical perception: Not all acoustic differences make a difference as speech (Alvin Liberman & colleagues)
- Stimuli spread evenly along the /ba/-/da/ continuum: pure /ba/ = 1, pure /da/ = 10, perceptual boundary ~5
- Presented as pairs, a constant 3-step distance apart acoustically
- 1-4 and 7-10 do not cross the boundary: perceived as the same
- 3-6 and 4-7 do cross the boundary: perceived as different
(A sketch of this same/different logic is below.)
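The logic of the result fits in a few lines: a pair is heard as "different" only if its two members fall on opposite sides of the category boundary. A minimal sketch, with the boundary placed between stimuli 5 and 6 as an assumption consistent with the slide's "~5":

```python
# Minimal sketch of categorical perception on a /ba/-/da/ continuum:
# stimuli 1-10, boundary assumed between 5 and 6. A pair is judged
# "different" only if it straddles the category boundary.
BOUNDARY = 5.5

def category(stim: int) -> str:
    return "ba" if stim < BOUNDARY else "da"

def judged_different(a: int, b: int) -> bool:
    return category(a) != category(b)

for pair in [(1, 4), (3, 6), (4, 7), (7, 10)]:
    print(pair, "different" if judged_different(*pair) else "same")
# (1,4) and (7,10) -> same; (3,6) and (4,7) -> different,
# even though every pair is exactly 3 acoustic steps apart.
```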

19 Categorical perception and same / different judgments
- The acoustic difference within each pair is constant (1-4, 4-7, 6-9, etc.)
- The phonetic difference depends on whether the category boundary is crossed

20 Spectrograms of /ba/ and /da/ (figure)

21 Vowels in formant space: http://www.phys.unsw.edu.au/jw/voice.html

22 Different languages carve up acoustic space differently: /ra/ and /la/ in English and Japanese speakers
Raizada et al., Cerebral Cortex (2010)

23 Formants and perceptual discriminability
Raizada et al., Cerebral Cortex (2010)

24 Some patterns are more separable than others, given a particular set of measurements
(Figure: two weight-vs-height scatterplots. Sumo wrestlers vs. basketball players form well-separated clusters; faculty vs. students overlap heavily.)
- Wanted: one single task and one set of stimuli
- BUT: some people can do the task, and other people cannot
(A sketch of separability as classifier accuracy is below.)
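One concrete way to quantify "separability given a set of measurements" is cross-validated classifier accuracy. A minimal sketch with invented height/weight numbers chosen only to make the point; the classifier choice and the numbers are illustrative assumptions, not anything from the study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def group(mean, n=50):
    # Sample n (height, weight) points around an invented group mean
    return rng.normal(mean, [8.0, 10.0], size=(n, 2))

# Well-separated groups: basketball players vs. sumo wrestlers
X_sep = np.vstack([group([200, 100]), group([180, 150])])
# Heavily overlapping groups: faculty vs. students
X_olap = np.vstack([group([172, 75]), group([170, 72])])
y = np.array([0] * 50 + [1] * 50)

for name, X in [("separable", X_sep), ("overlapping", X_olap)]:
    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
# High accuracy for the first pair, near-chance for the second.
```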

25 Prediction: fMRI patterns for /ra/ and /la/ are more separable in the brains of English speakers than in those of Japanese speakers
- English speakers can perceive the /r/-/l/ distinction: their fMRI patterns should be separable
- Japanese speakers cannot perceive the /r/-/l/ distinction: their fMRI patterns should not be separable

26 Will neural pattern separability match perceptual discriminability?
- Predicted pattern separability, if it matches perception:
  - English: F3-difference > F2-difference
  - Japanese: F3-difference = F2-difference
- fMRI pattern-separability contrast: F3-separability minus F2-separability
- Is this neural difference greater for English than for Japanese speakers?
- Key point: the classifier is not told anything about people's behaviour, or about who is English or Japanese
(A sketch of the separability measure is below.)
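A minimal sketch of the pattern-separability logic, using a linear classifier's cross-validated accuracy on (trials x voxels) data. The exact separability measure in Raizada et al. (2010) may differ; the random arrays below stand in for real preprocessed fMRI patterns:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def pattern_separability(X: np.ndarray, y: np.ndarray) -> float:
    # X: (trials x voxels) activation patterns; y: stimulus labels.
    # The classifier never sees behaviour or group membership,
    # only voxel patterns and which stimulus produced them.
    return cross_val_score(LinearSVC(), X, y, cv=5).mean()

# Fake data standing in for one subject's voxel patterns
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))   # 80 trials, 200 voxels
y = np.repeat([0, 1], 40)        # 0 = /ra/, 1 = /la/
print(f"separability = {pattern_separability(X, y):.2f}")  # ~0.5: no signal
```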

27 Individual differences: Neural pattern separability predicts behavioural performance
Correlation after partialling out effects of group membership: r = 0.389, p < 0.05
Raizada et al., Cerebral Cortex (2010)
(A sketch of the partialling-out step is below.)
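"Partialling out group membership" means regressing the group variable out of both measures and then correlating the residuals. A minimal sketch with simulated per-subject values (the study's actual data are not reproduced here):

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    # Regress the covariate out of each variable, correlate the residuals
    def residualize(v, c):
        slope, intercept = np.polyfit(c, v, 1)
        return v - (slope * c + intercept)
    return stats.pearsonr(residualize(x, covariate), residualize(y, covariate))

# Simulated per-subject values, invented for illustration
rng = np.random.default_rng(2)
group = np.repeat([0.0, 1.0], 10)                    # 0=Japanese, 1=English
separability = 0.3 * group + rng.normal(0, 0.1, 20)  # neural measure
behaviour = 0.8 * separability + rng.normal(0, 0.1, 20)

r, p = partial_corr(separability, behaviour, group)
print(f"partial r = {r:.3f}, p = {p:.3f}")
```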

28 Same speech sound, very different formant patterns: http://www2.psychology.uiowa.edu/faculty/mcmurray/speechglossary/

29 How do we pull out the "invariant" sound?
- An interesting (but misguided?) theory: the motor theory of speech perception (Liberman et al.)
- You don't actually pull out an invariant sound
- You somehow perceive the motor articulation which made the sound

30 An analogy: The skull theory of face recognition (Google Images search for "Jon Stewart")
The skull determines the face's shape, but you don't recognise faces by reconstructing skulls; likewise, articulation produces the speech signal, but perceiving speech need not mean recovering the articulation.

31 If motor theory is true, then motor-cortex damage should impair speech perception
- But that turns out not to be the case: Stasenko, A., Garcea, F. E., & Mahon, B. Z. (2013). What happens to the motor theory of perception when the motor system is damaged? Language and Cognition, 5(2-3), 225-238.
- Nonetheless, motor cortex is often active during language tasks. What is it doing?

32 Meaning

33 Embodied theory of meaning
- Words are represented in terms of bodily sense modalities: vision, hearing, movement, etc.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.
Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences, 17(9), 458-470.
Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15(11), 527-536.

34 Embodied theory of meaning (figure)
Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15(11), 527-536.

35 Embodied theory of meaning (figure)
Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences, 17(9), 458-470.

36-37 Example: somatotopic representation of motor words
Pulvermüller, F. (2013), Trends in Cognitive Sciences.

38 Embodied-looking activation shows up even using corpus statistics
- "Gustatory cortex" for celery in Mitchell et al. (2008)
- Mouth / tongue areas

39 What about abstract words?
Pulvermüller, F. (2013), Trends in Cognitive Sciences.

40 Higher-level "abstraction" areas?
Pulvermüller, F. (2013), Trends in Cognitive Sciences.

41 Decoding word meanings from the brain

42 Decoding as a test of whether we really understand anything about the encoding
- How does the brain represent the meanings of words?
- Test: see if we can neurally decode the words

43 Video from CBS 60 Minutes: https://www.youtube.com/watch?v=8jc8URRxPIg

44 Semantic similarity: A quick demo to show that your brain cares about it
- You will be presented with a list of words. Try to remember as many as you can.
- The list: sour, sugar, bitter, good, taste, tooth, nice, honey, soda, tart, pie, cake, heart, chocolate, candy
- Were any of the following words in that list? soft, short, sweet, smooth
- The Deese-Roediger-McDermott effect: many people falsely remember "sweet", the semantically related lure, even though it was never presented.

45 People's attempts so far to neurally decode word meanings
- Important contribution: Tom Mitchell et al., Science, 2008
- fMRI data and semantic features publicly available at http://www.cs.cmu.edu/~tom/science2008

46 Modeling a continuous space
- Standard decoding: neural responses for Stim A, neural responses for Stim B
- Present some test-set neural data. Q: Was it elicited by A or by B?
- Problem: what if it is a new stimulus, C?

47 Interpolating between stimuli, using a model of the stimulus space
Kriegeskorte, N. (2011). Pattern-information analysis: from stimulus decoding to computational-model testing. NeuroImage, 56(2), 411-421.

48 Stimuli
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174.

49 The 60 words used as stimuli

50 The Mitchell study: word stimuli and semantic features
- Stimuli: concrete nouns, e.g. hammer, shirt, dog, celery (60 words in all); 12 categories (tools, clothing, food, etc.), each with 5 words
- Semantic features: action verbs, e.g. push, move, taste, see (25 semantic features in all)
- Each noun has a 25-element semantic feature vector of its co-occurrence frequencies with the verbs, from a Google text corpus:
  hammer = 0.13*break + 0.93*touch + 0.01*eat + ...
  celery = 0.00*break + 0.03*touch + 0.84*eat + ...
(A sketch of this vector representation is below.)
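A minimal sketch of this representation: each noun becomes a vector of normalised verb co-occurrence values, and semantic similarity falls out as vector similarity. Only the hammer/celery values shown on the slide come from the study; the verb list fragment and the remaining numbers are illustrative:

```python
import numpy as np

# Each noun = vector of co-occurrence values with sensorimotor verbs.
# First three values per noun are from the slide; the rest are invented.
verbs = ["break", "touch", "eat", "push", "see"]  # first 5 of the 25

features = {
    "hammer": np.array([0.13, 0.93, 0.01, 0.40, 0.20]),
    "celery": np.array([0.00, 0.03, 0.84, 0.01, 0.15]),
}

def cosine(a, b):
    # Semantic similarity as the angle between feature vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim = cosine(features["hammer"], features["celery"])
print(f"hammer-celery similarity: {sim:.2f}")  # low: dissimilar meanings
```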

51 Example co-occurrence features Features for cat: say said says (0.592), see sees (0.449), eat ate eats (0.435), run ran runs (0.303), hear hears heard (0.208), open opens opened (0.175), smell smells smelled (0.163), clean cleaned cleans (0.146), move moved moves (0.088), listen listens listened (0.075), touch touched touches (0.075) … http://www.cs.cmu.edu/~tom/science2008/semanticFeatureVectors.html 50

52 Mitchell et al. (2008): What do the semantic features look like in the brain?

53 Predicting brain activation

54 The Mitchell study: training and testing the model
- Training (carried out separately in each subject):
  - Stage 1: represent each noun as a weighted sum of semantic features
  - Stage 2: learn a linear mapping from those semantic features to the word-elicited fMRI patterns in each person's brain
- Testing: can the model predict neural activation for untrained words?
(A sketch of the encoding model is below.)
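A minimal sketch of Stage 2, assuming ridge regression as the regularised linear map (Mitchell et al.'s exact regression details may differ). The feature and voxel arrays are random placeholders with the study's dimensions:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Encoding model: linear map from 25-dim semantic feature vectors to
# voxel activation patterns. Arrays below are random placeholders.
rng = np.random.default_rng(3)
n_words, n_features, n_voxels = 58, 25, 500   # 58 training words

F = rng.normal(size=(n_words, n_features))    # semantic feature vectors
Y = rng.normal(size=(n_words, n_voxels))      # word-elicited fMRI patterns

model = Ridge(alpha=1.0).fit(F, Y)            # alpha is an arbitrary choice

# Predict the voxel pattern for an unseen word from its features alone
new_word_features = rng.normal(size=(1, n_features))
predicted_pattern = model.predict(new_word_features)  # shape (1, n_voxels)
```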

55 The Mitchell study: decoding results
- "Leave two out" testing strategy:
  - Remove each pair of words in turn from the 60-word set
  - Train the model on the remaining 58 words
  - Predict neural patterns for the two test words
  - Match those predictions against the test words' actually elicited activations
- Success rate for decoding left-out word pairs: 77% (chance level is 50%)
- A hand-crafted feature set raised this to 80.9%
(A sketch of the leave-two-out matching step is below.)
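A minimal sketch of the matching step: for a held-out pair, the test is whether pairing each predicted pattern with its correct observed pattern scores better than the swapped pairing. Correlation is used here as the match score, one common choice rather than a detail confirmed from the paper; the patterns below are random placeholders:

```python
import numpy as np

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def pair_correct(pred1, pred2, obs1, obs2):
    # Success if the correct pairing beats the swapped pairing
    right = corr(pred1, obs1) + corr(pred2, obs2)
    wrong = corr(pred1, obs2) + corr(pred2, obs1)
    return right > wrong

# Fake observed patterns plus noisy-but-informative predictions
rng = np.random.default_rng(4)
obs1, obs2 = rng.normal(size=500), rng.normal(size=500)
pred1 = obs1 + rng.normal(0, 1.0, 500)
pred2 = obs2 + rng.normal(0, 1.0, 500)
print(pair_correct(pred1, pred2, obs1, obs2))
# Averaged over all C(60, 2) = 1770 word pairs, Mitchell et al. report 77%.
```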

56 Summary
- The brain is astonishingly good at processing language; nobody understands how it achieves this, but we do have some exciting leads
- Lots of brain areas, all representing multiple types of information, all communicating with each other; not just Broca's and Wernicke's areas, and not just in the left hemisphere
- Challenges for neuroscience: What information-processing tricks does the brain use? What representations does it use, and how does it use them?

