fMRI and Behavioral Studies of Human Face Perception
Ronnie Bryan, Vision Lab
11.07.06
Outline
– fMRI studies of face perception and Multi-Voxel Pattern Analysis
– The role of attention in face perception
Part I: fMRI Study
Traditional fMRI Analyses
– Two conditions of interest
– Compare means (t-test)
– Spatial smoothing and statistical thresholding
– “Area X lit up” ==> “X is the Condition 1 minus Condition 2 area”
Face Perception in the Brain
– Kanwisher: An Extrastriate Module
– Gauthier: An Expertise Module
– Malach: Retinotopic Mapping
– Haxby: Distributed Patterns
Haxby 2001
– Used correlation between brain states to predict category; possible even when “modules” were removed
– Multi-Voxel Pattern Analysis / Decoding: using machine learning methods to classify cognitive states
– Open issues: voxel selection; are linear classifiers sufficient?
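The Haxby 2001 decoding scheme can be sketched in a few lines: average the voxel pattern per category on training runs, then assign a held-out pattern to the category whose training pattern it correlates with best. The toy patterns and the two-category setup below are illustrative assumptions, not the study's data.

```python
import math

# Hypothetical toy data: mean voxel pattern per category from training runs
# (in Haxby 2001 these would come from half of each subject's fMRI runs).
train_means = {
    "faces":  [1.0, 0.2, 0.8, 0.1],
    "houses": [0.1, 0.9, 0.2, 1.0],
}

def pearson(a, b):
    """Pearson correlation between two equal-length voxel patterns."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da, db = [x - ma for x in a], [y - mb for y in b]
    return sum(x * y for x, y in zip(da, db)) / math.sqrt(
        sum(x * x for x in da) * sum(y * y for y in db))

def decode(test_pattern):
    """Assign the category whose training pattern correlates best with the
    held-out test pattern (the correlation scheme of Haxby et al. 2001)."""
    return max(train_means, key=lambda c: pearson(train_means[c], test_pattern))
```

For example, `decode([0.9, 0.3, 0.7, 0.2])` returns `"faces"` because that pattern correlates positively with the face template and negatively with the house template.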
Stepwise K-Nearest Neighbor
– Combines classification and voxel selection: iterate through combinations of voxels to find the subset that performs best on training data
– Uses k Nearest Neighbors to classify: local distances are potentially more sensitive to nonlinearities in the data
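The slide describes the method only at a high level; the greedy forward search order and the stopping rule below are assumptions, sketched in pure Python on toy data rather than real fMRI volumes.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, dims, k=1):
    """Classify x by majority vote among its k nearest training patterns,
    measuring distance only over the selected voxel indices `dims`."""
    dist = lambda a: math.dist([a[i] for i in dims], [x[i] for i in dims])
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def loo_accuracy(data, labels, dims, k=1):
    """Leave-one-out accuracy on the training set for a given voxel subset."""
    hits = 0
    for i in range(len(data)):
        rest = data[:i] + data[i + 1:]
        rest_labels = labels[:i] + labels[i + 1:]
        hits += knn_predict(rest, rest_labels, data[i], dims, k) == labels[i]
    return hits / len(data)

def stepwise_select(data, labels, k=1):
    """Greedy forward selection: repeatedly add the single voxel that most
    improves leave-one-out kNN accuracy; stop when no voxel helps."""
    selected, best = [], 0.0
    candidates = set(range(len(data[0])))
    while candidates:
        acc, v = max((loo_accuracy(data, labels, selected + [v], k), v)
                     for v in candidates)
        if acc <= best:
            break
        selected.append(v)
        candidates.remove(v)
        best = acc
    return selected, best
```

On a toy set where voxel 0 separates the classes and voxel 1 is noise, the search picks voxel 0 and stops.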
Experiment
– Participants (N=13) viewed 7 object-level categories in blocks of 20 images: shoes, chairs, houses, dog faces, monkey faces, male faces, female faces
– 8 time series of 192 whole-brain EPI volumes (32 3-mm axial slices, 64x64 matrix, TR=2s)
Analysis
– Brain volumes were anatomically masked to restrict analysis to ventral temporal cortex (y=-70 to -20): ~3,500 voxels, including the parahippocampal and fusiform gyri
– Each subject’s dataset was analyzed individually with 8 leave-one-out cross-validation cycles; reported performance is the average of these results
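The 8-cycle scheme is leave-one-run-out cross-validation: each cycle holds out one of the 8 time series for testing and trains on the other 7. A minimal sketch, where `evaluate` is a stand-in for training and testing any classifier (an assumption, not the talk's actual code):

```python
def leave_one_run_out(runs, evaluate):
    """Cross-validation over fMRI runs: each cycle holds out one run for
    testing, trains on the rest, and the mean accuracy is reported.
    `evaluate(train_runs, test_run)` returns that cycle's accuracy."""
    scores = [evaluate(runs[:i] + runs[i + 1:], runs[i])
              for i in range(len(runs))]
    return sum(scores) / len(scores)
```

With 8 runs this yields the 8-cycle average reported on the slide.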
Results
Stepwise KNN performed significantly better than the other tested methods.
Stepwise Addition of Voxels
Confusion Matrix
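A confusion matrix tallies, for each presented category, how often the classifier decoded each category; the diagonal holds correct classifications. A minimal sketch (label names are illustrative):

```python
from collections import Counter

def confusion_matrix(true_labels, predicted, categories):
    """Rows = presented category, columns = decoded category; entry (t, p)
    counts how often category t was classified as p."""
    counts = Counter(zip(true_labels, predicted))
    return [[counts[(t, p)] for p in categories] for t in categories]
```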
Decoding Gender
– None of the other methods tested could distinguish between male and female faces
– Averaged across subjects, stepwise KNN performed at 60.5% (p<0.05)
Similarity Structure
Distances between object categories based on classifier confusions
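One common way to turn confusions into distances is to treat frequently confused categories as close; the `1 - mean(confusion proportions)` convention below is one such choice, not necessarily the one used in the talk.

```python
def confusion_distance(conf):
    """Convert a confusion-count matrix into a symmetric dissimilarity
    matrix: categories the classifier confuses often get small distances."""
    n = len(conf)
    prop = [[conf[i][j] / sum(conf[i]) for j in range(n)] for i in range(n)]
    return [[0.0 if i == j else 1.0 - (prop[i][j] + prop[j][i]) / 2
             for j in range(n)] for i in range(n)]
```

The resulting matrix can then be fed to multidimensional scaling or clustering to visualize the similarity structure.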
Discussion / Future Work
– Room for improvement
– New experiment focusing only on human faces: what to classify? Similarity, clusters, position along dimensions of variation
Part II: Behavioral Studies
Attention
– Are high-level visual features processed “pre-attentively”, or do they require attention?
– Controversy over the face “pop-out” effect in visual search for faces among objects
– Dual-task paradigm for gender discrimination
Attentional Blink
– When viewing a rapid stream of images and searching for a target image, the “attentional blink” prevents noticing a subsequent target presented within a certain time window (~80-300 ms) after the first
– To what extent are these neglected images processed?
Experiment
– Present a stream of 12 images for 80 ms each
– Target T1 = upside-down face; target T2 = right-side-up face; distractors = animal faces
– At the end of the trial, press 1 if only T1 was seen, 2 if only T2 was seen, 3 if both were seen
Conditions
– T1 onset = {3, 4, 5} x 80 ms
– T2 lag = {2, 3, 4, 5} x 80 ms
– Experiment 1: T2 = {neutral, familiar}
– Experiment 2: T2 = {neutral, fearful}
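Crossing these factors gives the full condition set; enumerating it makes the 2x3x4 design explicit (the variable names are illustrative):

```python
from itertools import product

t2_types  = ["neutral", "familiar"]   # Experiment 1 (Experiment 2: fearful)
t1_onsets = [3, 4, 5]                 # x 80 ms frames
t2_lags   = [2, 3, 4, 5]              # x 80 ms frames

# 2 x 3 x 4 = 24 condition cells of the factorial design
cells = list(product(t2_types, t1_onsets, t2_lags))
```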
Analysis and Results
– 2x3x4 random-effects ANOVA on T2 misses: how often did the subject report “saw only T1” when both T1 and T2 were presented?
– Main effect of T2 type in both Experiment 1 and Experiment 2: fewer T2 misses for emotional and familiar faces
– No effect of T1 onset time; small effect of T2 lag
Binocular Rivalry
– When a different image is presented to each eye, one image dominates the other
– Which image dominates is mostly random and alternates over time
– At what point in visual processing is the image suppressed?
Experiment
– Present two images: face and house
– Control which is suppressed by adding stimulus energy to the house and removing it from the face
– Slowly increase the face presentation until the subject reports the face appearing
– Image = A*Face + B*noise; increase A from 0 to 1 over 12 seconds
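The ramp can be sketched per pixel; the slide only states Image = A*Face + B*noise with A rising from 0 to 1, so taking B = 1 - A (constant total energy) is an assumption here.

```python
def face_weight(t, ramp_seconds=12.0):
    """Face coefficient A at time t: rises linearly from 0 to 1 over the ramp."""
    return min(max(t / ramp_seconds, 0.0), 1.0)

def blended_pixel(face, noise, t):
    # Assumption: B = 1 - A, so the blend keeps total stimulus energy constant.
    a = face_weight(t)
    return a * face + (1.0 - a) * noise
```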
Conditions
– Experiment 1: Face = {neutral, familiar}
– Experiment 2: Face = {neutral, fearful}
Analysis and Results
– T-test of median reaction times between conditions (p<0.05)
– Strong effect for familiar faces: t ~ 1 s
– Weaker effect for fearful faces: t ~ 0.5 s
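The test statistic for comparing per-subject median reaction times can be sketched as a paired t; pairing by subject is an assumption, since the slide only says a t-test was run on the medians.

```python
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    """t statistic for paired samples, e.g. each subject's median reaction
    time in the two conditions (neutral vs. familiar/fearful)."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```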
Discussion
– These studies suggest that both the identity and the expression of facial stimuli are processed pre-attentively
– Future work: new paradigms (visual search); new facial distinctions (gender, age, attractiveness, distinctiveness, gaze direction, etc.)
Caltech
– Extraction of high-level features from facial images: similarity (AOL project); gender, age, attractiveness, trustworthiness, etc.
– Further decoding of these features from brain images: at what point in visual processing do the classifications become possible?