
1 MVPA Tutorial
http://www.fmri4newbies.com/
Last Update: March 10, 2013 (previous update: January 18, 2012)
Last Course: Psychology 9223, W2013, Western University (previously W2010, University of Western Ontario)
Jody Culham, Brain and Mind Institute, Department of Psychology, Western University

2 Test Data Set Two runs: A and B (same protocol) 5 trials per condition for 3 conditions

3 Measures of Activity
β weights –z-normalized –%-transformed
t-values –β/error
% BOLD signal change –minus baseline
[Figure: activity scale from low (low t, low β z, low β %) to high (high t, high β z, high β %)]
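The three measures above can be sketched in a few lines of NumPy. This is not part of the tutorial (which uses GUI software); the numbers are illustrative stand-ins, but the formulas follow the slide: t = β/error, % signal change relative to baseline, and z-normalization of betas across trials.

```python
import numpy as np

# Hypothetical per-voxel estimates from a GLM fit (illustrative numbers)
beta = 2.0          # raw beta weight for one predictor
se_beta = 0.5       # standard error of that beta
baseline = 100.0    # mean signal during baseline
trial_mean = 102.5  # mean signal during the condition

# t-value: beta divided by its error term
t_value = beta / se_beta

# % BOLD signal change: condition mean minus baseline, relative to baseline
pct_change = 100.0 * (trial_mean - baseline) / baseline

# z-normalized betas: standardize a set of betas across trials
betas = np.array([1.8, 2.2, 2.0, 1.5, 2.5])
z_betas = (betas - betas.mean()) / betas.std()
```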

4 Step 1: Trial Estimation Just as in the Basic GLM, we are running one GLM per voxel. Now, however, each GLM estimates activation not across a whole condition but for each instance (trial or block) of a condition.

5 Three Predictors Per Instance
–2-gamma
–constant
–linear within trial
5 instances of motor imagery, 5 instances of mental calculation, 5 instances of mental singing

6 Step 1: Trial Estimation Dialog

7 Step 1: Trial Estimation Output Now, for each instance of each condition in each run, we have an estimate of activation for each voxel.
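The trial-estimation step can be sketched as an ordinary least-squares GLM with one predictor per trial. The tutorial performs this through a software dialog; the code below is a minimal stand-in with random data, assuming a hypothetical run length of 120 volumes and HRF-convolved trial predictors plus constant and linear-trend confounds.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vols = 120    # volumes in one run (hypothetical)
n_trials = 15   # 5 instances x 3 conditions, as in the tutorial data

# Stand-in design matrix: one predictor per trial (in practice each would
# be an HRF-convolved boxcar), plus constant and linear-trend columns
X = rng.standard_normal((n_vols, n_trials))
X = np.hstack([X,
               np.ones((n_vols, 1)),                   # constant
               np.linspace(-1, 1, n_vols)[:, None]])   # linear trend

y = rng.standard_normal(n_vols)  # one voxel's time course (stand-in)

# One GLM per voxel: ordinary least squares
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

trial_betas = betas[:n_trials]   # one activation estimate per trial
```

In a real analysis this fit is repeated for every voxel, yielding the trials-by-voxels activation matrix that the SVM step consumes.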

8 Step 2: Support Vector Machine SVMs are usually run in a subregion of the brain –e.g., a region of interest (= volume of interest)
[Figures: sample data, SMA ROI; sample data, 3 Tasks ROI]

9 Step 2: Support Vector Machine Test data must be independent of training data:
–leave-one-run-out
–leave-one-trial-out
–leave-one-trial-set-out
Often we will run a series of iterations to test multiple combinations of leave-X-out
–e.g., with two runs, we can run two iterations of leave-one-run-out
–e.g., with 10 trials per condition and 3 conditions, we could run up to 10³ = 1000 iterations of leave-one-trial-set-out
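The leave-one-run-out scheme above can be sketched with scikit-learn, which is not the software used in the tutorial but implements the same logic. The data here are random stand-ins with the tutorial's dimensions (98 voxels, 5 trials per condition, 3 conditions, 2 runs); with two runs, LeaveOneGroupOut produces exactly the two train/test iterations described on the slide.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_voxels = 98                                   # as in the tutorial's ROI
labels = np.tile(np.repeat([0, 1, 2], 5), 2)    # 5 trials x 3 conditions x 2 runs
runs = np.repeat(["A", "B"], 15)                # run membership of each trial

# Hypothetical trial-by-voxel activation matrix (random stand-in data)
X = rng.standard_normal((30, n_voxels))

# Leave-one-run-out: train on one run, test on the other, for both splits
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="linear"), X, labels,
                         groups=runs, cv=logo)
```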

10 MVP File Plots
[Figure: multi-voxel pattern matrices, 98 functional voxels × 15 trials per run; Run A = training set, Run B = test set; intensity = activation]

11 SVM Output: Train Run A; Test Run B
[Figure: tables of guessed vs. actual condition for two data sets, one with 15/15 correct and one with 10/15 correct (chance = 5/15)]
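The guessed-vs-actual tables on this slide are confusion matrices. As an illustration (not the tutorial's actual predictions), the sketch below builds one for a hypothetical set of 15 test trials in which 10 are classified correctly, matching the test-run accuracy quoted above; chance with 3 equally frequent conditions is 5/15.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical predictions for 15 test trials (5 per condition);
# conditions: 0 = imagery, 1 = calculation, 2 = singing
actual    = np.repeat([0, 1, 2], 5)
predicted = np.array([0, 0, 0, 0, 1,    # imagery trials: 4 correct
                      1, 1, 1, 0, 2,    # calculation trials: 3 correct
                      2, 2, 2, 1, 0])   # singing trials: 3 correct

cm = confusion_matrix(actual, predicted)   # rows = actual, cols = guessed
acc = accuracy_score(actual, predicted)    # 10/15 correct; chance = 5/15
```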

12 SVM Output: Train Run B; Test Run A

13 Permutation Testing
–randomize all the condition labels
–run SVMs on the randomized data
–repeat this many times (e.g., 1000×)
–get a distribution of expected decoding accuracy
–test the null hypothesis (H₀) that the decoding accuracy you found came from this permuted distribution
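The steps above can be sketched as follows, again with random stand-in data and scikit-learn rather than the tutorial's software. For speed this sketch uses 200 permutations instead of 1000, and a single train-A/test-B split; a fuller version would shuffle labels within runs and average over both leave-one-run-out iterations.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_trials, n_voxels = 30, 98
X = rng.standard_normal((n_trials, n_voxels))   # stand-in activation matrix
labels = np.tile(np.repeat([0, 1, 2], 5), 2)    # 3 conditions x 5 trials x 2 runs
runs = np.repeat([0, 1], 15)

def decode(y):
    """Train on run 0, test on run 1; return test accuracy."""
    clf = SVC(kernel="linear").fit(X[runs == 0], y[runs == 0])
    return clf.score(X[runs == 1], y[runs == 1])

observed = decode(labels)

# Null distribution: repeat decoding with randomized condition labels
null = np.array([decode(rng.permutation(labels)) for _ in range(200)])

# One-tailed p-value: how often permuted accuracy matches or beats observed
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
```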

14 Output from Permutation Testing
[Figure: permuted distribution of decoding accuracies, showing the median (should be 33.3%), lower and upper quartiles, and the upper bound of the 95% confidence limits; our observed accuracy lies above this bound, so we reject H₀]

15 Voxel Weight Maps Voxels with high weights contribute strongly to the classification of a trial as belonging to a given condition.
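For a linear SVM, the learned weight vector has one entry per voxel, and a weight map simply projects those values back into the brain. As a minimal sketch with stand-in data (two conditions for simplicity, since a linear two-class SVM has a single weight vector):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

X = rng.standard_normal((30, 98))   # trials x voxels (stand-in data)
labels = np.repeat([0, 1], 15)      # two conditions for simplicity

clf = SVC(kernel="linear").fit(X, labels)

# One weight per voxel; large-magnitude weights mark voxels that
# contribute strongly to the classification
weights = clf.coef_.ravel()
top_voxels = np.argsort(np.abs(weights))[::-1][:10]
```

In a real analysis, `weights` would be written back into the voxels' spatial positions to form the map shown on the slide.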


