
Statistical Parametric Mapping (SPM)
1. Talk I: Spatial Pre-processing
2. Talk II: General Linear Model
3. Talk III: Statistical Inference
4. Talk IV: Experimental Design

Spatial Preprocessing & Computational Neuroanatomy
With thanks to: John Ashburner, Jesper Andersson

Overview
[Figure: the standard SPM pipeline - fMRI time-series, motion correction, spatial normalisation to a standard template, smoothing kernel, General Linear Model (design matrix), parameter estimates, Statistical Parametric Map]

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Reasons for Motion Correction
- Subjects will always move in the scanner.
- The sensitivity of the analysis depends on the residual noise in the image series, so movement that is unrelated to the subject's task adds to this noise; realignment therefore increases sensitivity.
- However, subject movement may also correlate with the task...
- ...in which case realignment may reduce sensitivity (and it may not be possible to discount artefacts due to motion).

Realignment (of same-modality images from the same subject) involves two stages:
1. Registration - determining the 6 parameters that describe the rigid-body transformation between each image and a reference image
2. Transformation (reslicing) - re-sampling each image according to the determined transformation parameters

1. Registration
- Determine the rigid-body transformation that minimises the sum of squared differences between images.
- A rigid-body transformation is defined by:
  - 3 translations - in the X, Y & Z directions
  - 3 rotations - about the X, Y & Z axes
- These operations can be represented as affine transformation matrices:

  x1 = m11*x0 + m12*y0 + m13*z0 + m14
  y1 = m21*x0 + m22*y0 + m23*z0 + m24
  z1 = m31*x0 + m32*y0 + m33*z0 + m34

[Figure: rigid-body transformations parameterised by translations, pitch, roll and yaw; the objective is the squared error between images]
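A minimal sketch of how the 6 rigid-body parameters map into a 4x4 affine matrix that can be applied to homogeneous voxel coordinates; the rotation composition order below is one common convention, not necessarily the one used by SPM's own spm_matrix:

```python
import numpy as np

def rigid_body_matrix(tx, ty, tz, pitch, roll, yaw):
    """Build a 4x4 rigid-body matrix from 3 translations (mm) and 3 rotations
    (radians). Composition order is illustrative only."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    cx, sx = np.cos(pitch), np.sin(pitch)   # rotation about X
    cy, sy = np.cos(roll), np.sin(roll)     # rotation about Y
    cz, sz = np.cos(yaw), np.sin(yaw)       # rotation about Z
    Rx = np.array([[1, 0, 0, 0], [0, cx, sx, 0], [0, -sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, sz, 0, 0], [-sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    return T @ Rx @ Ry @ Rz

# Map one voxel coordinate (homogeneous) through the transformation:
# x1 = m11*x0 + m12*y0 + m13*z0 + m14, and likewise for y1, z1.
M = rigid_body_matrix(1.5, 0.0, -2.0, 0.01, 0.0, 0.02)
x1, y1, z1, _ = M @ np.array([10.0, 20.0, 30.0, 1.0])
```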

1. Registration (continued)
- Iterative procedure (Gauss-Newton ascent)
- Additional scaling parameter
- An N x 6 matrix of realignment parameters is written to file (N is the number of scans)
- Orientation matrices in the *.mat file are updated for each volume (the volumes do not have to be resliced)
- Slice-timing correction can be performed before or after realignment (depending on the acquisition)

2. Transformation (reslicing)
- Application of the registration parameters involves re-sampling the image to create new voxels by interpolation from existing voxels.
- Interpolation can be nearest neighbour (0th-order), tri-linear (1st-order), (windowed) Fourier/sinc, or, in SPM2, nth-order b-splines.

[Figure: interpolation schemes - nearest neighbour, linear, full sinc (no alias), windowed sinc]
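A minimal reslicing sketch using scipy, assuming the voxel-to-voxel mapping M has already been composed from the estimated rigid-body matrix and the images' orientation matrices; the interpolation order selects nearest-neighbour, tri-linear or a higher-order b-spline:

```python
import numpy as np
from scipy.ndimage import affine_transform

def reslice(volume, M, order=1):
    """Re-sample 'volume' onto the reference grid through the 4x4 voxel-to-voxel
    mapping M. order=0: nearest neighbour, order=1: tri-linear, order=3: cubic
    b-spline (sinc interpolation is not shown here)."""
    return affine_transform(volume, M[:3, :3], offset=M[:3, 3],
                            order=order, mode="constant", cval=0.0)

vol = np.random.rand(64, 64, 32)   # stand-in for one EPI volume
M = np.eye(4)
M[:3, 3] = [0.5, -1.2, 0.0]        # small sub-voxel shift
resliced = reslice(vol, M, order=3)
```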

Residual Errors after Realignment
- Interpolation errors, especially with tri-linear interpolation and small-window sinc.
- PET:
  - Incorrect attenuation correction, because the scans are no longer aligned with the transmission scan (a transmission scan is often acquired to give a map of local positron attenuation).
- fMRI (EPI):
  - Ghosts (and other artefacts) in the image, which do not move as a rigid body
  - Rapid movements within a scan, which cause non-rigid image deformation
  - Spin excitation history effects (residual magnetisation effects of previous scans)
  - Interaction between movement and local field inhomogeneity, giving non-rigid distortion

Unwarp (new in SPM2)
- Echo-planar images (EPI) contain distortions owing to field inhomogeneities (susceptibility artifacts, particularly in the phase-encoding direction).
- They can be "undistorted" by use of a field-map (available in the "FieldMap" SPM toolbox).
- (Note that susceptibility artifacts that cause drop-out are more difficult to correct.)
- However, movement interacts with the field inhomogeneity (the presence of the object affects B0), i.e. the distortions change with the position of the object in the field.
- This movement-by-distortion interaction can be accommodated during realignment using "unwarp".

[Figure: distorted image, field-map, corrected image]

Unwarp (new in SPM2)
- One could include the movement parameters as confounds in the statistical model of activations.
- However, this may remove activations of interest if they are correlated with the movement.
- Better is to incorporate physics knowledge, e.g. to model how the field changes as a function of pitch and roll (assuming phase-encoding is in the y-direction)...
- ...using a Taylor expansion (about the mean realigned image).
- Iterate: (1) estimate the movement parameters, (2) estimate the deformation fields, then (1) re-estimate the movement parameters...
- The fields are expressed by spatial basis functions (3D discrete cosine set)...

[Figure: estimated derivative fields of B0 with respect to pitch and roll]

Unwarp (new in SPM2)
First-order Taylor expansion of the field about the mean realigned position, in terms of the pitch (α) and roll (β) displacements of scan i:

  B0(i) ≈ B0 + (∂B0/∂α)·Δα(i) + (∂B0/∂β)·Δβ(i) + error

(The 0th-order term B0 can be determined from the fieldmap; the derivative fields are estimated from the realigned image series.)

Unwarp (new in SPM2) - example: movement correlated with the design
- No correction: t_max = 13.38
- Correction by covariation: t_max = 5.06
- Correction by Unwarp: t_max = 9.57

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Reasons for Normalisation
- Inter-subject averaging
  - extrapolate findings to the population as a whole
  - increase statistical power above that obtained from a single subject
- Reporting of activations as co-ordinates within a standard stereotactic space
  - e.g. the space described by Talairach & Tournoux

- Label-based approaches: warp the images such that defined landmarks (points/lines/surfaces) are aligned
  - but there are few readily identifiable landmarks (and must they be manually defined?)
- Intensity-based approaches: warp the images to maximise some voxel-wise similarity measure
  - e.g. squared error, assuming intensity correspondence (within-modality)
- Normalisation is constrained to correct only gross differences; residual variability is accommodated by subsequent spatial smoothing.

Spatial Normalisation - Summary
- Determine the transformation that minimises the sum of squared differences between an image and a (combination of) template image(s).
- Two stages:
  1. affine registration to match the size and position of the images
  2. non-linear warping to match the overall brain shape
- Uses a Bayesian framework to constrain the affine and warp parameters.

[Figure: original image and template image -> deformation field -> spatially normalised image]

Stage 1. Full Affine Transformation
- The first part of normalisation is a 12-parameter affine transformation:
  - 3 translations
  - 3 rotations
  - 3 zooms
  - 3 shears
- Better if the template image is in the same modality (e.g. because of image distortions in EPI but not T1).

[Figure: rigid-body components (rotation, translation) versus zoom and shear components]
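A sketch of how the 12 parameters compose into a single affine matrix (translation, rotation, zoom, shear); the composition order shown is one common convention, not necessarily the one SPM uses internally:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def affine_12(params):
    """params = [tx, ty, tz, rx, ry, rz, zx, zy, zz, sxy, sxz, syz]
    (translations in mm, rotations in radians, zooms, shears)."""
    tx, ty, tz, rx, ry, rz, zx, zy, zz, sxy, sxz, syz = params
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]                      # translations
    R = np.eye(4); R[:3, :3] = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    Z = np.diag([zx, zy, zz, 1.0])                              # zooms
    S = np.eye(4); S[0, 1], S[0, 2], S[1, 2] = sxy, sxz, syz    # shears
    return T @ R @ Z @ S                                        # illustrative order

A = affine_12([2.0, -1.0, 0.5, 0.01, 0.0, 0.02, 1.05, 0.98, 1.10, 0.02, 0.0, 0.01])
```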

Insufficiency of Affine-only Normalisation
[Figure: six affine-registered images vs. six affine + nonlinear registered images]

Stage 2. Nonlinear Warps
- Deformations consist of a linear combination of smooth basis images.
- These are the lowest-frequency basis images of a 3D discrete cosine transform.
- Brain masks can be applied (e.g. for lesions).
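A small sketch of how a deformation field can be built as a linear combination of separable low-frequency 3D DCT basis images; image size, basis order and coefficients below are purely illustrative:

```python
import numpy as np

def dct_basis(n_points, n_basis):
    """Lowest-frequency 1D discrete cosine transform basis (n_points x n_basis)."""
    x = (np.arange(n_points) + 0.5) * np.pi / n_points
    B = np.cos(np.outer(x, np.arange(n_basis))) * np.sqrt(2.0 / n_points)
    B[:, 0] /= np.sqrt(2.0)              # conventional scaling of the constant term
    return B

# A 3D basis image is a separable product of 1D bases; the x-displacement field
# is then a linear combination with coefficients c[i, j, k].
nx, ny, nz, kb = 64, 64, 48, 8           # image size and basis order
Bx, By, Bz = dct_basis(nx, kb), dct_basis(ny, kb), dct_basis(nz, kb)
c = 0.1 * np.random.randn(kb, kb, kb)    # stand-in for estimated coefficients
disp_x = np.einsum("xi,yj,zk,ijk->xyz", Bx, By, Bz, c)   # displacement in voxels
```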

Bayesian Constraints
[Figure panels: template image; affine registration (χ² = 472.1); non-linear registration without regularisation (χ² = 287.3); non-linear registration with regularisation (χ² = 302.7)]
Without the Bayesian formulation, non-linear spatial normalisation can introduce unnecessary warping into the spatially normalised images.

Bayesian Constraints
Using Bayes' rule, we can constrain ("regularise") the nonlinear fit by incorporating prior knowledge of the likely extent of deformations:

  p(p|e) ∝ p(e|p) p(p)   (Bayes' rule)

- p(p|e) is the a posteriori probability of parameters p given errors e
- p(e|p) is the likelihood of observing errors e given parameters p
- p(p) is the a priori probability of parameters p

For the maximum a posteriori (MAP) estimate, we minimise (taking logs):

  H(p|e) ∝ H(e|p) + λ H(p)   (Gibbs potential)

- H(e|p) (∝ -log p(e|p)) is the squared difference between the images (error)
- H(p) (∝ -log p(p)) constrains the parameters (penalises unlikely deformations)
- λ is a "regularisation" hyperparameter, weighting the effect of the priors

Bayesian Constraints
- The algorithm simultaneously minimises:
  - the sum of squared differences between template and object image
  - the squared distance between the parameters and their expectation
- Bayesian constraints are applied to both:
  1. affine transformations - based on empirical prior ranges
  2. nonlinear deformations - based on a smoothness constraint (minimising membrane energy)

[Figure: empirically generated priors]
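A minimal sketch of such a regularised objective, assuming a hypothetical warp_and_resample(source, params) helper that applies the deformation encoded by the parameters; this is not SPM's implementation, just the two terms described above:

```python
import numpy as np

def map_objective(params, warp_and_resample, template, source, lam, prior_mean, prior_prec):
    """Sum of squared image differences plus a penalty on deviation of the
    parameters from their prior expectation (weighted by lam)."""
    warped = warp_and_resample(source, params)   # hypothetical helper
    sse = np.sum((template - warped) ** 2)       # image mismatch term
    d = params - prior_mean
    penalty = d @ prior_prec @ d                 # unlikely deformations cost more
    return sse + lam * penalty
```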

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Reasons for Smoothing
- Potentially increase signal to noise (matched filter theorem)
- Inter-subject averaging (allowing for residual differences after normalisation)
- Increase validity of statistics (more likely that the errors are distributed normally)

Gaussian smoothing kernel
- The kernel is defined in terms of the FWHM (full width at half maximum) of the filter - usually ~16-20 mm (PET) or ~6-8 mm (fMRI) of a Gaussian.
- The ultimate smoothness is a function of the applied smoothing and the intrinsic image smoothness (sometimes expressed as "resels" - RESolvable Elements).
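A short sketch of FWHM-specified Gaussian smoothing using scipy; the only non-obvious step is converting FWHM (in mm) to the Gaussian sigma (in voxels) via FWHM = sigma * sqrt(8 ln 2):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_size_mm):
    """Smooth a volume with a Gaussian kernel specified by its FWHM in mm."""
    fwhm = np.asarray(fwhm_mm, dtype=float)
    vox = np.asarray(voxel_size_mm, dtype=float)
    sigma_vox = (fwhm / vox) / np.sqrt(8.0 * np.log(2.0))   # FWHM -> sigma, in voxels
    return gaussian_filter(volume, sigma=sigma_vox)

vol = np.random.rand(64, 64, 48)   # stand-in for one volume
smoothed = smooth_fwhm(vol, fwhm_mm=[8, 8, 8], voxel_size_mm=[3, 3, 3])
```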

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Between Modality Co-registration
- Useful, for example, to display functional results (EPI) onto a high-resolution anatomical image (T1).
- Because different modality images have different properties (e.g. the relative intensity of gray/white matter), one cannot simply minimise the difference between the images.
- Two main approaches:
  I. Via templates:
    1. simultaneous affine registrations between each image and a same-modality template
    2. segmentation into grey and white matter
    3. final simultaneous registration of the segments
  II. Mutual information

[Figure: example images of different modalities - EPI, T2, T1, transmission, PD, PET]

Between Modality Co-registration: I. Via Templates

1. Affine Registrations
- Both images are registered - using 12-parameter affine transformations - to their corresponding templates...
- ...but only the rigid-body transformation parameters are allowed to differ between the two registrations.
- This gives:
  - a rigid-body mapping between the images
  - affine mappings between the images and the templates

2. Segmentation
- 'Mixture Model' cluster analysis to classify the MR image as GM, WM & CSF.
- Additional information is obtained from a priori probability images - see later.

3. Registration of Partitions
- Grey and white matter partitions are registered using a rigid-body transformation.
- The sum of squared differences is minimised simultaneously.

Between Modality Coregistration: II. Mutual Information (new in SPM2)
Another way is to maximise the mutual information in the 2D joint histogram (a plot of one image's intensities against the other's).
For histograms normalised to integrate to unity, the mutual information is:

  MI = Σ_i Σ_j h_ij · log( h_ij / (Σ_k h_ik · Σ_l h_lj) )

[Figure: joint histograms of PET vs. T1 MRI]
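A compact sketch of that quantity computed from the normalised joint histogram of two roughly aligned volumes; a coregistration routine would evaluate it repeatedly while searching over the rigid-body parameters:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information from the normalised 2D joint histogram of two images."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    h /= h.sum()                         # normalise to integrate to unity
    px = h.sum(axis=1, keepdims=True)    # marginal of image A (sum over j)
    py = h.sum(axis=0, keepdims=True)    # marginal of image B (sum over i)
    nz = h > 0                           # avoid log(0) for empty bins
    return float(np.sum(h[nz] * np.log(h[nz] / (px @ py)[nz])))

a = np.random.rand(32, 32, 32)           # stand-in volumes
b = a + 0.1 * np.random.rand(32, 32, 32)
print(mutual_information(a, b))
```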

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Image Segmentation
- Partition into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF).
- A 'Mixture Model' cluster analysis is used, which assumes each voxel belongs to one of a number of distinct tissue types (clusters), each with a (multivariate) normal intensity distribution.
- Further Bayesian constraints come from prior probability images, which are overlaid.
- An additional correction for intensity inhomogeneity is possible.

[Figure: intensity histogram fit by multiple Gaussians; prior probability images for GM, WM, CSF and brain/skull; example image]
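A simplified sketch of the mixture-of-Gaussians idea using scikit-learn; SPM's segmentation additionally weights voxels by the spatial tissue priors and models the intensity non-uniformity, both of which are omitted here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

intensities = np.random.rand(50000, 1)       # stand-in for masked voxel intensities
gmm = GaussianMixture(n_components=4, covariance_type="full").fit(intensities)
posteriors = gmm.predict_proba(intensities)  # per-voxel tissue-class probabilities
means = gmm.means_.ravel()                   # one mean intensity per tissue cluster
```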

Overview
1. Realignment (motion correction)
2. Normalisation (to stereotactic space)
3. Smoothing
4. Between-modality Coregistration
5. Segmentation (to gray/white/CSF)
6. Morphometry (VBM/DBM/TBM)

Morphometry (Computational Neuroanatomy)
- Voxel-by-voxel: where are the differences between populations?
  - Univariate: e.g. Voxel-Based Morphometry (VBM)
  - Multivariate: e.g. Tensor-Based Morphometry (TBM)
- Volume-based: is there a difference between populations?
  - Multivariate: e.g. Deformation-Based Morphometry (DBM)
- Continuum:
  - perfect normalisation => all information is in the deformation field (no VBM differences)
  - no normalisation => all information is in the VBM analysis

[Figure: spatial normalisation of an original image to a template, yielding a normalised image and a deformation field; VBM, TBM and DBM operate on these outputs]

Voxel-Based Morphometry (VBM)
[Figure: original image -> spatially normalised -> segmented grey matter -> smoothed -> SPM]
A voxel-by-voxel statistical analysis is used to detect regional differences in the amount of grey matter between populations.
"Optimised" VBM involves segmenting the images before normalising, so as to normalise gray matter / white matter / CSF separately...
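A bare-bones sketch of the voxel-wise group comparison on smoothed grey-matter maps (just a two-sample t-test per voxel; the GLM that SPM actually fits can include covariates, and multiple-comparison correction is not shown):

```python
import numpy as np
from scipy import stats

shape = (16, 16, 16)                   # illustrative image dimensions
group_a = np.random.rand(12, *shape)   # stand-in smoothed grey-matter maps
group_b = np.random.rand(14, *shape)
t_map, p_map = stats.ttest_ind(group_a, group_b, axis=0)   # one test per voxel
```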

Optimised VBM
[Flowchart: T1 image and template/priors; affine registration; segmentation & extraction; spatial normalisation; applying the deformation to the normalised T1; a second segmentation & extraction; modulation; smoothing; statistics on grey-matter volume or on concentration]

VBM Examples: Aging
Grey matter volume loss with age: superior parietal, pre- and post-central, insula, cingulate/parafalcine.

VBM Examples: Sex Differences
[Figure: regions of Males > Females and Females > Males differences, including L superior temporal sulcus, R middle temporal gyrus, intraparietal sulci, mesial temporal, temporal pole and anterior cerebellar regions]

VBM Examples: Brain Asymmetries
Right frontal and left occipital petalia.

Morphometry on Deformation Fields: DBM/TBM
- Deformation-based morphometry looks at absolute displacements (vector field).
- Tensor-based morphometry looks at local shapes (tensor field).

Deformation-Based Morphometry (DBM)
- Deformation fields: remove positional and size information - leave shape.
- Parameter reduction using principal component analysis (SVD).
- Multivariate analysis of covariance is used to identify differences between groups.
- Canonical correlation analysis is used to characterise differences between groups.
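A small sketch of the parameter-reduction step: vectorised deformation fields are centred and decomposed by SVD, and the resulting component scores are what a multivariate group comparison would operate on (array sizes are illustrative):

```python
import numpy as np

n_subjects = 30
n_field = 3 * 16 * 16 * 16                     # 3 displacement components per voxel
fields = np.random.randn(n_subjects, n_field)  # stand-in vectorised deformation fields
X = fields - fields.mean(axis=0)               # centre across subjects
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                 # subject scores on principal components
modes = Vt                                     # principal modes of deformation
```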

DBM Example: Sex Differences
Non-linear warps of sex differences characterised by canonical variates analysis.
[Figure: mean differences (mapping from an average female to an average male brain)]

Tensor-Based Morphometry (TBM)
If the local Jacobian matrix is denoted by A, it can be decomposed as A = RU, where R is an orthonormal rotation matrix and U is a symmetric matrix containing only zooms and shears.
Strain tensors are defined that model the amount of distortion; if there is no strain, the tensors are all zero. Generically, the family of Lagrangean strain tensors is given by (U^m - I)/m when m ≠ 0, and log(U) when m = 0.
[Figure: template, warped and original images; maps of relative volumes and of the strain tensor]
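A short sketch of that decomposition for a single voxel's Jacobian using scipy's polar decomposition; the m = 0 (logarithmic) strain tensor and the relative volume (Jacobian determinant) follow directly (the matrix A below is made up for illustration):

```python
import numpy as np
from scipy.linalg import polar, logm

A = np.array([[1.10, 0.05, 0.00],   # illustrative local Jacobian
              [0.00, 0.95, 0.02],
              [0.01, 0.00, 1.03]])
R, U = polar(A, side="right")       # right polar decomposition: A = R @ U
strain = logm(U)                    # m = 0 Lagrangean (log) strain; zero if undistorted
rel_volume = np.linalg.det(A)       # local relative volume change
```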

References
- Friston et al. (1995). Spatial registration and normalisation of images. Human Brain Mapping 3(3).
- Ashburner & Friston (1997). Multimodal image coregistration and partitioning - a unified framework. NeuroImage 6(3).
- Collignon et al. (1995). Automated multi-modality image registration based on information theory. IPMI'95.
- Ashburner et al. (1997). Incorporating prior knowledge into image registration. NeuroImage 6(4).
- Ashburner et al. (1999). Nonlinear spatial normalisation using basis functions. Human Brain Mapping 7(4).
- Ashburner & Friston (2000). Voxel-based morphometry - the methods. NeuroImage 11.