All Hands Meeting 2005 / FBIRN AHM 2006: Statistics Working Group Update. Greg Brown, UCSD; Hal Stern, UCI
Statistics Update: Discussion Points
- Aims of the Statistics Workgroup
- Activities over the last 6 months (highlights only)
- Future Plans
Aims
- Aim 1: Refine tools to assess the quality and reliability of fMRI data, and apply these tools to guide the collection and analysis of multi-site imaging data.
- Aim 2: Develop statistical methods to analyze multi-site fMRI data while accounting for between-site variation.
- Aim 3: Develop statistical and machine learning tools to identify homogeneous subgroups.
Statistics Workgroup Structure
Activities over the past 6 months, organized by Statistics WG subcommittee
Data Processing Statistics WG
- Developed download scripts at several sites.
- A continual download script is running at the San Diego site.
- Field maps for GE sites need a special file structure to upload.
- Download time varies by download site and by download software options.
Data Processing Statistics WG: Preprocessing Scripts
- Stand-alone modules, and integration with FIPS, are available for most scripts.
- All scripts run on Analyze 7.5 format; some also run on AFNI BRIK format.
- Scripts are available for:
  - Slice-timing correction
  - Motion correction
  - B0 inhomogeneity unwarping
  - Spatial smoothing to a target smoothness (several approaches are available)
- Scripts are in place for Siemens sites; scripts for GE sites are being integrated into the fBIRN stream.
- These scripts have been run on all auditory oddball images from Minnesota, MGH, and New Mexico.
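The "smooth to a target smoothness" step can be sketched as follows. This is a generic illustration, not the fBIRN script itself: the function name is hypothetical, and it assumes the standard rule that Gaussian smoothness combines in quadrature.

```python
import math

def additional_fwhm(intrinsic_fwhm_mm, target_fwhm_mm):
    """FWHM (mm) of the extra Gaussian kernel needed so that data with the
    given intrinsic smoothness ends up at the target smoothness.

    Gaussian smoothness combines in quadrature:
        target**2 = intrinsic**2 + additional**2
    """
    if target_fwhm_mm < intrinsic_fwhm_mm:
        raise ValueError("data are already smoother than the target")
    return math.sqrt(target_fwhm_mm ** 2 - intrinsic_fwhm_mm ** 2)
```

For example, images with roughly 4 mm intrinsic smoothness reach an 8 mm target with an additional kernel of about 6.9 mm, which is why smoothing-to-target applies a different kernel at each site.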
Data Processing Statistics WG
- Several QA tools have been tested: Duke tools, GabLab tools, AIRT, and AFNI tools.
- The goal is to develop automated or semi-automated QA tools usable with large image datasets; validation of these tools will require visual inspection.
- The Functional Image Processing System (FIPS) is being migrated throughout the fBIRN consortium. Five fBIRN sites are currently using FIPS to test the processing of images at their site.
- FIPS Power Users are being trained at several sites. These power users are meant to be a regional as well as a local fBIRN resource, and will relieve the FIPS developers from day-to-day consultations about FIPS.
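One common automated QA metric of the kind described above is the temporal signal-to-noise ratio (tSNR). The tools actually under test (Duke, GabLab, AIRT, AFNI) compute many such measures; this minimal sketch is only illustrative, and the function name is an assumption.

```python
def temporal_snr(timeseries):
    """Temporal SNR of a voxel (or ROI-mean) time series: mean over time
    divided by the sample standard deviation over time. Unusually low
    values flag runs for closer, visual inspection."""
    n = len(timeseries)
    mean = sum(timeseries) / n
    var = sum((x - mean) ** 2 for x in timeseries) / (n - 1)
    return mean / var ** 0.5
```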
Data Processing Statistics WG
- A processing strategy is in place to test between-group hypotheses involving the auditory oddball and Sternberg Memory Scanning paradigms.
- One site will be the lead site for this analysis so that the fBIRN community presents a uniform report of results to the general imaging community. UCSD has volunteered to be the lead site for between-group hypotheses.
- Other sites (UCI, University of Minnesota) have volunteered to analyze images from their site.
- The lead site will re-analyze a subset of images from volunteer sites to ensure uniformity of analysis results.
Reliability and Calibration WG
- What is the outer limit of reliability of robustly activating paradigms in multi-site fMRI studies?
- How reliably did the Phase I traveling-subjects study measure site variation?
- How much unwanted variance can be reduced in multi-site sensorimotor imaging data by breath-hold calibration once between-site differences in image intensity are controlled?
What is the outer limit of reliability of robustly activating paradigms in multi-site fMRI studies?
Outer limit of reliability: Sensorimotor Task and Breath Hold Task

                                    FIPS Signed Magnitude Top 10%     AFNI % Signal Change from Average Image Value, Mean Across ROI
Task                                Gen. Coeff.    Dep. Coeff.        Gen. Coeff.    Dep. Coeff.
Sensorimotor (Visual ROI)           .92            .79                .93            .80
Breath Hold (average across ROIs)   .92            .86                .94            .88
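The generalizability (consistency) and dependability (absolute agreement) coefficients in the table come from variance components: relative error uses only components that interact with the object of measurement, while absolute error also includes the facets' main effects. A minimal single-observation sketch follows; the function name and the flat grouping of components are assumptions, and a real G-study divides each facet component by the number of conditions averaged over.

```python
def g_and_d(person, facet_main, person_interactions, residual):
    """Generalizability (consistency) and dependability (absolute agreement)
    coefficients for a single observation per measurement condition.

    person              : variance of the object of measurement
    facet_main          : summed main-effect variance of facets (site, day, run)
    person_interactions : summed person-by-facet interaction variance
    residual            : residual variance
    """
    rel_error = person_interactions + residual   # affects rankings only
    abs_error = facet_main + rel_error           # also shifts absolute levels
    g = person / (person + rel_error)
    d = person / (person + abs_error)
    return g, d
```

Because absolute error can only add terms, the dependability coefficient is never larger than the generalizability coefficient, matching the pattern in the table.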
Outer limit of reliability: Conclusions
- For simple sensorimotor and breath-hold tasks, the reliability of intensity-corrected measures of BOLD response for a region of interest can be very good to excellent.
- For comparison, one-month test-retest correlation coefficients for subtests of the Wechsler Adult Intelligence Scale III for adults aged 30 to 54 range from .70 to .93.
- Consistency measures of fMRI reliability can therefore be as good as, or better than, those of well-constructed psychological test scores.
Are sensorimotor and breath hold tasks equally sensitive to site and subject effects?
Sensorimotor: Variance Components Analysis
Percent Variance Accounted For in Visual ROI (GENOVA), uncorrected data

Variance Source        FIPS Signed Magnitude Top 10%   AFNI
Person                 18.24                           16.67
Day                     0.13                           Neg
Run                     0.02                           Neg
Site                   32.43                           29.01
Person by Day          Neg                             Neg
Person by Run           0.65                           Neg
Person by Site          7.61                            5.28
Person by Hemisphere    2.81                           Neg
Person 3-ways          34.02                           41.78
Residual (4-way +)      2.89                            5.67
Breath Hold Task: Variance Components Analysis
Percent Variance Accounted For (GENOVA)

Variance Source        FIPS Signed Magnitude Top 10% (6 ROIs)   AFNI (10 ROIs)
Person                 37.02                                    37.33
Day                     0.27                                     0.03
Run                     0.27                                     0.30
Site                   22.44                                    23.87
Person by Day           0.76                                     1.77
Person by Run           1.65                                     0.63
Person by Site          4.90                                     7.58
Person by Hemisphere    -                                        -
Person 3-ways          13.46                                    25.93
Residual (4-way +)     16.22                                     0.90
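The percent-variance figures in these tables are raw variance-component estimates expressed as a share of the total, with negative method-of-moments estimates (which GENOVA can produce) reported as "Neg". A sketch of that conversion; the function name is mine, and treating negative estimates as zero in the total is a stated assumption of the sketch, though it is the common convention.

```python
def percent_variance(components):
    """Convert raw variance-component estimates to percent-of-total.

    Negative estimates, possible with method-of-moments estimators,
    are reported as 'Neg' and contribute zero to the total.
    """
    clipped = {k: max(v, 0.0) for k, v in components.items()}
    total = sum(clipped.values())
    return {k: ("Neg" if v < 0 else round(100.0 * clipped[k] / total, 2))
            for k, v in components.items()}
```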
Sensorimotor – Breath Hold Task Comparison The sensorimotor task is more sensitive to site effects than to subject effects. The breath hold task is more sensitive to subject than to site effects.
How reliably did the Phase I traveling subjects study measure site variation?
Reliability of Site Differences
- Treat Site, rather than Subject, as the object of measurement.
- The same variance components tables presented previously can then be used to estimate how reliably site differences were measured across the study factors of run, day, and person.
Reliability of Site Differences: AFNI Analysis

Task                        Consistency   Dependability
Sensorimotor (visual ROI)   .97           .92
Measurement of Site Variance
Measurements of site variability provided by the Phase I traveling-subjects study were very reliable, at least in the visual region of interest.
How much unwanted variance can be reduced in multisite sensorimotor imaging data by breath hold calibration once between site differences in image intensity are controlled?
Breath Hold Correction of Sensorimotor Data
- Our previous work showed that breath-hold calibration improved the dependability of native regression weights that were not intensity corrected.
- Two calibration approaches were examined: site-specific calibration and subject-tailored calibration.
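The slides do not spell out the calibration formula, but a site-specific breath-hold calibration of the general kind described might rescale each task response by the site's mean breath-hold response. The function below is purely a hypothetical sketch of that idea, not the fBIRN method; the function and parameter names are mine.

```python
def site_calibrated(task_response, site_bh_mean, grand_bh_mean):
    """Hypothetical site-specific calibration sketch: rescale a task
    response by the ratio of the grand-mean breath-hold response to the
    site-mean breath-hold response, so sites with stronger vascular
    responsiveness are scaled down and weaker ones scaled up."""
    return task_response * grand_bh_mean / site_bh_mean
```

A subject-tailored version would substitute the individual subject's breath-hold response for the site mean.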
Reliability of Breath Hold Calibrated, Intensity Corrected Values: Visual ROI

                   FIPS Signed Mag Top 10%            AFNI % Change, Entire ROI
Condition          Generalizability  Dependability    Generalizability  Dependability
Uncorrected        .92               .79              .93               .80
Site Specific      .93               .83              .94               .86
Subject Tailored   (analyses done but not yet double-checked)

Subject-tailored calibration appears to be no better than site-specific calibration.
Reliability of Breath Hold Calibrated, Intensity Corrected Values: Hand ROI

                   FIPS Signed Mag Top 10%            AFNI % Change, Entire ROI
Condition          Generalizability  Dependability    Generalizability  Dependability
Uncorrected        .92               .81              .88               .84
Site Specific      .92               .83              .87               .85
Subject Tailored   (analyses done but not yet double-checked)

Subject-tailored calibration appears to be no better than site-specific calibration.
Visual ROI Task: Site Specific Correction
Percent Variance Accounted For in Visual ROI (GENOVA)

                      FIPS Signed Magnitude Top 10%         AFNI % Change, Entire ROI
Variance Source       Uncorrected   Site Specific BH Corr.  Uncorrected   Site Specific BH Corr.
Person                18.24         20.08                   16.67         20.87
Day                    0.13          0.19                   Neg           Neg
Run                    0.02          0.06                   Neg            0.03
Site                  32.43         27.04                   29.01         21.47
Person by Day         Neg           Neg                     Neg           Neg
Person by Run          0.65          0.67                   Neg            1.62
Person by Site         7.61          7.29                    5.28          6.95
Person by Hemisphere   2.81          2.26                   Neg           Neg
Person 3-ways         34.02         37.47                   41.78         41.85
Residual (4-way +)     2.89          3.55                    5.67          5.89
Hand Area ROI: Site Specific Correction
Percent Variance Accounted For in Hand ROI (GENOVA)

                      FIPS Signed Magnitude Top 10%         AFNI % Change, Entire ROI
Variance Source       Uncorrected   Site Specific BH Corr.  Uncorrected   Site Specific BH Corr.
Person                15.15         15.69                   23.32         23.68
Day                   Neg           Neg                     Neg           Neg
Run                    0.46          0.46                    0.0
Site                  21.80         17.03                   11.28          7.81
Person by Day          0.41          0.70                   Neg           Neg
Person by Run          0.14          0.22                   Neg           Neg
Person by Site         6.68          6.60                   28.92         31.26
Person by Hemisphere   0.58          0.25                    0.38          0.29
Person 3-ways         38.61         41.53                   27.04         27.66
Residual (4-way +)    11.51         12.31                    6.47          7.60
% of Site Variance Reduced by Site-Specific Breath Hold Calibration

ROI      FIPS      AFNI
Visual   16.62%    27.88%
Hand     21.88%    30.76%

Computed as: (%var for the uncorrected site factor - %var for the corrected site factor) / %var for the uncorrected site factor.
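The reduction formula on this slide can be checked directly against the Site rows of the earlier variance-component tables; only the function name is mine.

```python
def pct_site_variance_reduced(uncorrected_site, corrected_site):
    """(%var for the uncorrected site factor - %var for the corrected
    site factor) / %var for the uncorrected site factor, as a percent."""
    return 100.0 * (uncorrected_site - corrected_site) / uncorrected_site
```

For example, the visual-ROI FIPS figures (Site: 32.43 uncorrected, 27.04 corrected) give 16.62%, and the hand-ROI FIPS figures (21.80 and 17.03) give 21.88%, matching the table.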
Conclusions on Breath Hold Correction (for intensity-normalized MR images)
- Site-specific breath-hold calibration does not improve consistency measures of reliability, at least for the highly consistent fBIRN sensorimotor task.
- It produces modest increases in absolute-agreement measures of reliability for the fBIRN sensorimotor task.
- It reduces the unwanted variance associated with site by 16% to 31%, depending on ROI and processing choices.
- It did not reduce unwanted person-by-site variance for intensity-normalized MR images.
Statistical and Programming Integration WG
- Much of this work has migrated to other Statistics WGs, such as Data Processing.
- The WG is working with the BIRN CC to implement the workflow scheduling system Condor at the San Diego site.
Future Plans: Reliability and Calibration
- Complete the Phase I variance components analysis of breath-hold calibration: confirm the subject-specific analysis and complete the analysis of the auditory ROI.
- Compare results from the completely crossed design with a design in which run and day are nested under site.
- Compare the traditional method-of-moments (mean squares) approach to estimating variance components with a Bayesian method.
- Perform a generalizability and variance components analysis of the smooth-to-target correction.
- Test newly developed calibration methods on Phase I data.
Future Plans – Data Processing: Preprocessing
- Preprocess all image sets.
- Complete artifact detection.
- Upload preprocessed images and the artifact detection log into the Federated Database.
- Artifact-correct images and upload the corrected images.
Data Processing
- Train FIPS Power Users.
- Complete all subject-level analyses and upload them into the database.
- Extend FIPS to level II (several extensions might be required).
- Compare between-group analysis plans:
  - Conventional fixed-effects design with site and group
  - Conventional fixed-effects design with covariates
  - Site-specific covariate adjustment
  - Pooled covariate adjustment
  - Meta-analytic methods with site-specific error weighting
Future Plans - Statistical and Programming Integration
- Make the integration of FreeSurfer into the FIPS pipeline more generally available.
- Complete an implementation of Condor at the San Diego site.
Algorithm Development
- Extend the independent components analysis work done at Yale to multi-group and multi-task applications and incorporate it into FIPS (perhaps through FSL MELODIC).
- Extend the work done at BWH/MIT on the Multivariate Autoregressive (MAR) model for effective connectivity analyses to the multi-group context.
- Extend work with the expectation-maximization STAPLE method of analyzing inter-site differences to the voxel level.
- Further develop the UCI parametric response surface model and integrate it into the analysis pipeline.
- Continue work on group classifiers.
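A first-order MAR model of the kind mentioned for effective connectivity regresses each region's signal at time t on all regions' signals at time t-1. The least-squares sketch below is generic: it is not the BWH/MIT implementation, and the function names are mine.

```python
def solve(M, b):
    """Solve M a = b by Gauss-Jordan elimination (small systems only)."""
    n = len(b)
    aug = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))  # partial pivot
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def fit_mar1(series):
    """Least-squares fit of a first-order MAR model x_t = A x_{t-1} + e_t.

    series: list of m-dimensional observations (lists of floats).
    Returns the m x m coefficient matrix A; off-diagonal entries are the
    directed influences between regions ('effective connectivity').
    """
    m = len(series[0])
    X, Y = series[:-1], series[1:]      # lagged predictors and targets
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(m)] for i in range(m)]
    A = []
    for k in range(m):                  # one regression per region
        Xty = [sum(x[i] * y[k] for x, y in zip(X, Y)) for i in range(m)]
        A.append(solve(XtX, Xty))
    return A
```

Extending this to the multi-group context would mean, for example, fitting A per group and testing for differences in the connectivity coefficients.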
Future Plans: Revise Aim 3
- Current Aim 3: Develop statistical and machine learning tools to identify homogeneous subgroups.
- Proposed new aim: Develop novel statistical and machine learning tools to analyze multi-site imaging data (e.g., STAPLE, independent components analysis for multi-site, multi-task image data, parametric response surface modeling, MAR).
- The new aim would include the search for homogeneous subgroups to the extent that it is feasible, while acknowledging other novel methods in development.
- The new aim elevates the creative work being done in the Statistics Working Group to a formal project goal.
Revise Statistics Workgroup Structure
- Data Processing: Greg Brown
- Image Pipeline Forum: Doug Greve, Lee Friedman
- Level II Statistical Modeling of Multi-Site, Multi-Group Imaging Data
END
Timeline
- Train FIPS Power Users
- Vince's work
- Sandy's work
- Condor

Activities last 6 months
- Data download
- Using FIPS
- Extending FIPS to second-level analysis
- Phase II image analysis plan
- Variance components analysis of Phase I images

Future Plans
Testing the Testbed Hypothesis
- Testbed Hypothesis: Before a federated imaging database can be released to the medical and scientific community, it must be tested by performing a large-scale study involving patients.
- Alternative ("Field of Dreams") Hypothesis: If you build it, they will come.
Confirming the Testbed Hypothesis
- The Testbed Hypothesis is being confirmed (with a vengeance).
- Revisions of the testbed need to be programmed into our resource planning (especially personnel).
- What are the implications of the Testbed Hypothesis for the use of distributed imaging databases outside of arenas where they have been tested (e.g., longitudinal studies, drug trials)?
- There is a need for advocacy and exchange with medical scientists outside of BIRN.
Statistics Update: Discussion Points
- Aims
- Activities last 6 months:
  - Data download and testing the database
  - Creation of preprocessing scripts
  - Using FIPS
  - Extending FIPS to second-level analysis
  - Phase II image analysis plan
  - Variance components analysis of Phase I images
  - Pipeline Forum
- Future Plans
Subject Tailored Correction
Percent Variance Accounted For in Visual ROI (GENOVA)

                      FIPS Signed Magnitude Top 10%             AFNI
Variance Source       Uncorrected   Site Mean BH Corrected      Uncorrected   Site Mean BH Corrected
Person                18.24         37.02                       16.67         37.33
Day                    0.13          0.27                       Neg            0.03
Run                    0.02                                     Neg            0.30
Site                  32.43         22.44                       29.01         23.87
Person by Day         Neg            0.76                       Neg            1.77
Person by Run          0.65          1.74                       Neg            0.63
Person by Site         7.61          4.90                        5.28          7.58
Person by Hemisphere   2.81          -                          Neg            -
Person 3-ways         34.02         13.46                       41.78         25.93
Residual (4-way +)     2.89         16.22                        5.67          0.90