Jeff J. Orchard, M. Stella Atkins
School of Computing Science, Simon Fraser University


Freire et al. (1) pointed out that least-squares-based registration methods are sensitive to the BOLD signal present in fMRI experiments, resulting in stimulus-correlated motion errors. Our goal is to offer a theoretical justification for these BOLD-induced registration errors.

Volume Registration

Least-squares rigid-body registration involves finding a transformation T that optimally aligns volumes U and V. Thus, we seek a solution (in the least-squares sense) to the equation

    U = T(V),    [1]

where the vector θ holds the 6 rigid-body motion parameters of T. For small motions, the transformation can be linearly approximated by

    T(V) ≈ V + Gθ,

where G holds the gradients of V with respect to the motion parameters. Thus, Eqn [1] can be re-written

    U ≈ V + Gθ.    [2]

BOLD Signal

The volume U can also be approximated as a copy of V with added activation,

    U ≈ V + αA,    [3]

where A is the known activation map, and α is the level of activation in U.

Linear Error Estimate

Thus, for small motions, we can combine Eqns [2] and [3] to get

    Gθ ≈ αA.    [4]

The least-squares solution of Eqn [4] can be found for θ (the motion parameters):

    θ ≈ α (GᵀG)⁻¹ Gᵀ A.    [5]

If we assume that U and V are properly aligned, this gives the first-order error in the motion estimate due to activation.

Simulations

A simulated dataset of 40 volumes was created by duplicating an EPI volume with dimensions 64x64x30 (Fig 1), adding activation (Fig 2) and 2.5% Gaussian noise. No motion was added to the dataset.

Fig 1. Slice from the original EPI volume. The overlay shows the region of activation.

Fig 2. Stimulus function (activation level versus volume number).

Fig 3 shows the least-squares motion estimates for the simulated dataset. The dataset is motionless, so any detected motion is erroneous. Notice the high correlation between the motion plots and the stimulus function in Fig 2.

Fig 3. Motion estimates (roll, pitch and yaw in degrees; superior, left and posterior translations in mm, plotted against volume number) for a motionless dataset containing activation governed by the stimulus function shown in Fig 2. Notice the high correlation with the stimulus function.

Using a linear stimulus function (Fig 4), a linear trend was also observed in the motion estimates (Fig 5).

Fig 4. Linear stimulus function (activation level versus volume number).

Fig 5. Motion estimates for a motionless dataset containing activation governed by the linear stimulus function shown in Fig 4. Notice the linear trends in the motion estimates.

Fig 6 plots the motion parameter estimates calculated by the least-squares registration algorithm and the theoretical estimate of the error, based on Eqn [5].

Fig 6. Motion estimates of the least-squares registration algorithm (blue) and the theoretical model (red). The theoretical model tracks the actual registration error from the least-squares registration algorithm. In this case, the model does particularly well at predicting the translation in the anterior/posterior direction, and the pitch angle.

More detail...

Depending on where linear approximations are applied, different approximation formulas can be derived for the same quantity. For example, from Eqn [2] we can solve for the motion parameters. At convergence of the registration algorithm, we have 0 ≈ Gθ + (U − V). Since we can approximate U as T(V + αA), which can in turn be approximated linearly, we arrive at Eqn [6], in which G is replaced by the gradient of V + αA with respect to the motion parameters, and the motion estimates enter negated (because the motion approximation is applied after the addition of the activation, instead of before, as above). Comparing the right-hand sides of Eqns [5] and [6], they are the same, except Eqn [6] has a large, additional matrix. For small motions, that matrix is very close to the identity matrix, and numerical simulations yield almost identical results. Similar arguments can be made for other approximating formulas.

Conclusions

We have shown that the BOLD signal from activation does, in fact, influence the motion estimates calculated by least-squares registration algorithms, and that the shape and distribution of the region of activation affect the motion errors. This work offers a theoretical justification for the errors observed in least-squares registration algorithms due to the presence of the BOLD signal.

Future Directions

The theory outlined here is a tool that can be used to study how the region of activation impacts the motion estimates. A least-squares algorithm that incorporates both the motion and activation simultaneously might avoid the interference between the two.

References

1. Freire L, Mangin J-F. Motion correction algorithms of the brain mapping community create spurious functional activations. IPMI 2001.

Acknowledgments

The authors thank Dr. Bruce Bjornson of British Columbia's Children's Hospital for helpful discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
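The first-order error estimate of Eqn [5] is easy to probe numerically. The sketch below builds a toy one-dimensional analogue (a smooth intensity profile for V, a small off-centre blob for the activation map A, and a single translation parameter); the signal shapes, sizes, and variable names are illustrative assumptions, not the poster's data or code. It also shows that the spurious estimate scales linearly with the activation level α, consistent with the linear trend observed with the linear stimulus function (Figs 4 and 5).

```python
import numpy as np

# Toy 1-D analogue of Eqn [5]: a motionless "volume" with added
# activation still yields a nonzero least-squares motion estimate.
n = 256
x = np.linspace(-1.0, 1.0, n)

V = np.exp(-x**2 / 0.1)                 # reference volume V
A = np.exp(-((x - 0.3)**2) / 0.01)      # activation map A (off-centre blob)

# G: gradient of V with respect to the single motion parameter.
# For a pure translation this is (up to sign) the spatial derivative of V.
G = np.gradient(V, x).reshape(-1, 1)

alpha = 0.02                            # ~2% activation level
U = V + alpha * A                       # Eqn [3]: activation, no motion

# Least-squares solution of G*theta ≈ U - V  (Eqns [4]-[5]).
theta, *_ = np.linalg.lstsq(G, U - V, rcond=None)
print("spurious motion estimate:", theta[0])

# Doubling alpha doubles the error: the estimate is linear in alpha.
theta2, *_ = np.linalg.lstsq(G, 2 * alpha * A, rcond=None)
print("ratio:", theta2[0] / theta[0])   # ≈ 2.0
```

The estimate is nonzero because the activation blob overlaps a region where V has nonzero gradient, so GᵀA ≠ 0; an activation pattern orthogonal to the motion gradients would induce no error.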
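The comparison in Fig 6 (actual least-squares registration error versus the Eqn [5] prediction) can likewise be mimicked in one dimension. The sketch below runs a simple Gauss-Newton least-squares registration over a single translation and compares the converged estimate with the closed-form prediction; the Gaussian signals, grid, and iteration count are illustrative assumptions, not the registration code used in the poster.

```python
import numpy as np

# 1-D analogue of the Fig 6 experiment: register a motionless,
# activated volume U to V by least squares, then compare the
# converged translation with the linear prediction of Eqn [5].
n = 512
x = np.linspace(-2.0, 2.0, n)
V = np.exp(-x**2 / 0.2)                      # reference volume
A = np.exp(-((x - 0.4)**2) / 0.02)           # activation map
alpha = 0.03
U = V + alpha * A                            # no motion, only activation

def shift(img, t):
    """Translate img by t (in units of x) with linear interpolation."""
    return np.interp(x - t, x, img, left=0.0, right=0.0)

# Gauss-Newton least-squares registration over one translation parameter.
t_reg = 0.0
for _ in range(20):
    Vt = shift(V, t_reg)
    J = -np.gradient(Vt, x)                  # d(Vt)/dt, since Vt(x) = V(x - t)
    r = U - Vt                               # current residual
    t_reg += np.dot(J, r) / np.dot(J, J)     # least-squares update

# Closed-form first-order prediction, Eqn [5] with G = dV/dt at t = 0.
G = -np.gradient(V, x)
t_pred = alpha * np.dot(G, A) / np.dot(G, G)

print("registration estimate:", t_reg)
print("Eqn [5] prediction:  ", t_pred)       # the two should agree closely
```

The first Gauss-Newton step from t = 0 is exactly the Eqn [5] prediction; the converged estimate differs only by higher-order terms, which is why the theoretical model in Fig 6 tracks the actual registration error so well for small activation levels.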