Multi-exponential decomposition for MR imaging of HCC and fibrosis
Giovanni Motta
Jan 7, 2005


Sequences
UTE, fat saturation, 4 echoes
20 sequences, 256x256 (4) or 320x320 (16)
–TE = 0.08, 3.25, 6.42 and 9.59 ms (2)
–TE = 0.08, 4.53, 8.98 and 13.5 ms (11)
–TE = 0.08, 5.81, 11.6 and 17.4 ms (4)
–TE = 0.08, 6.90, 13.8 and 19.6 ms (3)
One slice from each sequence

Example
UTE_0015: TE = 0.08, 5.81, 11.6 and 17.4 ms, 320x320 pixels

Model
Voxel value is proportional to the transverse magnetization of the corresponding volume.
Transverse magnetization decays exponentially with TE.
The time behavior of a voxel can be described by a linear combination of exponentials (plus a residual error).

Model Exponentials are the basis functions used in the decomposition
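The model above can be sketched numerically. The slides' own program was written in Matlab; the Python/NumPy function below is an illustrative stand-in, using the UTE_0015 echo times from the slides and hypothetical amplitudes and T2 values:

```python
import numpy as np

def multi_exp_signal(te, m, t2):
    """Linear combination of exponential decays:
    S(TE) = sum_i m[i] * exp(-TE / t2[i])."""
    te = np.asarray(te, dtype=float)
    return sum(mi * np.exp(-te / t2i) for mi, t2i in zip(m, t2))

# Echo times from sequence UTE_0015 (ms); amplitudes are illustrative
te = np.array([0.08, 5.81, 11.6, 17.4])
signal = multi_exp_signal(te, m=[100.0, 50.0], t2=[13.0, 6.0])
```

Each exponential term acts as one basis function; fitting the decomposition means choosing the `m` and `t2` values that best reproduce the measured echoes.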

Method
Given the four echoes S(TE_1), …, S(TE_4), we want to solve

S(TE_k) = Σ_i M_i · exp(−TE_k / T2_i),  k = 1, …, 4

with respect to the amplitudes M_i and the decay constants T2_i

Method
An exact solution is not always possible, so we look for an approximation that minimizes the squared error

E = Σ_k ( S(TE_k) − Σ_i M_i · exp(−TE_k / T2_i) )²

where S(TE_k) is the measured echo at echo time TE_k
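Since the amplitudes enter the model linearly, one common way to carry out this minimization is a grid search over candidate T2 pairs with a linear least-squares solve at each grid point. The slides do not show the original implementation; the sketch below is a hypothetical Python version of that idea, with an illustrative grid and test signal:

```python
import itertools
import numpy as np

def fit_two_exponentials(te, s, t2_grid):
    """Grid search over (T2_A, T2_B) pairs; at each pair the
    amplitudes are found by linear least squares, and the pair
    with the smallest squared error wins."""
    best = None
    for t2a, t2b in itertools.combinations(t2_grid, 2):
        A = np.column_stack([np.exp(-te / t2a), np.exp(-te / t2b)])
        m, *_ = np.linalg.lstsq(A, s, rcond=None)
        err = np.sum((A @ m - s) ** 2)
        if best is None or err < best[0]:
            best = (err, (t2a, t2b), m)
    return best[1], best[2], best[0]

# Noise-free synthetic voxel with the T2 values from the example slide
te = np.array([0.08, 5.81, 11.6, 17.4])
s = 80.0 * np.exp(-te / 13.0) + 40.0 * np.exp(-te / 6.0)
t2s, m, err = fit_two_exponentials(te, s, t2_grid=np.arange(1.0, 21.0))
```

On noise-free data the search recovers the generating pair exactly; with measurement noise the minimum-error pair is only an estimate.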

Example
UTE_0015: voxel at coordinates (208, 63). Best fit with T2_A = 13 and T2_B = 6.

Advantages
Short term
–Allows generation of synthetic images for arbitrary TE
–Exponentials and reconstruction error can be isolated and imaged individually
–Subtracting the reconstruction error from the image provides a form of denoising
Long term
–The parameters of this representation can be used in the classification of the voxels
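The first advantage follows directly from the model: once per-voxel amplitude and T2 maps have been fitted, a synthetic echo at any TE is just the model evaluated there. A minimal sketch, with hypothetical 2x2 parameter maps for a two-component fit:

```python
import numpy as np

def synthesize_echo(m_maps, t2_maps, te):
    """Evaluate the fitted decay model at an arbitrary TE.
    m_maps, t2_maps: arrays of shape (n_components, H, W)."""
    return np.sum(m_maps * np.exp(-te / t2_maps), axis=0)

# Hypothetical fitted maps for a tiny 2x2 image
m_maps = np.array([[[80.0, 60.0], [70.0, 50.0]],
                   [[40.0, 30.0], [20.0, 10.0]]])
t2_maps = np.array([[[13.0, 13.0], [13.0, 13.0]],
                    [[6.0, 6.0], [6.0, 6.0]]])
img = synthesize_echo(m_maps, t2_maps, te=2.5)
```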

Main Assumption
The parameters of this decomposition are an advantageous representation of all the information necessary for the classification

Experiments with two exponentials
[Table: T2 settings for experiment sets 1, 2 and 3]
In experiment sets 1 and 3 the system is non-linear
Unknowns can only assume non-negative values

Experiments with four exponentials
T2 = 100, 20, 10 and 5 ms
With the T2 values fixed, the system of equations is linear
Non-negativity constraints on M
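With the four T2 values fixed, each voxel's amplitudes come from a non-negative linear least-squares problem. The slides do not name a solver; the sketch below assumes SciPy's NNLS routine is available, and uses the slide's T2 values with illustrative echo times and amplitudes:

```python
import numpy as np
from scipy.optimize import nnls

# Fixed T2 values from the slide (ms) and UTE_0015 echo times (ms)
t2 = np.array([100.0, 20.0, 10.0, 5.0])
te = np.array([0.08, 5.81, 11.6, 17.4])

# Design matrix: one exponential basis function per column
A = np.exp(-te[:, None] / t2[None, :])

# Synthetic voxel with known non-negative amplitudes
m_true = np.array([50.0, 30.0, 0.0, 20.0])
s = A @ m_true

# Non-negative least squares enforces the M >= 0 constraint
m_hat, residual = nnls(A, s)
```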

Supervised Classification
Rudimentary nearest-neighbor classification.
Voxels are represented by the parameters of the decomposition.
Given a fixed set of parameters (the target), find the voxels whose parameters are closer than a predetermined amount (the threshold).
Distance is measured by the squared error between the two sets of parameters.
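This thresholded matching can be sketched in a few lines; the parameter maps, target vector and threshold below are all hypothetical:

```python
import numpy as np

def match_target(params, target, threshold):
    """Mark voxels whose decomposition parameters lie within
    `threshold` (squared error) of the target parameter vector.
    params: (H, W, P) parameter maps; target: (P,) vector."""
    sq_err = np.sum((params - target) ** 2, axis=-1)
    return sq_err < threshold

# Hypothetical 2x2 image with two parameters (T2_A, T2_B) per voxel
params = np.array([[[13.0, 6.0], [13.2, 5.9]],
                   [[20.0, 5.0], [100.0, 10.0]]])
mask = match_target(params, target=np.array([13.0, 6.0]), threshold=1.0)
```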

Example Matlab program Experiment sets 2 and 3 Set 1Var 00 Set 2Var Set 3Var 115Var (±2)

Unsupervised Classification
Voxels are represented by the parameters of the decomposition.
Given a fixed number of classes, partition the voxels into classes so that voxels belonging to the same class have similar parameters.
Distance is measured by the squared error.
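Partitioning into a fixed number of classes under squared-error distance is essentially k-means clustering. The slides do not name the algorithm used; the sketch below is a plain NumPy version run on synthetic, well-separated parameter vectors:

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means on voxel parameter vectors, using the
    squared-error distance. x: (N, P); returns labels, centers."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels, centers

# Two synthetic, well-separated clusters of 2-D parameter vectors
x = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
labels, centers = kmeans(x, k=2)
```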

Unsupervised Classification
Two exponentials, T2 = 20 and 5 ms. 8 classes (left) and 16 classes (right)

Unsupervised Classification
Four exponentials, T2 = 100, 20, 10 and 5 ms. 8 classes (left) and 16 classes (right)

What’s Next?
–Speed up decomposition
–Verify assumptions (linearity, for example)
–More echoes
–Sub-pixel operations
–Registration of different sequences
–Classification
–Integration with anatomic info
–…