
Fisher Information Matrix of DESPOT


1 Fisher Information Matrix of DESPOT
Dec 17, 2012

2 Theory
The Fisher Information Matrix (FIM) is a key component of the Cramér-Rao Lower Bound (CRLB).
It measures how sensitive the output signals (SPGR and SSFP images) are to the input parameters (tissue characteristics: T1s, T1f, MWF, etc.).
For an unbiased estimator, as Lankford assumes with a genetic algorithm, the FIM completely defines the CRLB.

3 Theory
gi = the signal equation of the ith image
θj = the jth tissue parameter
Σ = the added-noise correlation matrix, scaled per Lankford: 1e-4·M0 for SPGR and sqrt(3)·1e-4·M0 for SSFP; assumed diagonal (independent noise)
σθj = the CRLB precision bound on the jth parameter for an unbiased estimator
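The quantities defined above combine in the standard CRLB recipe: F = Jᵀ Σ⁻¹ J, and σθj = sqrt([F⁻¹]jj). A minimal numpy sketch of that relationship (the function name and array shapes are illustrative, not from the deck):

```python
import numpy as np

def crlb_bounds(J, sigma):
    """CRLB precision bounds for an unbiased estimator.

    J     : (n_images, n_params) Jacobian, dg_i/dtheta_j
    sigma : (n_images,) noise standard deviations (diagonal Sigma)
    """
    # Fisher Information Matrix: F = J^T Sigma^{-1} J
    F = J.T @ np.diag(1.0 / sigma**2) @ J
    # CRLB: Var(theta_j) >= [F^{-1}]_{jj}
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

For the deck's setup, J would be the 25x8 numerical Jacobian and sigma the Lankford-scaled noise levels per image.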

4 The Jacobian
Numerical estimation of the Jacobian matrix is performed for one of the tissues specified in Lankford (which comes from Gleaning)
Sample tissue X = {'T1s','T1f','T2s','T2f','fF','kFS','offResonance','M0'}
With 7 (SPGR) + 9 (SSFP phase 0) + 9 (SSFP phase 180) = 25 images, the resulting size is 25x8
Looking at the Jacobian itself seems useful because it gives a feeling for both the sensitivity and the specificity of a sequence to a tissue parameter
The intrinsic signal-level difference between sequences makes them difficult to compare; I tried to account for this by normalizing by the signal value

5 Numerical Estimation of Partial Derivatives
There are a few ways to compute partial derivatives numerically
Usually a constant step size, h, is assumed
2-point, 3-point, and 5-point methods are then defined by evaluating gi(X;θ) systematically over a discrete range of θ values; for the 5-point method, for example, θj ± nk·h, where nk = 0, 1, 2
I compute the two simplest forms:
Forward difference = [g(X; θj+h) − g(X; θj)] / h
Central difference = [g(X; θj+h) − g(X; θj−h)] / 2h
Forward-difference error ∝ h; central-difference error ∝ h²
Choosing h is a balancing act: we want it as small as possible, but we don't want to run into machine-precision rounding errors
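The two difference formulas above can be sketched directly (numpy used for convenience; the helper names are illustrative):

```python
import numpy as np

def forward_diff(g, theta, j, h=1e-9):
    """Forward difference in parameter j: error is O(h)."""
    e = np.zeros_like(theta)
    e[j] = h
    return (g(theta + e) - g(theta)) / h

def central_diff(g, theta, j, h=1e-9):
    """Central difference in parameter j: error is O(h^2)."""
    e = np.zeros_like(theta)
    e[j] = h
    return (g(theta + e) - g(theta - e)) / (2 * h)
```

Stacking these columns over all parameters j (and all images) gives the 25x8 Jacobian used in the following slides.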

6 Step Size Importance
h = 1e-4 h = 1e-9
Central difference converges to a good answer much faster than the forward difference. Once h is small enough, they both look alike. In all following slides, h = 1e-9.

7 Step Size Importance
h = 1e-12
If you pick too small a step size, you can run into precision errors
This is an obvious case of that; at h = 1e-11 it is more subtle
In general, use the "good" region of h, where values are accurate (small h) and the estimation methods converge to the same answer
You should see a similar J across a good range of h
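One way to locate that "good" region is to sweep h and look for where the forward and central estimates agree; a rough sketch (this helper is an assumption for illustration, not from the deck):

```python
import numpy as np

def step_size_sweep(g, theta, j, hs=10.0 ** -np.arange(2, 13)):
    """Scan step sizes and return (h, forward, central) triples.

    Agreement between the two estimators over a range of h is a
    proxy for the accurate, rounding-error-free region.
    """
    out = []
    for h in hs:
        e = np.zeros_like(theta)
        e[j] = h
        fwd = (g(theta + e) - g(theta)) / h
        cen = (g(theta + e) - g(theta - e)) / (2 * h)
        out.append((h, fwd, cen))
    return out
```

Plotting the triples against h reproduces the kind of convergence comparison shown on these slides.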

8 Step Size Importance – Rates
h = 1e-4 (Lankford) h = 1e-9
Lankford uses a large step size, which seems to be okay when the derivative is computed w.r.t. relaxation rates instead of relaxation times. However, later plots show this may still be an issue.
Matlab gives (more) warnings about the scaling of the inv(F) matrix when the problem is formulated in rates.
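For reference, switching from times to rates only rescales the relaxation columns of the Jacobian via the chain rule: with R = 1/T, dT/dR = −1/R² = −T², so dg/dR = −T²·dg/dT. A small sketch under that assumption (function name illustrative):

```python
import numpy as np

def jacobian_times_to_rates(J_T, T):
    """Convert Jacobian columns from dg/dT to dg/dR.

    Chain rule with R = 1/T: dg/dR = (dg/dT) * (dT/dR) = -T^2 * dg/dT.

    J_T : (n_images, n_relax) Jacobian w.r.t. relaxation times
    T   : (n_relax,) relaxation times for those columns
    """
    return -J_T * (T**2)[None, :]
```

This rescaling is why the rate and time formulations can look so different in sensitivity even though they carry the same information.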

9 Methods
h = 1e-9 for all following slides
Several protocols are compared:
mcDESPOT 2pc – 7 SPGR angles and 9+9 SSFP angles with phase 0 and phase 180 cycling
mcDESPOT 2pc mean norm – mean-normalized version of the above within each subtype (SPGR, SSFP ph0, SSFP ph180)
mcDESPOT 1pc – no phase 0 images; relevant because Lankford noticed that adding phase 0 significantly improved the precision
3pc – add 9 more SSFP phase 90 images
As mentioned, let's adopt some clearer terminology. mcDESPOT without the second phase cycle is "traditional" mcDESPOT according to Lankford, but let's give it a clear name, mcDESPOT-1pc perhaps, so that we can have a consistent notation such as mcDESPOT-2pc and mcDESPOT-3pc. Also, I guess we use mcDESPOT when we are deriving T1s, T1f, etc. from the data, whereas we use DESPOT2 when we are deriving a single-component T2, and DESPOT-FM to refer to DESPOT2 with a second phase cycle? Maybe DESPOT2-1pc and DESPOT2-2pc is a consistent terminology there. I am often reluctant to invent new terminology when the existing conventions are good enough, but in this case let's discuss it. I suggest you work in R1 instead of T1 units, as in Lankford; I usually prefer R over T myself.

10 2pc: Jacobian
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
The column labels for each tissue parameter are shown at the bottom. Unfortunately, the scale makes it difficult to compare the sensitivity of the sequences: SSFP phase 180 has far more signal than the others, so it naturally varies more in raw signal values and dominates the color scale.

11 2pc: J/S = Jacobian/Signal
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
The column labels for each tissue parameter are shown at the bottom. We can adjust for the scale imbalance by normalizing by the signal level.

12 2pc: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
Here we can see that SPGR is specific to T1 estimates (of course) and appears more sensitive to kFS than SSFP phase 0. The last column, for M0, is pretty much extraneous: M0 is linear in the signal equations, so normalizing by the signal value makes it constant. SSFP phase 180 is sensitive but specific to nearly nothing, though perhaps fF and kFS have the greatest effect. Higher flips of SSFP phase 180 are less sensitive to off resonance because the signal curve smooths out. SSFP phase 0 is interestingly less sensitive to T2s, so it could be more specific to the fast pool.
BR: what do you think is going on with the sudden change at the max angle of SSFP180 (or the min angle of SSFP0)? The sensitivity seems to change abruptly there compared to other angles in the same set.
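The J/S normalization and log-magnitude display used in these slides can be sketched as (the function name and the small eps guard against log(0) are illustrative):

```python
import numpy as np

def log_abs_relative_jacobian(J, S, eps=1e-30):
    """log10(|J/S|): Jacobian normalized row-wise by signal level.

    J : (n_images, n_params) raw Jacobian
    S : (n_images,) signal values g_i at the sample tissue
    """
    return np.log10(np.abs(J / S[:, None]) + eps)
```

Because M0 enters the signal equations linearly, its column of J/S is constant, which is why the M0 column carries little information after this normalization.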

13 2pc mean norm: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
With mean normalization, the scales of the SPGR and SSFP phase 0 are now comparable to the SSFP phase 180. Some strange holes appear in SSFP phase 180 at certain flips, which would mean those angles do not change much. I don't think this is a step-size issue, because both difference methods give the same result. If true, this is a strange artifact of mean normalization and could be a clue into its deficiency.
BR: The observation of these "holes" or crud in the sensitivity maps is definitely worth following up on! And yes, this may be more evidence that mean normalization is a bad thing. This seems like one clear outcome of your recent analysis, and if we can confirm it, we should get ready to publish this plus other aspects of your analysis, as well as alternative mcDESPOT acquisition and reconstruction methods.

14 3pc: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 90
The top ¾ of the image is the same as for mcDESPOT, as it should be. Notably, the phase 90 images are highly sensitive to off resonance, because the phase 90 point is on a steeper part of the SSFP-vs-phase curve; phase 0 and phase 180 are both at local minima or maxima for an on-resonance tissue at any flip angle. In reality, since tissue is not necessarily on resonance, perhaps the goal is to sample the phase-cycle dimension well enough that we have at least some data on this steep part of the curve, though maybe not over a whole flip-angle range. The condition number is worse here than for 2pc despite better performance in the CRLB, so it may not be a great heuristic.

15 Unbiased Estimator, CRLB
Mean normalization appears to be hurting us significantly. Three phase cycles really help with off resonance! They seem to provide more benefit than adding phase 0 (1pc -> 2pc). I don't see as much improvement from 1pc to 2pc as Lankford saw, especially for fF; he calls it traditional vs. B0-corrected mcDESPOT. This should be the same tissue as in the first graph of his Fig 4. One key difference in the analysis: it's not stated clearly, but I think Lankford does not fit for off resonance with mcDESPOT-1pc and assumes on-resonance, so his traditional and "B0-corrected mcDESPOT assuming on-resonance" are the only ones actually using the same signal equation. I always fit for it.

16 eig(F)
I thought this might be another interesting way of looking at the problem. The eigenvalues of F are equivalent to the squared singular values of J, so this plot shows the dimensionality of the Jacobian/FIM. The condition number is the ratio of the largest to smallest singular value of J: "This value was a rough estimate of the ratio of relative error in estimated parameters to the relative error in signal." Therefore, a flat scree plot is desirable; e.g., the identity matrix would have all 1's.
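The eigenvalue/condition-number view described above can be computed as (a sketch; it assumes the noise scaling has already been folded into J so that eig(F) equals the squared singular values of J):

```python
import numpy as np

def scree_and_condition(J):
    """Eigenvalues of F = J^T J (squared singular values of J)
    and the condition number of J.

    A flat scree plot (condition number near 1) means all
    parameter directions are about equally well determined.
    """
    s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
    return s**2, s[0] / s[-1]
```

For the identity matrix, all eigenvalues are 1 and the condition number is 1, the ideal flat scree plot mentioned above.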

17 Rate – 2pc: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
h = 1e-4 is bad! Lankford uses the forward difference.

18 Rate – 2pc: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
h = 1e-9 is better. Formulating the problem in rates seems to show fF and kFS having less sensitivity than the relaxation parameters, the opposite of the relation seen with relaxation times. Perhaps that is to be expected given the reciprocal relationship of rate and time, but it further complicates understanding.

19 Rate – 2pc mean norm: log(abs( J/S ))
First 7 rows = SPGR
Next 9 rows = SSFP phase 0
Last 9 rows = SSFP phase 180
h = 1e-9

20 Rate – Estimator, CRLB
h = 1e-9
Interestingly, the mean-normalized values seem to be EXACTLY the same as before. There is no apparent bug in the code, but it doesn't make sense to me. Most 1pc parameters are around 10^0, lower than Lankford's by about an order of magnitude. This would suggest mcDESPOT is at the edge of usability, with roughly 10% precision.

