1
Uncertainty Analysis (I): Methods (PHYSTAT, Sept 2003)
2
The Hessian method

We continue to use χ²_global as the figure of merit, and we explore the variation of χ²_global in the neighborhood of the minimum with respect to the fit parameters a_α (α = 1, 2, 3, …, d).

[Figure: the (a₁, a₂) parameter plane; the standard fit sits at the minimum of χ², and nearby points are also acceptable fits.]
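A minimal numerical sketch of the quadratic approximation behind the Hessian method, assuming the usual convention Δχ²_global ≈ Σ_{α,β} H_{αβ} Δa_α Δa_β (so H is one half of the matrix of second derivatives at the minimum); the function name, parameter values, and Hessian below are purely illustrative:

```python
import numpy as np

def delta_chi2_quadratic(a, a0, hessian):
    """Quadratic approximation to the increase of chi2_global when the fit
    parameters move from the minimum a0 to a, with the convention
    delta_chi2 = da^T H da (H = half the second-derivative matrix)."""
    da = np.asarray(a, dtype=float) - np.asarray(a0, dtype=float)
    return da @ hessian @ da

# Toy usage: 2 parameters, a diagonal Hessian (illustrative numbers only)
a0 = np.array([0.5, 1.2])           # best-fit (standard set) parameters
H  = np.array([[4.0, 0.0],
               [0.0, 9.0]])         # Hessian of chi2_global at the minimum
print(delta_chi2_quadratic([0.6, 1.2], a0, H))   # -> 0.04 (up to rounding)
```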
3
The numerical computation of H by finite differences is nontrivial:
- the eigenvalues of H vary over many orders of magnitude;
- the computation of χ²_global is subject to small numerical errors, leading to discontinuities as a function of {a_α};
- the finite step size must therefore be direction dependent.

We devised an iterative method, diagonalize and rescale, which converges to a good complete set of eigenvectors of H (sketched below).
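A schematic sketch of such a diagonalize-and-rescale iteration: compute H by finite differences in coordinates rescaled so that the current eigenvalue estimates are of order one, diagonalize, rotate and rescale, and repeat. The step sizes, helper names, and toy χ² below are hypothetical; the real implementation uses direction-dependent steps tuned to the numerical noise in χ²_global.

```python
import numpy as np

def finite_diff_hessian(chi2, a0, steps):
    """Symmetric finite-difference estimate of H at a0, in the convention
    delta_chi2 = da^T H da (i.e. H = half the second-derivative matrix)."""
    d = len(a0)
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = steps[i]
            ej = np.zeros(d); ej[j] = steps[j]
            num = (chi2(a0 + ei + ej) - chi2(a0 + ei - ej)
                   - chi2(a0 - ei + ej) + chi2(a0 - ei - ej))
            H[i, j] = 0.5 * num / (4.0 * steps[i] * steps[j])
    return H

def iterate_eigenbasis(chi2, a0, n_iter=5):
    """Diagonalize-and-rescale iteration: work in coordinates z, a = a0 + B z,
    chosen so that the current eigenvalue estimates are ~1; then every
    finite-difference step changes chi2 by a comparable amount."""
    d = len(a0)
    basis = np.eye(d)                       # columns: current direction estimates
    for _ in range(n_iter):
        chi2_z = lambda z, B=basis: chi2(a0 + B @ z)
        H = finite_diff_hessian(chi2_z, np.zeros(d), steps=np.full(d, 1e-3))
        eigval, eigvec = np.linalg.eigh(H)
        # rotate to the new eigenvectors and rescale so the eigenvalues -> 1
        basis = basis @ eigvec / np.sqrt(np.abs(eigval))
    return basis   # a unit step along basis[:, k] gives delta_chi2 ~ 1

# Toy usage: quadratic chi2 whose eigenvalues span many orders of magnitude
chi2_toy = lambda a: 1e4 * a[0]**2 + 1e-2 * a[1]**2
print(iterate_eigenbasis(chi2_toy, np.array([0.0, 0.0])))
```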
4
Classical error formula for a variable X(a):

(ΔX)² = Δχ² Σ_{α,β} (∂X/∂a_α) (H⁻¹)_{αβ} (∂X/∂a_β)

Obtain better convergence using the eigenvectors of H. "Master Formula":

ΔX = (1/2) [ Σ_k ( X(S_k⁺) − X(S_k⁻) )² ]^(1/2)

S_k(+) and S_k(−) denote PDF sets displaced from the standard set, along the directions of the k-th eigenvector, by distance T = (Δχ²)^(1/2) in parameter space. (Available in the LHAPDF format: 2d alternate sets.)
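A minimal sketch of the master formula as written above, assuming the symmetric form ΔX = ½ [Σ_k (X(S_k⁺) − X(S_k⁻))²]^(1/2); the function name and numbers are illustrative only:

```python
import numpy as np

def master_formula(x_plus, x_minus):
    """Symmetric 'master formula' uncertainty on an observable X.
    x_plus[k], x_minus[k] are X evaluated with the PDF sets S_k(+), S_k(-)
    displaced along the k-th eigenvector by distance T = sqrt(delta_chi2)."""
    x_plus = np.asarray(x_plus, dtype=float)
    x_minus = np.asarray(x_minus, dtype=float)
    return 0.5 * np.sqrt(np.sum((x_plus - x_minus) ** 2))

# Hypothetical usage with d = 3 eigenvector directions (2d = 6 alternate sets)
print(master_formula([10.2, 10.1, 10.4], [9.8, 9.9, 9.7]))   # ~0.42
```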
5
The Lagrange Multiplier Method … for analyzing the uncertainty of PDF-dependent predictions.

The fitting function for constrained fits:

F(a, λ) = χ²_global(a) + λ X(a)    (λ: Lagrange multiplier; the constraint is controlled by the parameter λ)

Minimization of F [w.r.t. {a_α} and λ] gives the best fit for the value X(a_min, λ) of the variable X. Hence we obtain a curve of χ²_global versus X.
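A sketch of the Lagrange-multiplier scan, assuming the fitting function F(a, λ) = χ²_global(a) + λ X(a) and a generic minimizer; the helper names, the toy χ², and the use of scipy.optimize.minimize are illustrative, not the actual global-fitting code:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_fit_curve(chi2_global, observable, a_start, lambdas):
    """For each lambda, minimize F(a) = chi2_global(a) + lambda * X(a) over
    the fit parameters and record the point (X, chi2_global) on the curve."""
    curve = []
    a0 = np.asarray(a_start, dtype=float)
    for lam in lambdas:
        F = lambda a, lam=lam: chi2_global(a) + lam * observable(a)
        res = minimize(F, a0)
        a0 = res.x                        # warm-start the next value of lambda
        curve.append((observable(res.x), chi2_global(res.x)))
    return curve

# Toy usage: quadratic chi2 in two parameters, observable X(a) = a1 + a2
chi2 = lambda a: 4.0 * (a[0] - 0.5) ** 2 + 9.0 * (a[1] - 1.2) ** 2
X = lambda a: a[0] + a[1]
for x_val, chi2_val in constrained_fit_curve(chi2, X, [0.5, 1.2], [-2, -1, 0, 1, 2]):
    print(round(x_val, 3), round(chi2_val, 3))
```

Each λ gives one constrained best fit; scanning λ traces out χ²_global as a function of X.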
6
The question of tolerance

X : any variable that depends on the PDFs
X₀ : the prediction in the standard set
χ²(X) : curve of constrained fits

For the specified tolerance (Δχ² = T²) there is a corresponding range of uncertainty, ΔX. What should we use for T?
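For a given T, the range ΔX can be read off the curve of constrained fits: it is the interval of X over which χ² stays within χ²_min + T². A minimal sketch (the sampled parabolic curve and the value T = 10 are purely illustrative):

```python
import numpy as np

def uncertainty_range(x_vals, chi2_vals, tolerance_T):
    """Given a sampled curve chi2(X) from constrained fits, return the interval
    of X over which chi2 stays within chi2_min + T^2."""
    x = np.asarray(x_vals, dtype=float)
    c = np.asarray(chi2_vals, dtype=float)
    allowed = x[c <= c.min() + tolerance_T ** 2]
    return allowed.min(), allowed.max()

# Hypothetical parabolic curve chi2(X) = chi2_min + ((X - X0)/0.05)^2, with T = 10
X = np.linspace(9.4, 10.6, 241)
chi2 = 100.0 + ((X - 10.0) / 0.05) ** 2
print(uncertainty_range(X, chi2, 10.0))   # approximately (9.5, 10.5), i.e. X = X0 +- 0.5
```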
7
Estimation of parameters in Gaussian error analysis would have T = 1. We do not use this criterion.
8
Aside: the familiar ideal example

Consider N measurements {μ_i} of a quantity μ, with normal errors {σ_i}. Estimate μ by minimization of χ²:

χ² = Σ_i (μ_i − μ)² / σ_i²

The mean of μ_combined is μ_true, the SD is σ̄ (with σ̄ = σ/√N for equal errors σ_i = σ), and Δχ² = 1 corresponds to μ = μ_combined ± σ̄.

The proof of this theorem is straightforward. It does not apply to our problem because of systematic errors.
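A quick numerical check of the ideal case, with illustrative values of μ_true, σ, and N: minimizing χ²(μ) = Σ_i (μ_i − μ)²/σ² gives the sample mean, and Δχ² = 1 is reached exactly at μ = μ_combined ± σ/√N.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, N = 5.0, 0.5, 100
data = rng.normal(mu_true, sigma, N)       # N measurements with equal errors

def chi2(mu):
    return np.sum((data - mu) ** 2) / sigma ** 2

mu_hat = data.mean()                       # minimizes chi2(mu)
sigma_bar = sigma / np.sqrt(N)             # SD of the combined estimate

# Delta chi2 = 1 is reached at mu_hat +- sigma_bar
print(chi2(mu_hat + sigma_bar) - chi2(mu_hat))   # -> 1.0 (up to rounding)
```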
9
Add a systematic error to the ideal model…

Estimate μ by minimization of χ² (s: systematic shift, μ_i: observable, β: size of the systematic error), schematically

χ² = Σ_i (μ_i − s β − μ)² / σ_i² + s²

Then, letting the systematic shift be optimized, again Δχ² = 1 corresponds to μ = μ_combined ± σ̄, where (for simplicity suppose σ_i = σ)

σ̄² = σ²/N + β²
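A toy pseudo-experiment check of this result. It assumes a fully correlated systematic of size β added to every point and the χ² form sketched above, χ²(μ, s) = Σ_i (μ_i − sβ − μ)²/σ² + s² (a common convention, not necessarily the exact notation of the original slide); for this form the best-fit μ of a single pseudo-experiment is simply the sample mean, and its spread over many pseudo-experiments reproduces σ̄² = σ²/N + β².

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma, beta, N = 5.0, 0.5, 0.2, 100   # illustrative values

def fitted_mu():
    """One pseudo-experiment: N points with statistical error sigma plus one
    common (fully correlated) systematic shift of size beta.  Minimizing
    chi2(mu, s) = sum_i (mu_i - s*beta - mu)^2/sigma^2 + s^2 over (mu, s)
    gives mu_hat = mean(mu_i) (the optimal s turns out to be zero)."""
    s_true = rng.normal()
    data = mu_true + rng.normal(0.0, sigma, N) + s_true * beta
    return data.mean()

estimates = np.array([fitted_mu() for _ in range(20000)])
print(estimates.std())                        # spread of the fitted mu
print(np.sqrt(sigma**2 / N + beta**2))        # sigma_bar from the formula
```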
10
Still we do not apply the criterion Δχ² = 1!

Reasons:
- We keep the normalization factors fixed as we vary the point in parameter space.
- The criterion Δχ² = 1 requires that the systematic shifts be continually optimized versus {a_α}.
- Systematic errors may be non-Gaussian.
- The published "standard deviations" σ_ij may be inaccurate.

We trust our physics judgement instead.
11
Some groups do use the criterion of Δχ² = 1 for PDF error analysis. Often they are using limited data sets, e.g., an experimental group using only their own data. Then the Δχ² = 1 criterion may underestimate the uncertainty implied by systematic differences between experiments.

An interesting compendium of methods by R. Thorne:

  CTEQ6    Δχ² = 100
  ZEUS     Δχ² = 50 (effective)
  MRST01   Δχ² = 20
  H1       Δχ² = 1
  Alekhin  Δχ² = 1
  GKK      not using Δχ²
12
To judge the PDF uncertainty, we return to the individual experiments. Lumping all the data together in one variable, χ²_global, is too constraining. Global analysis is a compromise. All data sets should be fit reasonably well; that is what we check. As we vary {a_α}, does any experiment rule out the displacement from the standard set?
13
In testing the goodness of fit, we keep the normalization factors (i.e., optimized luminosity shifts) fixed as we vary the shape parameters.

End result: e.g., Δχ² ≈ 100 for ~2000 data points. This does not contradict the Δχ² = 1 criterion used by other groups, because that refers to a different χ², one in which the normalization factors are continually optimized as the {a_α} vary.