A Neural Network Monte Carlo approach to nucleon form factor parametrization. 2nd CLAS12 European Workshop, Paris, 10-03-2011. In collaboration with: A. Bacchetta – University of Pavia, M. Guagnelli – INFN Pavia, J. Rojo – INFN Milano
Drawbacks of the standard Hessian method. In the standard Hessian method the covariance matrix determines both the parameter errors (diagonal elements, including correlations) and the error propagation to generic observables. This has several drawbacks:
- A large value of the final χ2 (indicating a bad fit) does not directly translate into larger estimates of the parameter errors, which are driven only by the form of the Hessian at the minimum.
- The standard statistical prescription Δχ2 = 1 often yields unrealistically small errors, due to incompatible data sets or to the presence of systematics and theoretical uncertainties.
- The validity of linear error propagation is assumed.
- Error estimates depend heavily on the functional form chosen for the parametrization.
- Error propagation from data to model parameters, and from parameters to observables, is not trivial!
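As a minimal sketch of the Hessian machinery criticized above, the following NumPy snippet applies the Δχ2 = 1 prescription and linear error propagation to a hypothetical dipole-like parametrization. The model, data, and grid search are all illustrative stand-ins, not the actual fit of the talk:

```python
import numpy as np

# Illustrative dipole-like parametrization (NOT the actual fit function):
# F(Q2) = p0 / (1 + Q2/p1)^2
def model(p, q2):
    return p[0] / (1.0 + q2 / p[1]) ** 2

# Hypothetical, noise-free pseudo-data generated from p = (1.0, 1.0)
q2 = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
y = model(np.array([1.0, 1.0]), q2)
sigma = np.full_like(y, 0.02)

def chi2(p):
    return np.sum(((y - model(p, q2)) / sigma) ** 2)

# Locate the minimum (a real fit would use a proper optimizer)
grid = [(a, b) for a in np.linspace(0.8, 1.2, 81)
               for b in np.linspace(0.5, 1.5, 101)]
p0 = np.array(min(grid, key=lambda p: chi2(np.array(p))))

# Numerical Hessian of chi2 at the minimum
h = 1e-3
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        def shifted(si, sj):
            p = p0.copy(); p[i] += si * h; p[j] += sj * h
            return chi2(p)
        H[i, j] = (shifted(1, 1) - shifted(1, -1)
                   - shifted(-1, 1) + shifted(-1, -1)) / (4 * h * h)

# Delta(chi2) = 1 prescription: covariance C = 2 H^{-1};
# parameter errors are the square roots of its diagonal
cov = 2.0 * np.linalg.inv(H)
errors = np.sqrt(np.diag(cov))

# Linear error propagation to an observable, e.g. F at Q2 = 8
# (an extrapolation point): sigma_F^2 = grad(F)^T C grad(F)
eps = 1e-6
grad = np.array([(model(p0 + eps * e, 8.0) - model(p0 - eps * e, 8.0)) / (2 * eps)
                 for e in np.eye(2)])
sigma_F = np.sqrt(grad @ cov @ grad)
```

Note how every quantity descends from the curvature at the minimum: a bad global χ2 never enters, which is exactly the first drawback listed above.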
Caveats:
- The theoretical bias introduced by the specific functional form adopted to fit the experimental data is difficult to assess, but it may have a significant impact in applications.
- Simple, physics-inspired functions (based on theoretical constraints at small and large Q2, etc.) imply a large model-dependence for the corresponding predictions and error estimates.
- In particular, the behaviour in the extrapolation regions is strictly determined by the choice of the model function, and does not fully reflect the present degree of ignorance in those ranges.
- The Lagrange Multiplier method overcomes the linear and quadratic approximations, but still needs non-standard Δχ2 tolerance criteria and requires a full refitting each time uncertainties on a different observable are wanted.
Monte Carlo approach: perform global fits on an ensemble of Nrep artificial replicas of the original data, obtained by means of importance-sampled MC random generation.
- It does not rely on linear error propagation.
- It allows one to test the implications of non-Gaussian distributions of the experimental data and of the fitted model functions.
- It provides a MC sampling of the probability measure in the function space of FFs.
- It is computationally demanding (Nrep = 10 for central values, 100 for errors, 1000 for correlations).
- Expectation values of observables depending on FFs are functional integrals over function space, approximated by ensemble averages over the set of Nrep best-fit model replicas.
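The replica generation and the ensemble estimators can be sketched as follows; the data values are hypothetical and, for brevity, the replicas themselves stand in for the fitted models (in the real procedure each replica is fitted before the estimators are taken):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured form-factor values with uncorrelated errors
y = np.array([0.93, 0.62, 0.40, 0.20, 0.08])
sigma = np.array([0.020, 0.020, 0.015, 0.010, 0.008])

# Generate Nrep artificial replicas by Gaussian smearing of the central values
Nrep = 100
replicas = y + sigma * rng.standard_normal((Nrep, y.size))

# Ensemble averages over the replicas approximate the functional integrals:
central = replicas.mean(axis=0)              # expectation values
error = replicas.std(axis=0, ddof=1)         # 1-sigma uncertainties
corr = np.corrcoef(replicas, rowvar=False)   # point-to-point correlations
```

No linearization appears anywhere: errors and correlations are plain sample statistics, which is why the method also captures non-Gaussian behaviour.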
Neural networks: a class of particularly flexible non-linear statistical data-modeling tools, used to describe complex relationships and find patterns in data.
- They provide an unbiased parametrization of the data, i.e. independent of model assumptions and of the particular functional form adopted, by choosing a stable and sufficiently redundant architecture.
- The behaviour in extrapolation regions is NOT driven by the shape of the functions in the data regions.
- Many efficient supervised-learning training algorithms are available in the literature.
- The overlearning risk must be properly tamed (regularization).
1. Experimental data set (electric and magnetic p-n FFs)
2. MC generation of artificial data replicas
3. Neural network fits to the MC replicas, minimizing the error function w.r.t. weights and thresholds
4. Ensemble of best-fit model functions. Everything (central values, error bands, error propagation) descends from sample statistical estimators
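The four steps above can be sketched end to end. Here the data are hypothetical and a quadratic fit in log-space is a placeholder for the neural-network minimization of step 3; the point is that steps 2-4 reduce entirely to sample statistics over the ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: hypothetical data set (stand-in for the p/n FF world data)
q2 = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
y = np.array([0.93, 0.62, 0.40, 0.20, 0.08])
sigma = np.full_like(y, 0.02)

# Step 2: MC generation of artificial data replicas
Nrep = 100
replicas = y + sigma * rng.standard_normal((Nrep, y.size))

# Step 3: fit each replica; a quadratic in log-space stands in for the
# neural-network minimization of the error function
coeffs = [np.polyfit(q2, np.log(np.clip(r, 1e-3, None)), 2) for r in replicas]

# Step 4: ensemble of best-fit model functions; central values and error
# bands are plain sample statistical estimators over the ensemble
grid = np.linspace(0.1, 4.0, 50)
curves = np.array([np.exp(np.polyval(c, grid)) for c in coeffs])
central = curves.mean(axis=0)
band = curves.std(axis=0, ddof=1)
```

Propagating the uncertainty to any derived observable amounts to evaluating it on each of the `curves` and taking the same sample statistics, with no refit and no linearization.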
Spreading of the best-fit curves: Nrep = 200, M = 4, constraints at Q2 = 0 hard-wired in the model functions
1σ and 68% CL error bands: the main differences appear in the extrapolation regions, pointing to a NON-Gaussian distribution of the best-fit functions
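The distinction between the two bands is easy to reproduce on a synthetic ensemble; a skewed log-normal distribution (a hypothetical choice, mimicking fit behaviour in an extrapolation region) makes them visibly disagree:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble of best-fit values at one extrapolation point;
# a skewed log-normal mimics a non-Gaussian distribution of fits
values = rng.lognormal(mean=-1.0, sigma=0.8, size=1000)

# 1-sigma band: mean +/- one standard deviation
mu, sd = values.mean(), values.std(ddof=1)
band_1sigma = (mu - sd, mu + sd)

# 68% CL band: central interval between the 16th and 84th percentiles
band_68cl = tuple(np.percentile(values, [16.0, 84.0]))

# For a Gaussian ensemble the two bands coincide; a sizeable difference
# is precisely the signature of non-Gaussian best-fit functions
```

On real fits, comparing the two bands region by region is thus a cheap diagnostic of where the Gaussian approximation breaks down.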
Conclusions:
- The neural Monte Carlo approach provides a powerful tool to parametrize the form-factor world data in a statistically rigorous way.
- It ensures an unbiased global fit, independent of the adopted functional form, thanks to NN redundancy.
- Error estimation and propagation are simply based on the statistical features of the best-fit ensemble; no approximation is needed.
- It is possible to assess and include the effect of new data through Bayesian reweighting, without the need to perform a full re-fit.
- Many possible applications: useful for observables highly sensitive to FFs.
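A sketch of the reweighting step mentioned above, under two labeled assumptions: the per-replica χ2 values against the new data are randomly generated placeholders, and the weight formula is one common χ2-based likelihood choice (used e.g. in PDF reweighting), which may differ from the exact prescription of the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical: chi2 of each replica's prediction against n new data points
Nrep, n = 1000, 5
chi2_new = rng.chisquare(n, size=Nrep)

# One common choice of Bayesian weights:
#   w_k  proportional to  chi2_k^((n-1)/2) * exp(-chi2_k / 2)
logw = 0.5 * (n - 1) * np.log(chi2_new) - 0.5 * chi2_new
w = np.exp(logw - logw.max())
w *= Nrep / w.sum()              # normalize so that sum(w) = Nrep

# Reweighted expectation value of any observable computed replica by replica
obs = rng.normal(1.0, 0.1, size=Nrep)    # placeholder observable values
obs_rw = np.average(obs, weights=w)

# Effective number of replicas: if it drops far below Nrep, the new data
# are very constraining and a full re-fit becomes advisable
Neff = np.exp(np.sum(w * np.log(Nrep / w)) / Nrep)
```

Replicas that describe the new data poorly are simply down-weighted, so the updated ensemble is obtained at the cost of one χ2 evaluation per replica instead of a full re-fit.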