Uncertainty Analysis (I): Methods (PHYSTAT, Sept 2003)

The Hessian method

We continue to use $\chi^2_{\rm global}$ as the figure of merit, and explore the variation of $\chi^2_{\rm global}$ in the neighborhood of the minimum:

$$\Delta\chi^2_{\rm global} = \sum_{\mu\nu} H_{\mu\nu}\,(a_\mu - a_\mu^{(0)})(a_\nu - a_\nu^{(0)}) \qquad (\mu,\nu = 1 \ldots d)$$

[Figure: contours of $\chi^2$ in the $(a_1, a_2)$ plane; the standard fit sits at the minimum of $\chi^2$, and nearby points are also acceptable fits.]

The numerical computation of $H_{\mu\nu}$ by finite differences is nontrivial: the eigenvalues of $H_{\mu\nu}$ vary over many orders of magnitude, so the finite step size must be direction dependent, and the computation of $\chi^2_{\rm global}$ is subject to small numerical errors, leading to discontinuities as a function of $\{a_\mu\}$. We devised an iterative method (diagonalize, then rescale) which converges to a good complete set of eigenvectors of $H_{\mu\nu}$.
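The diagonalize-and-rescale iteration can be sketched in a few lines. This is a toy numpy illustration, not the actual CTEQ code: the quadratic $\chi^2$, the function names, and the step size are all invented for the example.

```python
import numpy as np

def finite_diff_hessian(f, a, U, h=1e-3):
    """Estimate the Hessian of f at a by central differences, stepping
    along the columns of U (the current trial eigendirections)."""
    d = len(a)
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = h * U[:, i], h * U[:, j]
            H[i, j] = (f(a + ei + ej) - f(a + ei - ej)
                       - f(a - ei + ej) + f(a - ei - ej)) / (4 * h * h)
    return H  # Hessian expressed in the U coordinates

# Toy chi^2 whose Hessian eigenvalues span six orders of magnitude
evals = np.array([1e4, 1.0, 1e-2])
def chi2(a):
    return evals @ (a * a)

a0 = np.zeros(3)
U = np.eye(3)                  # start from the raw parameter directions
for _ in range(3):
    H = finite_diff_hessian(chi2, a0, U)
    w, V = np.linalg.eigh(H)   # diagonalize ...
    U = U @ V / np.sqrt(w)     # ... and rescale so every eigenvalue -> 1
# After convergence, equal-length steps along the columns of U produce
# equal changes in chi^2, i.e. the step size is direction dependent.
```

In the real fit the step sizes would also have to be tuned against the numerical noise in $\chi^2_{\rm global}$, which this toy quadratic does not exhibit.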

Classical error formula for a variable $X(a)$:

$$(\Delta X)^2 = \Delta\chi^2 \sum_{\mu\nu} \frac{\partial X}{\partial a_\mu}\,(H^{-1})_{\mu\nu}\,\frac{\partial X}{\partial a_\nu}$$

We obtain better convergence using the eigenvectors of $H_{\mu\nu}$. $S_k^{(+)}$ and $S_k^{(-)}$ denote PDF sets displaced from the standard set, along the $\pm$ directions of the $k$-th eigenvector, by the distance $T = \sqrt{\Delta\chi^2}$ in parameter space (available in the LHAPDF format: $2d$ alternate sets). The "master formula":

$$\Delta X = \frac{1}{2}\sqrt{\sum_{k=1}^{d}\left[X(S_k^{(+)}) - X(S_k^{(-)})\right]^2}$$
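Once the $2d$ eigenvector sets have been evaluated, the master formula is a one-liner. The numbers and the function name below are invented for illustration; in practice the $X(S_k^{(\pm)})$ values would come from evaluating the observable with each error member of the PDF set.

```python
import numpy as np

def hessian_uncertainty(X_plus, X_minus):
    """Symmetric Hessian ('master formula') uncertainty on an observable X.

    X_plus[k], X_minus[k]: X evaluated with the PDF sets displaced along
    the +/- directions of the k-th eigenvector.
    """
    X_plus = np.asarray(X_plus, dtype=float)
    X_minus = np.asarray(X_minus, dtype=float)
    return 0.5 * np.sqrt(np.sum((X_plus - X_minus) ** 2))

# Toy example with d = 3 eigenvector directions
X_plus = [10.4, 10.0, 10.3]
X_minus = [9.6, 10.0, 9.7]
dX = hessian_uncertainty(X_plus, X_minus)  # 0.5*sqrt(0.64 + 0 + 0.36) = 0.5
```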

The Lagrange Multiplier Method

… for analyzing the uncertainty of PDF-dependent predictions. The fitting function for constrained fits is

$$F(a, \lambda) = \chi^2_{\rm global}(a) + \lambda\,X(a)$$

where $\lambda$ is the Lagrange multiplier. Minimization of $F$ [w.r.t. $\{a_\mu\}$, at fixed $\lambda$] gives the best fit for the value $X(a_{\rm min}, \lambda)$ of the variable $X$, with the constrained value of $X$ controlled by the parameter $\lambda$. Hence we obtain a curve of $\chi^2_{\rm global}$ versus $X$.
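A minimal sketch of the Lagrange multiplier scan, with an invented quadratic $\chi^2_{\rm global}$ and a linear observable so the constrained minimum is analytic (in the real analysis, each point on the curve is a full refit of the PDF parameters):

```python
import numpy as np

# Toy stand-ins for the real global fit (illustrative only):
# chi2_global(a) = |a - a_min|^2 and X(a) = g . a is linear, so the
# minimum of F(a) = chi2_global(a) + lam * X(a) at fixed lam is
# analytic: a*(lam) = a_min - (lam/2) g.
a_min = np.array([1.0, 1.0])
g = np.array([1.0, 0.5])

def chi2_global(a):
    return np.sum((a - a_min) ** 2)

def X(a):
    return g @ a

curve = []
for lam in np.linspace(-2.0, 2.0, 9):
    a_star = a_min - 0.5 * lam * g      # constrained best fit for this lam
    curve.append((X(a_star), chi2_global(a_star)))
# 'curve' traces chi2_global versus X; reading off where it crosses
# Delta chi2 = T^2 gives the uncertainty range Delta X.
```

The point of the scan is that $\Delta X$ is obtained without assuming the quadratic (Hessian) approximation to $\chi^2_{\rm global}$.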

The question of tolerance

$X$: any variable that depends on the PDFs; $X_0$: the prediction of the standard set; $\chi^2(X)$: the curve of constrained fits. For the specified tolerance ($\Delta\chi^2 = T^2$) there is a corresponding range of uncertainty, $\pm\Delta X$. What should we use for $T$?

Estimation of parameters in Gaussian error analysis would have $T = 1$ ($\Delta\chi^2 = 1$). We do not use this criterion.

Aside: the familiar ideal example

Consider $N$ measurements $\{\mu_i\}$ of a quantity $\mu$, with normal errors $\{\sigma_i\}$. Estimate $\mu$ by minimization of $\chi^2$,

$$\chi^2(\mu) = \sum_{i=1}^{N} \frac{(\mu_i - \mu)^2}{\sigma_i^2}\,.$$

The mean of $\mu_{\rm combined}$ is $\mu_{\rm true}$, and the SD is $\sigma_{\rm combined} = \sigma/\sqrt{N}$ (for simplicity suppose $\sigma_i = \sigma$). The proof of this theorem is straightforward. It does not apply to our problem because of systematic errors.
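The ideal-example theorem is easy to check with a toy Monte Carlo (all numbers invented; for equal errors the $\chi^2$ minimizer is just the sample mean):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, mu_true, trials = 100, 1.0, 5.0, 20000

# Each row is one repetition of the experiment: N normal measurements.
mu_i = mu_true + sigma * rng.standard_normal((trials, N))
estimates = mu_i.mean(axis=1)   # the chi^2 minimizer for each trial

print(round(estimates.mean(), 3))   # close to mu_true
print(round(estimates.std(), 3))    # close to sigma/sqrt(N) = 0.1
```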

Add a systematic error to the ideal model… ($s$: systematic shift, $\mu$: observable). Estimate $\mu$ by minimization of $\chi^2$,

$$\chi^2(\mu, s) = \sum_{i=1}^{N} \frac{(\mu_i + s\,\beta_i - \mu)^2}{\sigma_i^2} + s^2\,.$$

Then, letting $\partial\chi^2/\partial\mu = \partial\chi^2/\partial s = 0$, again the mean of $\mu_{\rm combined}$ is $\mu_{\rm true}$, and the SD is $\sigma_{\rm combined} = \sqrt{\sigma^2/N + \beta^2}$ (for simplicity suppose $\sigma_i = \sigma$ and $\beta_i = \beta$).
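A toy Monte Carlo with a common systematic shift shows the spread growing to $\sqrt{\sigma^2/N + \beta^2}$ (all numbers invented; with equal $\sigma_i$, minimizing the $\chi^2$ over both $\mu$ and $s$ again reduces to the sample mean):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, beta, mu_true, trials = 100, 1.0, 0.3, 5.0, 20000

s = rng.standard_normal((trials, 1))   # one coherent shift per data set
r = rng.standard_normal((trials, N))   # independent random errors
mu_i = mu_true + sigma * r + beta * s

estimates = mu_i.mean(axis=1)          # chi^2 minimizer (equal sigma_i)
expected_sd = np.sqrt(sigma**2 / N + beta**2)   # ~0.316, not 0.1

print(round(estimates.std(), 3), round(expected_sd, 3))
```

The systematic term dominates here: no matter how large $N$ becomes, the spread never falls below $\beta$.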

Still we do not apply the criterion $\Delta\chi^2 = 1$! Reasons:

We keep the normalization factors fixed as we vary the point in parameter space; the criterion $\Delta\chi^2 = 1$ requires that the systematic shifts be continually optimized versus $\{a_\mu\}$.
Systematic errors may be non-Gaussian.
The published "standard deviations" $\sigma_{ij}$ may be inaccurate.
We trust our physics judgement instead.

Some groups do use the criterion $\Delta\chi^2 = 1$ for PDF error analysis. Often they are using limited data sets, e.g., an experimental group using only its own data. Then the $\Delta\chi^2 = 1$ criterion may underestimate the uncertainty implied by systematic differences between experiments.

An interesting compendium of methods, by R. Thorne:

CTEQ6: $\Delta\chi^2 = 100$
ZEUS: $\Delta\chi^2 = 50$ (effective)
MRST01: $\Delta\chi^2 = 20$
H1: $\Delta\chi^2 = 1$
Alekhin: $\Delta\chi^2 = 1$
GKK: not using $\Delta\chi^2$

To judge the PDF uncertainty, we return to the individual experiments. Lumping all the data together in one variable, $\chi^2_{\rm global}$, is too constraining. Global analysis is a compromise: all data sets should be fit reasonably well, and that is what we check. As we vary $\{a_\mu\}$, does any experiment rule out the displacement from the standard set?

In testing the goodness of fit, we keep the normalization factors (i.e., the optimized luminosity shifts) fixed as we vary the shape parameters. The end result is, e.g., $\Delta\chi^2 \sim 100$ for $\sim 2000$ data points. This does not contradict the $\Delta\chi^2 = 1$ criterion used by other groups, because that refers to a different $\chi^2$, in which the normalization factors are continually optimized as the $\{a_\mu\}$ vary.