Statistical Methods: Parameter Estimation

Statistical Methods: Parameter Estimation
Terascale Statistics School, DESY, 6-10 March 2017
https://indico.desy.de/conferenceDisplay.py?confId=16328
Glen Cowan, Physics Department, Royal Holloway, University of London
g.cowan@rhul.ac.uk, www.pp.rhul.ac.uk/~cowan

Rough outline
I. Basic ideas of parameter estimation
II. The method of maximum likelihood (ML): variance of ML estimators; extended ML
III. Method of least squares (LS)
IV. Bayesian parameter estimation
V. Goodness of fit from the likelihood ratio
VI. Examples of frequentist and Bayesian approaches
VII. Unfolding

Parameter estimation
The parameters of a pdf are constants that characterize its shape, e.g. f(x; θ), where x is the random variable and θ is the parameter. Suppose we have a sample of observed values x1, ..., xn. We want to find some function of the data to estimate the parameter(s): θ̂(x1, ..., xn), the estimator written with a hat. Sometimes we say 'estimator' for the function of x1, ..., xn and 'estimate' for the value of the estimator with a particular data set.

Properties of estimators
If we were to repeat the entire measurement, the estimates from each would follow a pdf (figure on slide shows three sampling pdfs labelled 'best', 'large variance', 'biased'). We want small (or zero) bias (systematic error): the average of repeated measurements should tend to the true value. And we want a small variance (statistical error). Small bias and small variance are in general conflicting criteria.

An estimator for the mean (expectation value)
Parameter: μ = E[x]. Estimator: μ̂ = (1/n) Σi xi (the 'sample mean'). We find: E[μ̂] = μ (zero bias) and V[μ̂] = σ²/n.

An estimator for the variance
Parameter: σ² = V[x]. Estimator: s² = (1/(n−1)) Σi (xi − x̄)² (the 'sample variance'), where x̄ is the sample mean. We find: E[s²] = σ² (the factor of n−1 makes this so).

The likelihood function
Suppose the entire result of an experiment (set of measurements) is a collection of numbers x, and suppose the joint pdf for the data x is a function that depends on a set of parameters θ: f(x; θ). Now evaluate this function with the data obtained and regard it as a function of the parameter(s). This is the likelihood function: L(θ) = f(x; θ) (x constant).

The likelihood function for i.i.d.* data
(* i.i.d. = independent and identically distributed.) Consider n independent observations of x: x1, ..., xn, where x follows f(x; θ). The joint pdf for the whole data sample is f(x1, ..., xn; θ) = Πi f(xi; θ). In this case the likelihood function is L(θ) = Πi f(xi; θ) (xi constant).

Maximum likelihood estimators
If the hypothesized θ is close to the true value, then we expect a high probability to get data like that which we actually found. So we define the maximum likelihood (ML) estimator(s) to be the parameter value(s) for which the likelihood is maximum. ML estimators are not guaranteed to have any 'optimal' properties (but in practice they are very good).

ML example: parameter of exponential pdf
Consider the exponential pdf f(t; τ) = (1/τ) e^(−t/τ), and suppose we have i.i.d. data t1, ..., tn. The likelihood function is L(τ) = Πi (1/τ) e^(−ti/τ). The value of τ for which L(τ) is maximum also gives the maximum value of its logarithm (the log-likelihood function): ln L(τ) = Σi (ln(1/τ) − ti/τ).

ML example: parameter of exponential pdf (2)
Find its maximum by setting ∂ln L/∂τ = 0, which gives τ̂ = (1/n) Σi ti, the sample mean of the ti. Monte Carlo test: generate 50 values using τ = 1; the resulting ML estimate is shown on the slide.
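
As an illustration (not from the slides), here is a minimal Python sketch of this example; the random seed and the use of NumPy are my own choices:

import numpy as np

rng = np.random.default_rng(seed=1)
tau_true = 1.0
t = rng.exponential(tau_true, size=50)   # 50 simulated decay times

# Log-likelihood for the exponential pdf f(t; tau) = (1/tau) exp(-t/tau)
def log_L(tau, t):
    return np.sum(-np.log(tau) - t / tau)

# Setting d(ln L)/d(tau) = 0 gives the analytic ML estimate: the sample mean
tau_hat = t.mean()
print(f"tau_hat = {tau_hat:.3f}, ln L_max = {log_L(tau_hat, t):.3f}")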

ML example: parameter of exponential pdf (3)
For the exponential distribution one has E[t] = τ and V[t] = τ² for the mean and variance. For the ML estimator we therefore find E[τ̂] = τ (unbiased) and V[τ̂] = τ²/n.

Functions of ML estimators
Suppose we had written the exponential pdf as f(t; λ) = λ e^(−λt), i.e., we use λ = 1/τ. What is the ML estimator for λ? For a function λ(τ) (with unique inverse) of a parameter τ, it doesn't matter whether we express L as a function of λ or τ. The ML estimator of a function λ(τ) is simply λ̂ = λ(τ̂). So for the decay constant we have λ̂ = 1/τ̂. Caveat: λ̂ is biased, even though τ̂ is unbiased. One can show E[λ̂] = (n/(n−1)) λ (bias → 0 for n → ∞).

Example of ML: parameters of Gaussian pdf
Consider independent x1, ..., xn, with xi ~ Gaussian(μ, σ²). The log-likelihood function is ln L(μ, σ²) = −(n/2) ln(2πσ²) − Σi (xi − μ)²/(2σ²).

Example of ML: parameters of Gaussian pdf (2)
Set the derivatives with respect to μ and σ² to zero and solve: μ̂ = x̄ (the sample mean) and σ̂² = (1/n) Σi (xi − μ̂)². We already know that the estimator for μ is unbiased. But we find E[σ̂²] = ((n−1)/n) σ², so the ML estimator for σ² has a bias, but b → 0 for n → ∞. Recall, however, that s² (with the factor 1/(n−1)) is an unbiased estimator for σ².

Variance of estimators: Monte Carlo method
Having estimated our parameter we now need to report its 'statistical error', i.e., how widely distributed the estimates would be if we were to repeat the entire measurement many times. One way to do this is to simulate the entire experiment many times with a Monte Carlo program (using the ML estimate as the parameter value in the MC). For the exponential example, the sample standard deviation of the estimates gives the statistical error. Note the distribution of the estimates is roughly Gaussian; this is (almost) always true for ML in the large sample limit.
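
Continuing the earlier sketch, a few lines suffice to carry out this Monte Carlo variance estimate (the number of repetitions is an arbitrary choice):

# Repeat the entire "experiment" many times and look at the spread of estimates
n_exp, n_events = 1000, 50
est = rng.exponential(tau_hat, size=(n_exp, n_events)).mean(axis=1)
print(est.std(ddof=1))   # compare with the analytic tau_hat / sqrt(n_events)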

Variance of estimators from information inequality
The information inequality (RCF) sets a lower bound on the variance of any estimator (not only ML): V[θ̂] ≥ (1 + ∂b/∂θ)² / E[−∂²ln L/∂θ²], the Minimum Variance Bound (MVB). Often the bias b is small, and equality either holds exactly or is a good approximation (e.g. in the large data sample limit). Then V[θ̂] ≈ 1/E[−∂²ln L/∂θ²]. Estimate this using the 2nd derivative of ln L at its maximum: σ̂²θ̂ = (−∂²ln L/∂θ²)⁻¹ evaluated at θ = θ̂.

Variance of estimators: graphical method
Expand ln L(θ) about its maximum: the first term is ln Lmax, the second (linear) term is zero, and for the third term use the information inequality (assume equality): ln L(θ) ≈ ln Lmax − (θ − θ̂)²/(2σ̂²θ̂), i.e., ln L(θ̂ ± σ̂θ̂) ≈ ln Lmax − 1/2. So to get σ̂θ̂, change θ away from θ̂ until ln L decreases by 1/2.
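
A sketch of this Δln L = 1/2 recipe applied to the exponential example above (it assumes the bracketing intervals passed to brentq contain the crossing points, which holds for this sample):

from scipy.optimize import brentq

# Solve ln L(tau) = ln L_max - 1/2 on either side of the maximum
ln_L_max = log_L(tau_hat, t)
f = lambda tau: log_L(tau, t) - (ln_L_max - 0.5)
lo = brentq(f, 0.5 * tau_hat, tau_hat)
hi = brentq(f, tau_hat, 2.0 * tau_hat)
print(f"tau = {tau_hat:.3f} +{hi - tau_hat:.3f} -{tau_hat - lo:.3f}")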

Example of variance by graphical method
ML example with the exponential (plot on slide): the ln L curve is not quite parabolic because of the finite sample size (n = 50).

Information inequality for n parameters
Suppose we have estimated n parameters θ = (θ1, ..., θn). The (inverse) minimum variance bound is given by the Fisher information matrix, Iij = E[−∂²ln L/∂θi∂θj]. The information inequality then states that V − I⁻¹ is a positive semi-definite matrix, where Vij = cov[θ̂i, θ̂j]. Therefore V[θ̂i] ≥ (I⁻¹)ii. Often one uses I⁻¹ as an approximation for the covariance matrix, estimated using e.g. the matrix of 2nd derivatives of ln L at its maximum.

Example of ML with 2 parameters
Consider a scattering angle distribution with x = cos θ: f(x; α, β) ∝ 1 + αx + βx². If xmin < x < xmax, one always needs to normalize so that the pdf integrates to 1 over (xmin, xmax). Example: α = 0.5, β = 0.5, xmin = −0.95, xmax = 0.95; generate n = 2000 events with Monte Carlo.

Example of ML with 2 parameters: fit result
Finding the maximum of ln L(α, β) numerically (MINUIT) gives the estimates shown on the slide. N.B. there is no binning of the data for the fit, but one can compare to a histogram for goodness-of-fit (e.g. 'visual' or χ²). The (co)variances come from the MINUIT routine HESSE.
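
The slides use MINUIT; as a rough stand-in, here is a Python sketch with scipy.optimize, taking f(x; α, β) ∝ 1 + αx + βx² as above. The accept-reject generator and the use of the BFGS inverse Hessian as an approximate covariance (analogous in spirit to HESSE) are my own choices:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
xmin, xmax = -0.95, 0.95

def pdf_unnorm(x, alpha, beta):
    return 1.0 + alpha * x + beta * x**2

def sample(n, alpha, beta):
    # accept-reject; this envelope is valid for beta >= 0
    out = np.empty(0)
    fmax = 1.0 + abs(alpha) * 0.95 + beta * 0.95**2
    while out.size < n:
        xs = rng.uniform(xmin, xmax, n)
        us = rng.uniform(0.0, fmax, n)
        out = np.concatenate([out, xs[us < pdf_unnorm(xs, alpha, beta)]])
    return out[:n]

def neg_log_L(par, x):
    alpha, beta = par
    norm = (xmax - xmin) + alpha * (xmax**2 - xmin**2) / 2 + beta * (xmax**3 - xmin**3) / 3
    return -np.sum(np.log(pdf_unnorm(x, alpha, beta) / norm))

x = sample(2000, 0.5, 0.5)
res = minimize(neg_log_L, x0=[0.0, 0.0], args=(x,), method="BFGS")
print(res.x)          # estimates of (alpha, beta)
print(res.hess_inv)   # rough covariance estimate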

Two-parameter fit: MC study
Repeat the ML fit with 500 experiments, all with n = 2000 events: the estimates average to approximately the true values; the (co)variances are close to the previous estimates; the marginal pdfs of the estimators are approximately Gaussian.

The ln Lmax − 1/2 contour
For large n, ln L takes on a quadratic form near its maximum, and the contour ln L(α, β) = ln Lmax − 1/2 is an ellipse.

(Co)variances from ln L contour
The α, β plane for the first MC data set (plot on slide): the tangent lines to the contour give the standard deviations, and the angle of the ellipse φ is related to the correlation. Correlations between estimators result in an increase in their standard deviations (statistical errors).

Extended ML
Sometimes we regard n not as fixed, but as a Poisson r.v. with mean ν. The result of the experiment is then defined as: n, x1, ..., xn. The (extended) likelihood function is L(ν, θ) = (ν^n/n!) e^(−ν) Πi f(xi; θ). Suppose theory gives ν = ν(θ); then the log-likelihood is ln L(θ) = −ν(θ) + Σi ln[ν(θ) f(xi; θ)] + C, where C represents terms not depending on θ.

Extended ML (2)
Example: the expected number of events is given by the total cross section σ(θ), which is predicted as a function of the parameters of a theory, as is the distribution of a variable x. Extended ML uses more information → smaller errors for θ̂. This is important e.g. for anomalous couplings in e+e- → W+W-. If ν does not depend on θ but remains a free parameter, extended ML gives ν̂ = n.

Extended ML example
Consider two types of events (e.g., signal and background), each of which predicts a given pdf for the variable x: fs(x) and fb(x). We observe a mixture of the two event types, with signal fraction θ, expected total number ν, observed total number n. Let μs = θν and μb = (1−θ)ν; the goal is to estimate μs, μb. The log-likelihood becomes ln L(μs, μb) = −(μs + μb) + Σi ln[μs fs(xi) + μb fb(xi)] + C.

Extended ML example (2)
Monte Carlo example with a combination of an exponential and a Gaussian (plot on slide): maximize the log-likelihood in terms of μs and μb. Here the errors reflect the total Poisson fluctuation as well as that in the proportion of signal/background.

Extended ML example: an unphysical estimate
A downwards fluctuation of the data in the peak region can lead to even fewer events than what would be obtained from background alone. The estimate for μs is here pushed negative (unphysical). We can let this happen as long as the (total) pdf stays positive everywhere.

Unphysical estimators (2)
Here the unphysical estimator is unbiased and should nevertheless be reported, since the average of a large number of unbiased estimates converges to the true value (cf. PDG). Repeat the entire MC experiment many times, allowing unphysical estimates (distribution shown on slide).

ML with binned data
Often we put the data into a histogram with bin contents n = (n1, ..., nN). The hypothesis is E[ni] = νi(θ), where νi(θ) = ntot ∫(bin i) f(x; θ) dx. If we model the data as multinomial (ntot constant), then the log-likelihood function is ln L(θ) = Σi ni ln pi(θ) + C, with pi(θ) = νi(θ)/ntot.

ML example with binned data
The previous example with the exponential, now with the data in a histogram (plot on slide). In the limit of zero bin width this becomes the usual unbinned ML. If the ni are treated as Poisson, we get the extended log-likelihood: ln L = Σi (ni ln νi − νi) + C.
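
A minimal sketch of a binned Poisson likelihood fit for the exponential example, using the sample t from the earlier sketch (the bin edges are arbitrary choices; events beyond the last edge are ignored, and the total normalization is taken from the observed count):

from scipy.optimize import minimize_scalar

edges = np.linspace(0.0, 5.0, 11)          # 10 bins
n_i, _ = np.histogram(t, bins=edges)

def neg_log_L(tau):
    # nu_i = n_tot * P(bin i) for the exponential pdf, via its cdf
    cdf = 1.0 - np.exp(-edges / tau)
    nu = len(t) * np.diff(cdf)
    return -np.sum(n_i * np.log(nu) - nu)  # binned Poisson log-likelihood, constants dropped

res = minimize_scalar(neg_log_L, bounds=(0.1, 10.0), method="bounded")
print(res.x)   # binned ML estimate of tau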

Relationship between ML and Bayesian estimators
In Bayesian statistics, both θ and x are random variables. Recall the Bayesian method: use subjective probability for hypotheses (θ); before the experiment, knowledge is summarized by the prior pdf π(θ); use Bayes' theorem to update the prior in light of the data: p(θ|x) ∝ L(x|θ) π(θ), the posterior pdf (the conditional pdf for θ given x).

ML and Bayesian estimators (2)
Purist Bayesian: p(θ|x) contains all knowledge about θ. Pragmatist Bayesian: p(θ|x) could be a complicated function, so summarize it using an estimator; e.g. take the mode of p(θ|x) (could also use e.g. the expectation value). What do we use for π(θ)? There is no golden rule (it is subjective!); often one represents 'prior ignorance' by π(θ) = constant, in which case the mode of the posterior coincides with the ML estimate. But we could have used a different parameter, e.g., λ = 1/θ, and if the prior πθ(θ) is constant, then πλ(λ) = πθ(θ(λ)) |dθ/dλ| is not! 'Complete prior ignorance' is not well defined.

Priors from formal rules
Because of the difficulty of encoding a vague degree of belief in a prior, one often attempts to derive the prior from formal rules, e.g., to satisfy certain invariance principles or to provide maximum information gain for a certain set of measurements. These are often called 'objective priors' and form the basis of Objective Bayesian Statistics. The priors do not reflect a degree of belief (but might represent possible extreme cases). In a Subjective Bayesian analysis, using objective priors can be an important part of the sensitivity analysis.

Priors from formal rules (cont.)
In Objective Bayesian analysis, one can use the intervals in a frequentist way, i.e., regard Bayes' theorem as a recipe to produce an interval with certain coverage properties. Formal priors have not been widely used in HEP, but there is recent interest in this direction; for a review see L. Demortier, S. Jain and H. Prosper, Reference priors for high energy physics, arXiv:1002.1111 (Feb 2010).

Jeffreys' prior
According to Jeffreys' rule, take the prior π(θ) ∝ √det I(θ), where I(θ) is the Fisher information matrix. One can show that this leads to inference that is invariant under a transformation of parameters. For a Gaussian mean, the Jeffreys' prior is constant; for a Poisson mean μ it is proportional to 1/√μ.

'Invariance of inference' with Jeffreys' prior
Suppose we have a parameter θ, to which we assign a prior πθ(θ). An experiment gives data x, modeled by L(θ) = P(x|θ). Bayes' theorem then tells us the posterior for θ: P(θ|x) ∝ L(θ) πθ(θ). Now consider a function η(θ), and suppose we want the posterior P(η|x). This must follow from the usual rules of transformation of random variables: P(η|x) = P(θ(η)|x) |dθ/dη|.

'Invariance of inference' with Jeffreys' prior (2)
Alternatively, we could have just started with η as the parameter in our model and written down a prior pdf πη(η). Using it, we express the likelihood as L(η) = P(x|η) and write Bayes' theorem as P(η|x) ∝ L(η) πη(η). If the priors really express our degree of belief, then they must be related by the usual laws of probability, πη(η) = πθ(θ(η)) |dθ/dη|, and in this way the two approaches lead to the same result. But if we choose the priors according to 'formal rules', this is not guaranteed. For the Jeffreys' prior, however, it does work: using πθ(θ) ∝ √I(θ) and transforming to find P(η|x) leads to the same result as using πη(η) ∝ √I(η) directly with Bayes' theorem.

Jeffreys' prior for Poisson mean
Suppose n ~ Poisson(μ). To find the Jeffreys' prior for μ, compute the Fisher information: I(μ) = E[−∂²ln L/∂μ²] = 1/μ, so π(μ) ∝ 1/√μ. E.g. for μ = s + b, this means the prior π(s) ∝ 1/√(s + b), which depends on b. But this is not designed as a degree of belief about s.

The method of least squares
Suppose we measure N values y1, ..., yN, assumed to be independent Gaussian r.v.s with yi ~ Gauss(λ(xi; θ), σi²). Assume known values of the control variable x1, ..., xN and known variances σi². We want to estimate θ, i.e., fit the curve λ(x; θ) to the data points. The likelihood function is the product of the N Gaussians.

The method of least squares (2)
The log-likelihood function is therefore ln L(θ) = −(1/2) Σi (yi − λ(xi; θ))²/σi² + terms not depending on θ. So maximizing the likelihood is equivalent to minimizing χ²(θ) = Σi (yi − λ(xi; θ))²/σi². The minimum defines the least squares (LS) estimator θ̂. Very often the measurement errors are approximately Gaussian, and so ML and LS are essentially the same. Often one minimizes χ² numerically (e.g. with the program MINUIT).
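
A minimal sketch of such a χ² fit in Python, with curve_fit as a stand-in for MINUIT (the straight-line model and the data values are invented for illustration):

import numpy as np
from scipy.optimize import curve_fit

# illustrative data: known x_i, measured y_i with known sigma_i
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.7, 2.3, 3.4, 4.1, 4.6])
sigma = np.full(5, 0.3)

line = lambda x, th0, th1: th0 + th1 * x
# absolute_sigma=True: treat sigma_i as known errors, as in the chi^2 above
theta, V = curve_fit(line, x, y, sigma=sigma, absolute_sigma=True)
print(theta, np.sqrt(np.diag(V)))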

LS with correlated measurements
If the yi follow a multivariate Gaussian with covariance matrix V, then maximizing the likelihood is equivalent to minimizing χ²(θ) = (y − λ(θ))ᵀ V⁻¹ (y − λ(θ)).

Linear LS problem
The problem is linear if the fit function is linear in the parameters: λ(x; θ) = Σj θj aj(x), with known functions aj(x).

Linear LS problem (2)
In this case χ² is a quadratic function of the θj, so its minimum can be found in closed form by solving the (linear) normal equations; no iterative search is needed.

Linear LS problem (3)
The LS estimators are then linear functions of the measurements, and their covariance matrix follows directly from error propagation. (It equals the MVB if the yi are Gaussian.)

Example of least squares fit
Fit a polynomial of order p: λ(x; θ) = θ0 + θ1 x + ... + θp x^p (fits shown on slide).
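
Because a polynomial is linear in the θj, the LS solution can be written in closed form, θ̂ = (AᵀV⁻¹A)⁻¹ AᵀV⁻¹ y; a sketch using the illustrative data from the previous block:

# design matrix for a polynomial of order p: A_ij = x_i**j
p = 1
A = np.vander(x, p + 1, increasing=True)
W = np.diag(1.0 / sigma**2)            # weight matrix V^{-1} (uncorrelated case)
cov = np.linalg.inv(A.T @ W @ A)       # covariance of the LS estimators
theta_hat = cov @ A.T @ W @ y          # closed-form LS solution
print(theta_hat, np.sqrt(np.diag(cov)))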

Variance of LS estimators
In most cases of interest we obtain the variance in a manner similar to ML. E.g. for Gaussian data we have χ² = −2 ln L + constant, and so the estimated inverse covariance is (V̂⁻¹)ij = (1/2) ∂²χ²/∂θi∂θj at the minimum; or, for the graphical method, we take the values of θ where χ² = χ²min + 1.

Two-parameter LS fit
(Contour plot shown on slide.)

Goodness-of-fit with least squares
The value of the χ² at its minimum is a measure of the level of agreement between the data and the fitted curve. It can therefore be employed as a goodness-of-fit statistic to test the hypothesized functional form λ(x; θ). One can show that if the hypothesis is correct, then the statistic t = χ²min follows the chi-square pdf f(t; nd), where the number of degrees of freedom is nd = (number of data points) − (number of fitted parameters).

Goodness-of-fit with least squares (2)
The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²min ≈ nd the fit is 'good'. More generally, find the p-value: p = ∫ f(t; nd) dt from χ²min to ∞. This is the probability of obtaining a χ²min as high as the one we got, or higher, if the hypothesis is correct. The slide gives the p-values for the previous example with a 1st order polynomial (line) and with a 0th order polynomial (horizontal line).
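
The p-value can be computed from the chi-square survival function; continuing the linear LS sketch above:

from scipy.stats import chi2

chi2_min = float(np.sum((y - A @ theta_hat)**2 / sigma**2))
n_d = len(y) - (p + 1)                 # data points minus fitted parameters
print(f"chi2_min = {chi2_min:.2f}, p-value = {chi2.sf(chi2_min, n_d):.3f}")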

Goodness-of-fit vs. statistical errors

Goodness-of-fit vs. stat. errors (2)

LS with binned data

LS with binned data (2)

LS with binned data — normalization

LS normalization example

Goodness of fit from the likelihood ratio
Suppose we model data using a likelihood L(μ) that depends on N parameters μ = (μ1, ..., μN). Define the statistic tμ = −2 ln [L(μ)/L(μ̂)], where μ̂ is the ML estimator. The value of tμ reflects the agreement between the hypothesized μ and the data: good agreement means μ ≈ μ̂, so tμ is small; larger tμ means less compatibility between the data and μ. Quantify 'goodness of fit' with the p-value.

Likelihood ratio (2)
Now suppose the parameters μ = (μ1, ..., μN) can be determined by another set of parameters θ = (θ1, ..., θM), with M < N. E.g. in an LS fit, use μi = μ(xi; θ), where x is a control variable. Define the statistic qμ = −2 ln [L(μ(θ̂))/L(μ̂)]: the numerator corresponds to fitting the M parameters θ, the denominator to fitting all N parameters μ. Use qμ to test the hypothesized functional form of μ(x; θ). To get the p-value we need the pdf f(qμ|μ).

Wilks' Theorem (1938)
Wilks' theorem: if the hypothesized parameters μ = (μ1, ..., μN) are true, then in the large sample limit (and provided certain conditions are satisfied) tμ and qμ follow chi-square distributions. For the case with μ = (μ1, ..., μN) fixed in the numerator, tμ follows a chi-square distribution with N degrees of freedom; if M parameters are adjusted in the numerator, qμ follows a chi-square distribution with N − M degrees of freedom.

Goodness of fit with Gaussian data
Suppose the data are N independent Gaussian distributed values yi ~ Gauss(μi, σi²); we want to estimate the μi, with the σi known. Log-likelihood: ln L(μ) = −(1/2) Σi (yi − μi)²/σi² + C. ML estimators: μ̂i = yi.

Likelihood ratios for Gaussian data
The goodness-of-fit statistics become tμ = Σi (yi − μi)²/σi² and qμ = χ²min from the LS fit. So Wilks' theorem formally states the well-known property of the minimized chi-squared from an LS fit.

Likelihood ratio for Poisson data
Suppose the data are a set of values n = (n1, ..., nN), e.g., the numbers of events in a histogram with N bins. Assume ni ~ Poisson(νi), i = 1, ..., N, all independent. The goal is to estimate ν = (ν1, ..., νN). Log-likelihood: ln L(ν) = Σi (ni ln νi − νi) + C. ML estimators: ν̂i = ni.

Goodness of fit with Poisson data
The likelihood ratio statistic (all parameters fixed in the numerator) is tν = −2 ln [L(ν)/L(ν̂)] = 2 Σi [ni ln(ni/νi) + νi − ni]. Wilks' theorem: tν follows a chi-square distribution with N degrees of freedom.

Goodness of fit with Poisson data (2)
Or, with M fitted parameters in the numerator, qν = 2 Σi [ni ln(ni/νi(θ̂)) + νi(θ̂) − ni], which by Wilks' theorem follows a chi-square distribution with N − M degrees of freedom. Use these statistics to quantify goodness of fit (p-value), with the sampling distribution from Wilks' theorem (chi-square). This is exact in the large sample limit; in practice it is a good approximation for surprisingly small ni (~several).

Goodness of fit with multinomial data
Similar if the data n = (n1, ..., nN) follow a multinomial distribution, e.g., a histogram with N bins but with ntot fixed. Log-likelihood: ln L(p) = Σi ni ln pi + C. ML estimators: p̂i = ni/ntot. (Only N−1 are independent; one is ntot minus the sum of the rest.)

Goodness of fit with multinomial data (2)
The likelihood ratio statistics become tν = 2 Σi ni ln(ni/νi), with νi = ntot pi. There is one less degree of freedom than in the Poisson case, because effectively only N−1 parameters are fitted in the denominator.

Estimators and g.o.f. all at once
Evaluate the numerators with θ (not its estimator): qν(θ) = 2 Σi [ni ln(ni/νi(θ)) + νi(θ) − ni] (Poisson) or qν(θ) = 2 Σi ni ln(ni/νi(θ)) (multinomial). These are equal to the corresponding −2 ln L(θ) up to terms not depending on θ, so minimizing them gives the usual ML estimators for θ. The minimized value gives the statistic qν, so we get the goodness-of-fit for free.

Using LS to combine measurements

Combining correlated measurements with LS

Example: averaging two correlated measurements

Negative weights in LS average
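
A worked sketch of the LS average of two correlated measurements (the numbers are invented): the weights w = V⁻¹1/(1ᵀV⁻¹1) follow from minimizing the χ², and for ρ > σ1/σ2 the second weight goes negative, the situation referred to on the slides:

import numpy as np

y = np.array([1.0, 1.2])            # two measurements of the same quantity
s1, s2, rho = 0.1, 0.2, 0.8         # errors and correlation (assumed values)
V = np.array([[s1**2, rho * s1 * s2],
              [rho * s1 * s2, s2**2]])
Vinv = np.linalg.inv(V)
h = np.ones(2)
w = Vinv @ h / (h @ Vinv @ h)       # LS weights; w[1] < 0 here since rho > s1/s2
lam_hat = w @ y
sigma_lam = 1.0 / np.sqrt(h @ Vinv @ h)
print(w, lam_hat, sigma_lam)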

Example: fitting a straight line
Data: (xi, yi, σi), i = 1, ..., N. Model: the yi are independent and all follow yi ~ Gauss(μ(xi), σi²), with μ(x; θ) = θ0 + θ1 x; assume the xi and σi are known. Goal: estimate θ0; here suppose we don't care about θ1 (an example of a 'nuisance parameter').

Maximum likelihood fit with Gaussian data
In this example the yi are assumed independent, so the likelihood function is a product of Gaussians. Maximizing the likelihood is here equivalent to minimizing χ²(θ0, θ1) = Σi (yi − μ(xi; θ))²/σi², i.e., for Gaussian data, ML is the same as the method of least squares (LS).

θ1 known a priori
For Gaussian yi, ML is the same as LS. Minimize χ²(θ0) → estimator θ̂0. Come up one unit from χ²min to find σθ̂0.

ML (or LS) fit of θ0 and θ1
The standard deviations are given by the tangent lines to the χ²min + 1 contour. The correlation between θ̂0 and θ̂1 causes the errors to increase.

If we have a measurement t1 ~ Gauss(θ1, σt1)
The information on θ1 improves the accuracy of θ̂0.

The Bayesian approach
In Bayesian statistics we can associate a probability with a hypothesis, e.g., a parameter value θ. Interpret the probability of θ as a 'degree of belief' (subjective). We need to start with a 'prior pdf' π(θ), which reflects the degree of belief about θ before doing the experiment. Our experiment has data x → likelihood function L(x|θ). Bayes' theorem tells us how our beliefs should be updated in light of the data x: p(θ|x) ∝ L(x|θ) π(θ). The posterior pdf p(θ|x) contains all our knowledge about θ.

Bayesian method
We need to associate prior probabilities with θ0 and θ1, e.g., for θ0 a 'non-informative' prior, in any case much broader than the likelihood, and for θ1 a prior based on the previous measurement. Putting this into Bayes' theorem gives: posterior ∝ likelihood × prior.

Bayesian method (continued)
We then integrate (marginalize) p(θ0, θ1|x) to find p(θ0|x) = ∫ p(θ0, θ1|x) dθ1. In this example we can do the integral in closed form (rare); the result is shown on the slide. Usually one needs numerical methods (e.g. Markov Chain Monte Carlo) to do the integral.

Digression: marginalization with MCMC
Bayesian computations involve integrals like the one above, often of high dimensionality and impossible in closed form, and also impossible with 'normal' acceptance-rejection Monte Carlo. Markov Chain Monte Carlo (MCMC) has revolutionized Bayesian computation. MCMC (e.g., the Metropolis-Hastings algorithm) generates a correlated sequence of random numbers: it cannot be used for many applications, e.g., detector MC, and the effective statistical error is greater than if all the values were independent. Basic idea: sample the full multidimensional parameter space, but look only at the distribution of the parameters of interest.

MCMC basics: Metropolis-Hastings algorithm
Goal: given an n-dimensional pdf p(θ), generate a sequence of points θ1, θ2, θ3, ... Use a proposal density q(θ; θ0), e.g. a Gaussian centred about θ0. 1) Start at some point θ0. 2) Generate a proposed point θ ~ q(θ; θ0). 3) Form the Hastings test ratio α = min[1, p(θ) q(θ0; θ) / (p(θ0) q(θ; θ0))]. 4) Generate u ~ Uniform[0, 1]. 5) If u ≤ α, move to the proposed point; else stay, i.e., the old point is repeated. 6) Iterate.

Metropolis-Hastings (continued)
This rule produces a correlated sequence of points (note how each new point depends on the previous one). For our purposes this correlation is not fatal, but the statistical errors are larger than if the points were independent. The proposal density can be (almost) anything, but it should be chosen so as to minimize the autocorrelation. Often one takes the proposal density symmetric, q(θ; θ0) = q(θ0; θ); the test ratio is then (Metropolis-Hastings) α = min[1, p(θ)/p(θ0)]. I.e., if the proposed step is to a point of higher p(θ), take it; if not, take the step only with probability p(θ)/p(θ0). If the proposed step is rejected, hop in place.
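
A self-contained sketch of a random-walk Metropolis sampler (symmetric Gaussian proposal, so the Hastings ratio reduces to p(θ)/p(θ0)); the unit-Gaussian target here is a stand-in for a real posterior:

import numpy as np

rng = np.random.default_rng(3)

def metropolis(log_p, theta0, n_steps, step=0.5):
    # Random-walk Metropolis with a symmetric Gaussian proposal
    chain = [np.asarray(theta0, dtype=float)]
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal(size=len(chain[-1]))
        # symmetric proposal -> accept with probability min(1, p(prop)/p(current))
        if np.log(rng.uniform()) < log_p(prop) - log_p(chain[-1]):
            chain.append(prop)
        else:
            chain.append(chain[-1])   # rejected: repeat the old point
    return np.array(chain)

chain = metropolis(lambda th: -0.5 * np.sum(th**2), [0.0, 0.0], 10000)
print(chain.mean(axis=0), chain.std(axis=0))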

Example: posterior pdf from MCMC
Sample the posterior pdf from the previous example with MCMC (scatter plot and marginal distribution shown on slide). Summarize the pdf of the parameter of interest with, e.g., its mean, median, standard deviation, etc. Although the numerical values of the answer here are the same as in the frequentist case, the interpretation is different (sometimes unimportant?).

Bayesian method with alternative priors
Suppose we don't have a previous measurement of θ1 but rather, e.g., a theorist says it should be positive and not too much greater than 0.1 'or so', i.e., something like the prior shown on the slide. From this we obtain (numerically) the posterior pdf for θ0. This summarizes all knowledge about θ0. Look also at the result from a variety of priors.

A more general fit (symbolic)
Given measurements yi and (usually) covariances Vij. Predicted value: λ(xi; θ) + bi, with expectation value λ, control variable x, parameters θ, and bias b. Often one takes a simplified form for these ingredients (specific choices shown on slide). Minimize the resulting χ²(θ, b). This is equivalent to maximizing L(θ) ∝ e^(−χ²/2), i.e., least squares is the same as maximum likelihood using a Gaussian likelihood function.

Its Bayesian equivalent
Take Gaussian priors for the biases b and a constant prior for θ. Write the joint probability for all parameters and use Bayes' theorem. To get the desired probability for θ, integrate (marginalize) over b. → The posterior is Gaussian with mode the same as the least squares estimator, and σθ the same as from χ² = χ²min + 1. (Back where we started!)

The error on the error
Some systematic errors are well determined, e.g. the error from a finite Monte Carlo sample. Some are less obvious: do the analysis in n 'equally valid' ways and extract a systematic error from the 'spread' in the results. Some are educated guesses: guess the possible size of missing terms in a perturbation series; vary the renormalization scale. Can we incorporate the 'error on the error'? (cf. G. D'Agostini 1999; Dose & von der Linden 1999)

Motivating a non-Gaussian prior πb(b)
Suppose now the experiment is characterized by reported systematic errors σi,sys, where si is an (unreported) factor by which the systematic error is over/under-estimated. Assume the correct error for a Gaussian πb(b) would be si σi,sys, so πb(b) becomes a Gaussian of width si σi,sys, averaged over a pdf πs(si). The width of πs(si) reflects the 'error on the error'.

Error-on-error function πs(s)
A simple unimodal probability density for 0 < s < ∞ with adjustable mean and variance is the Gamma distribution, with mean = b/a and variance = b/a². We want e.g. an expectation value of 1 and an adjustable standard deviation σs. In fact, if we took πs(s) ~ inverse Gamma, we could integrate πb(b) in closed form (cf. D'Agostini, Dose, von der Linden). But the Gamma seems more natural, and the numerical treatment is not too painful.

Prior for bias πb(b) now has longer tails
Gaussian (σs = 0): P(|b| > 4σsys) = 6.3 × 10⁻⁵. With σs = 0.5: P(|b| > 4σsys) = 0.65%.

A simple test
Suppose a fit effectively averages four measurements. Take σsys = σstat = 0.1, uncorrelated. Case #1: the data appear compatible (plot of the measurements and the posterior p(μ|y) shown on slide). Usually we summarize the posterior p(μ|y) with its mode and standard deviation.

Simple test with inconsistent data
Case #2: there is an outlier (plot of the measurements and the posterior p(μ|y) shown on slide). → The Bayesian fit is less sensitive to the outlier. (See also D'Agostini 1999; Dose & von der Linden 1999.)

Goodness-of-fit vs. size of error
In an LS fit, the value of the minimized χ² does not affect the size of the error on the fitted parameter. In a Bayesian analysis with a non-Gaussian prior for the systematics, a high χ² corresponds to a larger error (and vice versa). Plot on slide: σμ from the posterior vs. the least squares χ² for 2000 repetitions of the experiment, σs = 0.5, here with no actual bias.

Formulation of the unfolding problem
Consider a random variable y; the goal is to determine its pdf f(y). If a parameterization f(y; θ) is known, find e.g. the ML estimators θ̂. If no parameterization is available, construct a histogram: the 'true' histogram μ = (μ1, ..., μM), with bin probabilities pj. New goal: construct estimators for the μj (or pj).

Migration
Effect of measurement errors: y = true value, x = observed value. Migration of entries between bins means f(y) is 'smeared out' and peaks are broadened. Discretize: the data are n = (n1, ..., nN), with expectation values given through the response matrix Rij = P(observed in bin i | true value in bin j). Note μ, ν are constants; n is subject to statistical fluctuations.

Efficiency, background
Sometimes an event goes undetected: the efficiency is εj = Σi Rij. Sometimes an observed event is due to a background process: βi = expected number of background events in the observed histogram. For now, assume the βi are known.

The basic ingredients
(Slide shows the 'true' histogram μ and the 'observed' histogram ν.)

Summary of ingredients
'True' histogram: μ = (μ1, ..., μM); probabilities: p = (p1, ..., pM); expectation values for the observed histogram: ν = (ν1, ..., νN); observed histogram: n = (n1, ..., nN); response matrix: R; efficiencies: ε = (ε1, ..., εM); expected background: β = (β1, ..., βN). These are related by νi = Σj Rij μj + βi.

Maximum likelihood (ML) estimator from inverting the response matrix
Assume R can be inverted (take N = M). Suppose the data are independent Poisson: ni ~ Poisson(νi). So the log-likelihood is ln L(μ) = Σi (ni ln νi(μ) − νi(μ)) + C, with ν = Rμ + β. The ML estimator is μ̂ = R⁻¹(n − β).
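
A toy numerical sketch of this inversion (response matrix, background and data invented for illustration; the covariance uses the Poisson estimate ν̂ ≈ n):

import numpy as np

R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
beta = np.array([5.0, 5.0, 5.0])         # expected background per observed bin
n = np.array([120.0, 95.0, 80.0])        # observed histogram

mu_hat = np.linalg.solve(R, n - beta)    # ML estimator mu_hat = R^{-1}(n - beta)
Rinv = np.linalg.inv(R)
U = Rinv @ np.diag(n) @ Rinv.T           # covariance U = R^{-1} diag(nu) (R^{-1})^T
print(mu_hat, np.sqrt(np.diag(U)))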

Example with ML solution
(Plot on slide: the unfolded estimates oscillate wildly, with huge error bars.) Catastrophic failure???

What went wrong?
Suppose μ really had a lot of fine structure. Applying R washes this out, but leaves a residual structure. But we don't have ν, only n. R⁻¹ 'thinks' the fluctuations in n are the residual of original fine structure, and puts this back into μ̂.

ML solution revisited
For Poisson data the ML estimators are unbiased: E[μ̂] = μ. Their covariance is Uij = Σk (R⁻¹)ik (R⁻¹)jk νk. (Recall these statistical errors were huge for the example shown.)

ML solution revisited (2)
The information inequality gives for unbiased estimators the minimum (co)variance bound; inverting the Fisher information matrix shows this bound is the same as the actual covariance above. I.e., the ML solution gives the smallest variance among all unbiased estimators, even though this variance was huge. In unfolding one must accept some bias in exchange for a (hopefully large) reduction in variance.

Correction factor method
Estimate μ̂i = Ci (ni − βi), with correction factors Ci = μi(MC)/νi(MC) taken from Monte Carlo. Often Ci ~ O(1), so the statistical errors are far smaller than for ML. But there is a nonzero bias unless MC = Nature.

Reality check on the statistical errors (example from Bob Cousins)
Suppose for some bin i we have an estimate of 100 events with a 10% stat. error. But according to the estimate, only 10 of the 100 events found in the bin belong there; the rest spilled in from outside. How can we have a 10% measurement if it is based on only 10 events that really carry information about the desired parameter?

Discussion of correction factor method
As with all unfolding methods, we get a reduction in statistical error in exchange for a bias; here the bias is difficult to quantify (as it is for many other unfolding methods). The bias should be small if the bin width is substantially larger than the resolution, so that there is not much bin migration. So if other uncertainties dominate in an analysis, correction factors may provide a quick and simple solution (a 'first look'). Still, the method has important flaws and it would be best to avoid it.

Regularized unfolding

Regularized unfolding (2)

Tikhonov regularization
Solution using Singular Value Decomposition (SVD).

SVD implementation of Tikhonov unfolding
A. Hoecker, V. Kartvelishvili, NIM A372 (1996) 469 (TSVDUnfold in ROOT). Minimizes the χ² plus τ times a measure of the curvature of the solution, with a numerical implementation using Singular Value Decomposition. Recommendations for setting the regularization parameter τ: transform the variables so the errors are ~Gauss(0,1); the number of transformed values significantly different from zero gives a prescription for τ; or base the choice of τ on the unfolding of test distributions.
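
This is not the TSVDUnfold implementation, but a minimal sketch of the underlying idea: minimize the χ² plus τ times a squared second-difference (curvature) penalty, which has a closed-form solution. The value τ = 1 is an arbitrary choice, and R, n, beta continue the toy example above:

def tikhonov_unfold(R, n, beta, sigma, tau):
    # Closed-form Tikhonov solution: minimize chi^2 + tau * ||L mu||^2
    M = R.shape[1]
    Vinv = np.diag(1.0 / sigma**2)
    L = np.zeros((M - 2, M))               # second-difference (curvature) matrix
    for i in range(M - 2):
        L[i, i:i+3] = [1.0, -2.0, 1.0]
    A = R.T @ Vinv @ R + tau * L.T @ L
    return np.linalg.solve(A, R.T @ Vinv @ (n - beta))

mu_reg = tikhonov_unfold(R, n, beta, sigma=np.sqrt(n), tau=1.0)
print(mu_reg)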

SVD example

Regularization function based on entropy
Use the entropy of the distribution as the regularization function, S(μ) = −Σi (μi/μtot) ln(μi/μtot); this can have a Bayesian motivation (an entropy-based prior).

Example of entropy-based unfolding

Estimating bias and variance
(See G. Cowan, Statistical Data Analysis, OUP (1998), Ch. 11, also for the following slides.)

Choosing the regularization parameter

Choosing the regularization parameter (2)

Some examples with Tikhonov regularization

Some examples with entropy regularization

Stat. and sys. errors of unfolded solution
In general the statistical covariance matrix of the unfolded estimators is not diagonal; one needs to report the full matrix Ustat. But unfolding necessarily introduces biases as well, corresponding to a systematic uncertainty (also correlated between bins); this is more difficult to estimate. Suppose, nevertheless, we manage to report both Ustat and Usys. To test a new theory depending on parameters θ, use e.g. a χ² built with the combined covariance Ustat + Usys. This mixes frequentist and Bayesian elements; the interpretation of the result can be problematic, especially if Usys itself has a large uncertainty.

Folding
Suppose a theory predicts f(y) → μ (which may depend on parameters θ). Given the response matrix R and the expected background β, this predicts the expected numbers of observed events: ν(θ) = Rμ(θ) + β. From this we can get the likelihood, e.g., for Poisson data, L(θ) = Πi Poisson(ni; νi(θ)). And using this we can fit parameters and/or test, e.g., using the likelihood ratio statistic.

Versus unfolding
If we have an unfolded spectrum and the full statistical and systematic covariance matrices, to compare this to a model μ we compute a likelihood based on the combined covariance U = Ustat + Usys. Complications arise because one needs an estimate of the systematic bias Usys. If we find a gain in sensitivity from the test using the unfolded distribution, e.g., through a decrease in statistical errors, then we are exploiting information inserted via the regularization (e.g., imposed smoothness).

ML solution again
From the standpoint of testing a theory or estimating its parameters, the ML solution, despite its catastrophically large errors, is equivalent to using the uncorrected data (same information content). There is no bias (at least from unfolding), so use the full statistical covariance of the ML solution. The estimators of θ should then have close to optimal properties: zero bias, minimum variance. The corresponding estimators from any unfolded solution cannot in general match this. The crucial point is to use the full covariance, not just the diagonal errors.

Summary/discussion
Unfolding can be a minefield and is not necessary if the goal is to compare a measured distribution with a model prediction. Even comparison of the uncorrected distribution with future theories is not a problem, as long as it is reported together with the expected background and the response matrix. In practice there are complications because these ingredients have uncertainties, and they must be reported as well. Unfolding is useful for getting an actual estimate of the distribution we think we've measured; one can e.g. compare ATLAS/CMS. A model test using the unfolded distribution should take account of the (correlated) bias introduced by the unfolding procedure.

Finally...
Estimation of parameters is usually the 'easy' part of statistics. Frequentist: maximize the likelihood. Bayesian: find the posterior pdf and summarize it (e.g. with the mode). Standard tools for quantifying the precision of estimates: variance of estimators, confidence intervals, ... But there are many potential stumbling blocks: the bias versus variance trade-off (how many parameters to fit?); goodness of fit (usually only for LS or binned data); the choice of prior for the Bayesian approach; unexpected behaviour in LS averages with correlations, ... We can practice this later with MINUIT.

Extra Slides

Error propagation
Suppose we measure a set of values x = (x1, ..., xn) and we have the covariances Vij = cov[xi, xj], which quantify the measurement errors in the xi. Now consider a function y(x). What is the variance of y? The hard way: use the joint pdf f(x) to find the pdf g(y), then from g(y) find V[y] = E[y²] − (E[y])². This is often not practical, and f(x) may not even be fully known.

Error propagation (2)
Suppose we had in practice only estimates given by the measured x. Expand y(x) to 1st order in a Taylor series about μ = E[x]: y(x) ≈ y(μ) + Σi (∂y/∂xi)|μ (xi − μi). To find V[y] we need E[y²] and E[y]. E[y] ≈ y(μ), since E[xi − μi] = 0.

Error propagation (3)
Putting the ingredients together gives the variance of y: σy² ≈ Σij (∂y/∂xi)(∂y/∂xj)|μ Vij.

Error propagation (4)
If the xi are uncorrelated, i.e., Vij = σi² δij, then this becomes σy² ≈ Σi (∂y/∂xi)² σi². Similarly, for a set of m functions y(x) = (y1(x), ..., ym(x)), Ukl = cov[yk, yl] ≈ Σij (∂yk/∂xi)(∂yl/∂xj) Vij, or in matrix notation U = A V Aᵀ, where Aij = ∂yi/∂xj.
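
A small numeric sketch of the matrix form U = A V Aᵀ (the example functions and covariance values are invented for illustration):

import numpy as np

def propagate(jacobian, x, V):
    A = jacobian(x)                       # A_ij = dy_i/dx_j evaluated at E[x]
    return A @ V @ A.T

# y = (x1*x2, x1 - x2): propagate the covariance V of x to U
x = np.array([2.0, 1.0])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])
jac = lambda x: np.array([[x[1], x[0]],   # d(x1*x2)/dx1, d(x1*x2)/dx2
                          [1.0, -1.0]])   # d(x1-x2)/dx1, d(x1-x2)/dx2
print(propagate(jac, x, V))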

Error propagation (5)
The 'error propagation' formulae tell us the covariances of a set of functions in terms of the covariances of the original variables. (Figure on slide: y(x) with σy and σx in the linear and nonlinear cases.) Limitations: the formulae are exact only if y(x) is linear; the approximation breaks down if the function is nonlinear over a region comparable in size to the σi. N.B. we have said nothing about the exact pdf of the xi; e.g., it doesn't have to be Gaussian.

Error propagation − special cases
For y = x1 + x2: σy² = σ1² + σ2² + 2cov[x1, x2]. For y = x1 x2: (σy/y)² = (σ1/x1)² + (σ2/x2)² + 2cov[x1, x2]/(x1 x2). That is, if the xi are uncorrelated: add errors quadratically for the sum (or difference), and add relative errors quadratically for the product (or ratio). But correlations can change this completely...

Error propagation − special cases (2)
Consider y = x1 − x2 with correlation coefficient ρ, so that σy² = σ1² + σ2² − 2ρσ1σ2. Now suppose ρ = 1. Then σy² = (σ1 − σ2)², i.e., for 100% correlation (and equal errors), the error in the difference → 0.

Approximate confidence intervals/regions from the likelihood function
Suppose we test parameter value(s) θ = (θ1, ..., θn) using the ratio λ(θ) = L(θ)/L(θ̂). Lower λ(θ) means worse agreement between the data and the hypothesized θ. Equivalently, we usually define tθ = −2 ln λ(θ), so that higher tθ means worse agreement between θ and the data. For the p-value of θ we therefore need the pdf f(tθ|θ).

Confidence region from Wilks' theorem
Wilks' theorem says that (in the large-sample limit and provided certain conditions hold) tθ follows a chi-square distribution with number of degrees of freedom = number of components in θ = (θ1, ..., θn). Assuming this holds, the p-value is pθ = 1 − F(tθ; n), with F the chi-square cdf. To find the boundary of the confidence region, set pθ = α and solve for tθ: tθ = F⁻¹(1 − α; n) ≡ Qα.

Confidence region from Wilks' theorem (cont.)
I.e., the boundary of the confidence region in θ space is where tθ = Qα, or equivalently ln L(θ) = ln Lmax − Qα/2. For example, for 1 − α = 68.3% and n = 1 parameter, Qα = 1, and so the 68.3% confidence level interval is determined by ln L(θ) = ln Lmax − 1/2. This is the same as the recipe for finding the estimator's standard deviation, i.e., θ̂ ± σθ̂ is an approximate 68.3% CL confidence interval.

Example of interval from ln L(θ)
For n = 1 parameter, CL = 0.683, Qα = 1. Our exponential example, now with only n = 5 events (plot on slide). We can report the ML estimate with an approximate confidence interval from ln Lmax − 1/2 as an 'asymmetric error bar'.

Multiparameter case
For an increasing number of parameters, the CL = 1 − α decreases for a confidence region determined by a given Qα (table on slide).

Multiparameter case (cont.)
Equivalently, Qα increases with n for a given CL = 1 − α (table on slide).

Ingredients for a test / interval
Note that these confidence intervals can be found using only the likelihood function evaluated with the observed data. This is because the statistic tθ approaches a well-defined distribution, independent of the distribution of the data, in the large sample limit. For finite samples, however, the resulting intervals are approximate. In general, to carry out a test we need to know the distribution of the test statistic t(x), and this means we need the full model P(x|θ).

Covariance, correlation, etc.
For a pair of random variables x and y, the covariance is cov[x, y] = E[xy] − E[x]E[y] and the correlation is ρ = cov[x, y]/(σx σy). One only talks about the correlation of two quantities to which one assigns probability (i.e., random variables). So in frequentist statistics, estimators for parameters can be correlated, but not the parameters themselves. In Bayesian statistics it does make sense to say that two parameters are correlated, e.g., through their joint posterior pdf.

Example of "correlated systematics"
Suppose we carry out two independent measurements of the length of an object using two rulers with different thermal expansion properties. Suppose the temperature is not known exactly but must be measured (the lengths are measured together, so T is the same for both). The expectation value of the measured length Li (i = 1, 2) is related to the true length λ at a reference temperature τ0 by a linear thermal-expansion term in (T − τ0) with coefficient αi (formula on slide), and the (uncorrected) length measurements are modeled as Gaussian.

Two rulers (2)
The model thus treats the measurements T, L1, L2 as uncorrelated, with standard deviations σT, σ1, σ2, respectively. Alternatively, we could correct each raw measurement for the expected expansion, which introduces a correlation between y1, y2 and T. But the likelihood function (a multivariate Gaussian in T, y1, y2) is the same function of τ and λ as before. In the language of y1, y2: the temperature gives a correlated systematic. In the language of L1, L2: the temperature gives a "coherent" systematic.

Two rulers (3)
The outcome has some surprises: the estimate of λ does not lie between y1 and y2, and the statistical error on the new estimate of the temperature is substantially smaller than the initial σT. These are features, not bugs, that result from our model assumptions.

Two rulers (4)
We may re-examine the assumptions of our model and conclude that, say, the parameters α1, α2 and τ0 were also uncertain. We may treat their nominal values as measurements (this needs a model; Gaussian?) and regard α1, α2 and τ0 as nuisance parameters.

Two rulers (5)
The outcome changes; some surprises may be "reduced".