Likelihoods: 1) Introduction 2) Do's & Don'ts
Louis Lyons and Lorenzo Moneta, Imperial College & Oxford, and CERN
CERN Academic Training Course, November 2016

Topics (mainly parameter determination):
What it is
How it works: Resonance
Uncertainty estimates
Detailed example: Lifetime
Several parameters
Extended maximum likelihood
Do's and Don'ts with L

Simple example: Angular distribution

Start with the pdf (probability density function) for the data, given the parameter values:

  y = N (1 + β cos²θ)
  y_i = N (1 + β cos²θ_i) = probability density of observing θ_i, given β
  L(β) = Π_i y_i = probability density of observing the data set {θ_i}, given β

The best estimate of β is the value that maximises L(β); values of β for which L is very small are ruled out. The precision of the estimate for β comes from the width of the L distribution.

CRUCIAL to normalise y:  N = 1/{2(1 + β/3)}

(Information about the parameter β comes from the shape of the experimental distribution of cosθ.)

[Plots: cosθ distributions and the corresponding L(β), for β = -1 and for large β.]
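As an illustration (not from the slides), here is a minimal Python sketch of this unbinned maximum-likelihood fit, using toy data drawn by accept-reject; beta_true and beta_hat are illustrative names:

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)

    def pdf(c, beta):
        # normalised over cos(theta) in [-1, 1]: integral of (1 + beta c^2) is 2(1 + beta/3)
        return (1.0 + beta * c**2) / (2.0 * (1.0 + beta / 3.0))

    # toy data: accept-reject sampling from the true distribution
    beta_true = 0.5
    c = rng.uniform(-1, 1, 200000)
    keep = rng.uniform(0, pdf(1.0, beta_true), c.size) < pdf(c, beta_true)
    data = c[keep][:10000]

    def neg_lnL(beta):
        return -np.sum(np.log(pdf(data, beta)))

    res = minimize_scalar(neg_lnL, bounds=(-0.9, 5.0), method="bounded")
    print("beta_hat =", res.x)   # close to beta_true; dropping the normalisation N would bias it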

How it works: Resonance

First write down the pdf (a Breit-Wigner):

  y(m) ∝ (Γ/2) / [(m - M0)² + (Γ/2)²]

[Plots: y vs m, varying M0 and varying Γ.]

N.B. Can make use of individual events.

It is conventional to consider l = ln L = Σ ln y_i. This is better numerically, and has some nice properties.

Maximum likelihood uncertainty

The range of likely values of the parameter μ comes from the width of the L or l distribution. If L(μ) is Gaussian, the following definitions of σ are equivalent:
1) RMS of L(μ)
2) 1/√(-d²lnL/dμ²)  (mnemonic)
3) ln L(μ0 ± σ) = ln L(μ0) - 1/2

If L(μ) is non-Gaussian, these are no longer the same.

"Procedure 3) above still gives an interval that contains the true value of the parameter μ with 68% probability" - return to 'Coverage' later.

Uncertainties from 3) are usually asymmetric, and asymmetric uncertainties are messy. So choose the parameter sensibly, e.g. 1/p rather than p; τ or λ.
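A minimal sketch of procedure 3), the Δln L = 0.5 scan, for the exponential lifetime fit of the next slide (toy data; the true τ = 2 is illustrative):

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(2)
    t = rng.exponential(scale=2.0, size=100)   # toy decay times

    def lnL(tau):
        # ln L for t_i ~ (1/tau) exp(-t/tau)
        return -t.size * np.log(tau) - t.sum() / tau

    tau_hat = t.mean()                          # analytic MLE for the exponential
    target = lnL(tau_hat) - 0.5

    # solve lnL(tau) = lnL_max - 0.5 on each side; asymmetric in general
    lo = brentq(lambda x: lnL(x) - target, 1e-3, tau_hat)
    hi = brentq(lambda x: lnL(x) - target, tau_hat, 50.0)
    print(f"tau = {tau_hat:.3f} +{hi - tau_hat:.3f} -{tau_hat - lo:.3f}")

The two half-widths differ, as the slide warns; the advice to choose the parameter sensibly (τ vs λ) is about reducing this asymmetry.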

Lifetime determination

[Worked example: fit of an exponential decay-time distribution for the lifetime τ.]

Realistic analyses are more complicated than this.

Several parameters: PROFILE L

  L_prof(β) = L(β, ν_best(β))

where β = parameter of interest and ν = nuisance parameter(s). The uncertainty on β comes from the decrease in ln(L_prof) by 0.5.
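A minimal sketch of profiling, using a toy counting experiment (n ~ Poisson(s + b), with a sideband count constraining the nuisance b; all numbers are illustrative):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    n, m, r = 25, 40, 4.0      # main count, sideband count, sideband/signal-region ratio

    def neg_lnL(s, b):
        return -(poisson.logpmf(n, s + b) + poisson.logpmf(m, r * b))

    def neg_lnL_prof(s):
        # profile: minimise over the nuisance parameter b at each value of s
        return minimize_scalar(lambda b: neg_lnL(s, b),
                               bounds=(1e-6, 100.0), method="bounded").fun

    s_grid = np.linspace(0.01, 40.0, 400)
    prof = np.array([neg_lnL_prof(s) for s in s_grid])
    inside = s_grid[prof < prof.min() + 0.5]   # Delta ln(L_prof) = 0.5 interval
    print(f"s_hat = {s_grid[prof.argmin()]:.2f}, "
          f"68% interval ~ [{inside.min():.2f}, {inside.max():.2f}]")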

Profile L: a method for dealing with systematics

[Contour plot of ln L(s, ν), with s = physics parameter, ν = nuisance parameter.]

The statistical uncertainty on s comes from the width of L with ν fixed at ν_best. The total uncertainty on s comes from the width of L(s, ν_prof(s)) = L_prof, where ν_prof(s) is the best value of ν at that s; ν_prof(s) as a function of s lies on the green line of the contour plot.

Total uncertainty ≥ statistical uncertainty.

[Plot: -2 ln L vs s; blue curves = different values of ν.]

Extended Maximum Likelihood

Maximum Likelihood uses the shape → parameters.
Extended Maximum Likelihood uses the shape and the normalisation, i.e. EML uses the probability of observing (a) a sample of N events, and (b) the given data distribution in x, ... → shape parameters and normalisation.

Example 1: Angular distribution. Observe N = 100 events in total: 96 forward (F), 4 backward (B).

  Rate estimates    ML         EML
  Total             -          100 ± 10
  Forward           96 ± 2     96 ± 10
  Backward          4 ± 2      4 ± 2
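A quick check of where the quoted uncertainties come from (numbers from the slide; the sketch assumes the standard results that the ML fraction carries a binomial error while EML makes F and B independent Poissons):

    import numpy as np

    N, F = 100, 96                       # total and forward counts
    B = N - F
    p = F / N

    # ML (shape only): binomial error on the fraction, total N fixed
    sig = N * np.sqrt(p * (1 - p) / N)
    print(f"ML : F = {F} +- {sig:.0f}, B = {B} +- {sig:.0f}")               # 96 +- 2, 4 +- 2

    # EML (shape + normalisation): F and B become independent Poissons
    print(f"EML: F = {F} +- {np.sqrt(F):.0f}, B = {B} +- {np.sqrt(B):.0f}")  # 96 +- 10, 4 +- 2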

ML and EML

ML uses a fixed (data) normalisation; EML has the normalisation as a parameter.

Example 2: Cosmic ray experiment. See 96 protons and 4 heavy nuclei.
  ML estimate: (96 ± 2)% protons, (4 ± 2)% heavy nuclei
  EML estimate: 96 ± 10 protons, 4 ± 2 heavy nuclei

Example 3: Decay of a resonance. Use ML for branching ratios; use EML for partial decay rates.

Relation between Poisson and Binomial

N people in a lecture, m males and f females (N = m + f). Assume these are representative of basic rates: ν people, νp males, ν(1-p) females.

Probability of observing N people:            P_Poisson = e^(-ν) ν^N / N!
Probability of the given male/female split:   P_Binom = [N!/(m! f!)] p^m (1-p)^f

Probability of N people, m male and f female:

  P_Poisson × P_Binom = [e^(-νp) (νp)^m / m!] × [e^(-ν(1-p)) (ν(1-p))^f / f!]
                      = Poisson probability for males × Poisson probability for females

The same structure applies to:

  People            Male       Female
  Patients          Cured      Remain ill
  Decaying nuclei   Forwards   Backwards
  Cosmic rays       Protons    Other particles
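The identity is easy to verify numerically; a small sketch with illustrative numbers:

    from scipy.stats import binom, poisson

    nu, p, N, m = 8.0, 0.6, 10, 7
    f = N - m

    lhs = poisson.pmf(N, nu) * binom.pmf(m, N, p)
    rhs = poisson.pmf(m, nu * p) * poisson.pmf(f, nu * (1 - p))
    print(lhs, rhs)   # equal to floating-point precision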

Comparison of the three methods:

                        Moments                        Max Like                            Least squares
  Easy?                 Yes, if...                     Normalisation, maximisation messy   Minimisation
  Efficient?            Not very                       Usually best                        Sometimes = Max Like
  Input                 Separate events                Separate events                     Histogram
  Goodness of fit       Messy                          No (unbinned)                       Easy
  Constraints           No                             Yes                                 -
  N dimensions          Easy, if...                    Norm, max messier                   -
  Weighted events       -                              Errors difficult                    -
  Bgd subtraction       -                              Troublesome                         -
  Inverse cov. matrix   Observed spread, or analytic   -∂²l/∂p_i∂p_j                       (1/2) ∂²S/∂p_i∂p_j
  Main feature          -                              Best                                Goodness of Fit

DO'S AND DON'TS WITH L

  NORMALISATION FOR LIKELIHOOD
  JUST QUOTE UPPER LIMIT
  Δ(ln L) = 0.5 RULE
  L_max AND GOODNESS OF FIT
  BAYESIAN SMEARING OF L
  USE CORRECT L (PUNZI EFFECT)

Δln L = -1/2 rule

If L(μ) is Gaussian, the following definitions of σ are equivalent:
1) RMS of L(μ)
2) 1/√(-d²lnL/dμ²)
3) ln L(μ0 ± σ) = ln L(μ0) - 1/2

If L(μ) is non-Gaussian, these are no longer the same.

"Procedure 3) above still gives an interval that contains the true value of the parameter μ with 68% probability"

Heinrich: CDF note 6438 (see CDF Statistics Committee web page)
Barlow: Phystat05

COVERAGE

How often does the quoted range for a parameter include the parameter's true value?

[Plot: coverage vs μ_true, with reference lines at 100% and at the nominal value.]

N.B. Coverage is a property of the METHOD, not of a particular experimental result. Coverage can vary with μ.

Study the coverage of different methods for a Poisson parameter μ, from the observation of the number of events n.

If the coverage probability P equals the nominal value for all μ: "correct coverage".
P < nominal for some μ: "undercoverage" (this is serious!).
P > nominal for some μ: "overcoverage" → conservative; loss of rejection power.

Some Bayesians regard coverage as irrelevant.

Coverage: L approach (not a Neyman construction)

  P(n, μ) = e^(-μ) μ^n / n!

Accept μ if -2 ln λ ≤ 1, where λ = P(n, μ) / P(n, μ_best).

This UNDERCOVERS. (Joel Heinrich, CDF note 6438)
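A sketch of the kind of coverage scan Heinrich describes, assuming the -2 ln λ ≤ 1 interval for a Poisson mean (toy Monte Carlo; the undercoverage varies with μ):

    import numpy as np
    from scipy.special import gammaln

    def ln_pois(n, mu):
        # ln[e^(-mu) mu^n / n!], with the convention 0 ln 0 = 0
        return -mu + np.where(n > 0, n * np.log(np.maximum(mu, 1e-300)), 0.0) - gammaln(n + 1)

    def coverage(mu_true, n_toys=100_000, seed=3):
        n = np.random.default_rng(seed).poisson(mu_true, n_toys)
        # interval contains mu_true iff -2 ln[P(n, mu_true)/P(n, mu_best = n)] <= 1
        m2lnlam = -2.0 * (ln_pois(n, mu_true) - ln_pois(n, n))
        return (m2lnlam <= 1.0).mean()

    for mu in [0.5, 1.0, 2.5, 5.0, 10.0]:
        print(f"mu = {mu:4.1f}: coverage = {coverage(mu):.3f}")   # typically below 0.683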

Neyman central intervals NEVER undercover (they are conservative at both ends).

Feldman-Cousins unified intervals are also a Neyman construction, so they NEVER undercover.

Unbinned L_max and goodness of fit?

Find the parameters by maximising L. So larger L is better than smaller L. So L_max gives goodness of fit??

[Plot: Monte Carlo frequency distribution of unbinned L_max, with candidate values marked 'Bad', 'Good?', 'Great?'.]

Not necessarily. Contrast:

  pdf(data; params):  params fixed, data vary
  L(data; params):    data fixed, params vary

e.g. p(t; λ) = λ exp(-λt):
  as a function of the data t, the maximum is at t = 0;
  as a function of the parameter λ, the maximum is at λ = 1/t.

[Plots: p vs t, and L vs λ.]

Example 1: Fit an exponential to times t1, t2, t3, ... [Joel Heinrich, CDF 5639]

  L = Π_i λ exp(-λ t_i)
  ln L_max = -N (1 + ln t_av)

i.e. ln L_max depends only on the AVERAGE t, but is INDEPENDENT OF THE DISTRIBUTION OF t (except for……..). (The average t is a sufficient statistic.)

The variation of L_max in Monte Carlo is due to variations in the samples' average t, but NOT TO BETTER OR WORSE FIT.

[Plot: two pdfs vs t with the same average t, and hence the same L_max.]
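This is easy to demonstrate directly; a small sketch (toy data) comparing a genuine exponential sample with a pathological one that merely shares its mean:

    import numpy as np

    rng = np.random.default_rng(4)

    def lnL_max(t):
        # maximised exponential log-likelihood: -N(1 + ln t_av)
        return -t.size * (1.0 + np.log(t.mean()))

    good = rng.exponential(1.0, 100)       # genuinely exponential times
    bad = np.full(100, good.mean())        # absurd data: all t identical, same mean
    print(lnL_max(good), lnL_max(bad))     # identical, despite the hopeless 'fit'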

Example 2: Angular distribution in cosθ

The pdf (and hence the likelihood) depends only on cos²θ_i, so it is insensitive to the sign of cosθ_i. The data can therefore be in very bad agreement with the expected distribution, e.g. all data with cosθ < 0, and L_max does not know about it. This is an example of a general principle.

L_max and goodness of fit? Conclusion:

L has sensible properties with respect to the parameters, NOT with respect to the data.

L_max falling within the Monte Carlo peak is NECESSARY, not SUFFICIENT. ('Necessary' doesn't mean that you have to do it!)

Binned data and goodness of fit using the L-ratio

With Poisson probabilities P_{n_i}(μ_i) for observing n_i events in bin i:

  L = Π_i P_{n_i}(μ_i)        L_best = Π_i P_{n_i}(n_i)

  ln[L-ratio] = ln[L / L_best]  →  -χ²/2 for large μ_i

i.e. the L-ratio gives a goodness of fit. L_best is independent of the parameters of the fit, and so the same parameter values result from L or from the L-ratio.

Baker and Cousins, NIM A221 (1984) 437
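A direct implementation of the Baker-Cousins statistic for Poisson bins, -2 ln(L/L_best) = 2 Σ [μ_i - n_i + n_i ln(n_i/μ_i)]; the observed/expected numbers below are illustrative:

    import numpy as np

    def baker_cousins_chi2(n, mu):
        n, mu = np.asarray(n, float), np.asarray(mu, float)
        term = mu - n
        pos = n > 0                        # n ln(n/mu) -> 0 as n -> 0
        term[pos] += n[pos] * np.log(n[pos] / mu[pos])
        return 2.0 * term.sum()

    observed = np.array([12, 25, 31, 18, 7, 0])
    expected = np.array([10.5, 24.0, 33.0, 19.5, 8.0, 1.8])
    print(baker_cousins_chi2(observed, expected))   # ~ chi2-distributed for large mu_i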

Conclusions

How the likelihood works, and how to estimate uncertainties
Likelihood or extended likelihood
Several parameters
Likelihood does not guarantee coverage
Unbinned L_max and goodness of fit

Getting L wrong: the Punzi effect

Giovanni Punzi @ PHYSTAT2003, "Comments on L fits with variable resolution".

Problem: separating two close signals when the resolution σ varies event by event and is different for the two signals, e.g.:
1) Signal 1 ~ 1 + cos²θ, Signal 2 isotropic, and different parts of the detector give different σ.
2) Fitting M (or τ): different numbers of tracks → different σ_M (or σ_τ).

Events are characterised by x_i and σ_i. A events are centred on x = 0; B events are centred on x = 1.

  L(f)_wrong = Π_i [ f G(x_i, 0, σ_i) + (1-f) G(x_i, 1, σ_i) ]
  L(f)_right = Π_i [ f p(x_i, σ_i | A) + (1-f) p(x_i, σ_i | B) ]

Using p(S, T) = p(S|T) p(T):

  p(x_i, σ_i | A) = p(x_i | σ_i, A) p(σ_i | A) = G(x_i, 0, σ_i) p(σ_i | A)

So:

  L(f)_right = Π_i [ f G(x_i, 0, σ_i) p(σ_i | A) + (1-f) G(x_i, 1, σ_i) p(σ_i | B) ]

If p(σ|A) = p(σ|B), then L_right = L_wrong, but NOT otherwise.
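A toy reproduction of the bias (all choices illustrative): here p(σ|A) and p(σ|B) are taken as delta functions at σ = 1 and σ = 2, like the third row of Punzi's table below:

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    f_true, n_ev = 1/3, 5000

    is_A = rng.uniform(size=n_ev) < f_true
    sigma = np.where(is_A, 1.0, 2.0)          # p(sigma|A) != p(sigma|B)
    x = rng.normal(np.where(is_A, 0.0, 1.0), sigma)

    def nll_wrong(f):
        # omits p(sigma|class): only the Gaussians in x enter
        return -np.sum(np.log(f * norm.pdf(x, 0, sigma) + (1 - f) * norm.pdf(x, 1, sigma)))

    def nll_right(f):
        # includes p(sigma|class), here 1 or 0 for each class
        pA = norm.pdf(x, 0, sigma) * (sigma == 1.0)
        pB = norm.pdf(x, 1, sigma) * (sigma == 2.0)
        return -np.sum(np.log(f * pA + (1 - f) * pB))

    for nll in (nll_wrong, nll_right):
        f_hat = minimize_scalar(nll, bounds=(1e-3, 1 - 1e-3), method="bounded").x
        print(nll.__name__, f"f_hat = {f_hat:.3f}")   # wrong ~0.64, right ~1/3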

Punzi's Monte Carlo for A: G(x, 0, σ_A), B: G(x, 1, σ_B), with true f_A = 1/3:

                        L_wrong               L_right
  σ_A      σ_B          f_A        σ_f        f_A        σ_f
  1.0      1.0          0.336(3)   0.08       same       same
  1.0      1.1          0.374(4)   0.08       0.333(0)   0
  1.0      2.0          0.645(6)   0.12       0.333(0)   0
  1 → 2    1.5 → 3      0.514(7)   0.14       0.335(2)   0.03
  1.0      1 → 2        0.482(9)   0.09       0.333(0)   0

1) L_wrong is OK for p(σ|A) = p(σ|B), but otherwise BIASSED.
2) L_right is unbiassed, but L_wrong can be biassed (enormously)!
3) L_right gives a smaller σ_f than L_wrong.

Explanation of the Punzi bias

[Plots: actual distribution vs fitting function, for A events with σ = 1 and for B events with σ = 2.]

[N_A/N_B is variable in the fit, but the same value is used for A and B events.]

The fit gives an upward bias for N_A/N_B because (i) that is much better for the A events, and (ii) it does not hurt too much for the B events.

Another scenario for the Punzi problem: PID

[Plot: mass M vs TOF for species A (π) and B (K).]

Originally: positions of the peaks are constant, σ_i variable, with (σ_i)_A = (σ_i)_B.
K-peak → π-peak at large momentum: σ_i ~ constant, p_K = p_π.

COMMON FEATURE: separation / error = constant.

Where else??

MORAL: Beware of event-by-event variables whose pdf's do not appear in L.

Avoiding the Punzi bias

BASIC RULE: Write the pdf for ALL observables, in terms of the parameters.

Either include p(σ|A) and p(σ|B) in the fit (but then, for example, particle identification may be determined more by the momentum distribution than by the PID variable), OR fit each range of σ_i separately and add the results: Σ_j (N_A)_j → (N_A)_total, and similarly for B.

The incorrect method using L_wrong uses a weighted average of the (f_A)_j, assumed to be independent of j.

Talk by Catastini at PHYSTAT05.