1 Do's and Don'ts with Likelihoods Louis Lyons Oxford CDF Manchester, 16th November 2005


3 DO'S AND DON'TS WITH L
- NORMALISATION FOR LIKELIHOOD
- JUST QUOTE UPPER LIMIT
- Δ(ln L) = 0.5 RULE
- L_max AND GOODNESS OF FIT
- BAYESIAN SMEARING OF L
- USE CORRECT L (PUNZI EFFECT)

4 NORMALISATION FOR LIKELIHOOD
e.g. lifetime fit to data t_1, t_2, ..., t_n with parameter τ
∫ P(t; τ) dt MUST be independent of τ
P(t; τ) = e^{-t/τ} is INCORRECT: it is missing the normalisation factor 1/τ
Correct: P(t; τ) = (1/τ) e^{-t/τ}
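A quick numeric check of this point (a sketch; the exponential pdf and the sample τ values are just for illustration):

```python
import numpy as np
from scipy.integrate import quad

# The normalised pdf (1/tau) e^(-t/tau) integrates to 1 for every tau;
# the incorrect e^(-t/tau) integrates to tau, i.e. depends on the parameter.
for tau in (0.5, 1.0, 3.0):
    ok, _ = quad(lambda t: (1 / tau) * np.exp(-t / tau), 0, np.inf)
    bad, _ = quad(lambda t: np.exp(-t / tau), 0, np.inf)
    print(f"tau={tau}: normalised integral={ok:.4f}, unnormalised={bad:.4f}")
```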

5 2) QUOTING UPPER LIMIT
"We observed no significant signal, and our 90% conf upper limit is ....."
Need to specify the method, e.g.:
- L
- Chi-squared (data or theory error)
- Frequentist (central or upper limit)
- Feldman-Cousins
- Bayes with prior = const
"Show your L":
1) Not always practical
2) Not sufficient for frequentist methods

6 90% C.L. Upper Limits
[Plot: illustration of 90% C.L. upper limits, with observable x and limit x_0]

7 Δ ln L = -1/2 rule
If L(μ) is Gaussian, the following definitions of σ are equivalent:
1) RMS of L(μ)
2) σ = [-d² ln L/dμ²]^{-1/2}
3) ln L(μ ± σ) = ln L(μ_0) - 1/2
If L(μ) is non-Gaussian, these are no longer the same.
"Procedure 3) above still gives an interval that contains the true value of parameter μ with 68% probability"
Heinrich: CDF note 6438 (see the CDF Statistics Committee web page)
Barlow: PHYSTAT05
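A sketch of procedure 3) on a toy exponential lifetime fit (the sample size, the true τ = 2 and the root-finding brackets are arbitrary choices):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
t = rng.exponential(scale=2.0, size=100)      # toy lifetimes, true tau = 2

def lnL(tau):
    # ln L for the normalised exponential pdf (1/tau) e^(-t/tau)
    return np.sum(-np.log(tau) - t / tau)

tau_hat = t.mean()                             # MLE of tau
target = lnL(tau_hat) - 0.5                    # Delta ln L = -1/2
lo = brentq(lambda x: lnL(x) - target, 1e-3, tau_hat)
hi = brentq(lambda x: lnL(x) - target, tau_hat, 100.0)
print(f"tau_hat = {tau_hat:.3f}, 68% interval [{lo:.3f}, {hi:.3f}]")
```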

8 COVERAGE
How often does the quoted range for a parameter include the parameter's true value?
N.B. Coverage is a property of the METHOD, not of a particular experimental result.
Coverage can vary with μ.
Study the coverage of different methods for a Poisson parameter μ, from the observation of the number of events n.
Hope for: coverage C(μ) = nominal value for all μ.

9 COVERAGE
If C(μ) = nominal value for all μ: "correct coverage"
C(μ) < nominal for some μ: "undercoverage" (this is serious!)
C(μ) > nominal for some μ: "overcoverage", i.e. conservative, with loss of rejection power

10 Coverage: L approach (not frequentist)
P(n; μ) = e^{-μ} μ^n/n!
Likelihood-ratio interval: -2 ln λ < 1, with λ = P(n; μ)/P(n; μ_best)
UNDERCOVERS (Joel Heinrich, CDF note 6438)
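A sketch of this coverage computation for the nominal 68.3% interval (μ_best = n for a single Poisson observation; summing the pmf gives the coverage exactly, and the undercoverage shows up in the output):

```python
import numpy as np
from scipy.stats import poisson

def covers(n, mu):
    # Does the interval {mu': -2 ln lambda(mu') < 1} contain the true mu?
    # lambda = P(n; mu)/P(n; mu_best), mu_best = n; for n = 0, -2 ln lambda = 2 mu
    m2lnlam = 2 * mu if n == 0 else 2 * (mu - n + n * np.log(n / mu))
    return m2lnlam < 1.0

def coverage(mu, n_max=300):
    ns = np.arange(n_max)
    return sum(p for n, p in zip(ns, poisson.pmf(ns, mu)) if covers(n, mu))

for mu in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"mu = {mu:5.1f}   coverage = {coverage(mu):.3f}   (nominal 0.683)")
```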

11 Frequentist central intervals NEVER undercover
(Conservative at both ends)

12 Feldman-Cousins unified intervals
Frequentist, so they NEVER undercover

13 Probability ordering
Frequentist, so it NEVER undercovers

14 χ² = (n - μ)²/μ, with Δχ² = 1: what % coverage?
NOT frequentist: coverage varies from 0% to 100%

15 L_max and Goodness of Fit?
Find params by maximising L.
So is a larger L better than a smaller L?
So does L_max give Goodness of Fit??
[Plot: Monte Carlo distribution (frequency) of the unbinned L_max, with regions labelled "Great?", "Good?" and "Bad"]

16 Not necessarily:
pdf(data; params): params fixed, data vary
L(params; data): data fixed, params vary
e.g. exponential: the pdf p(t; λ) has its maximum at t = 0, while L(λ; t) has its maximum at λ = 1/t.
[Plots: p vs t at fixed λ; L vs λ at fixed t]

17 Example 1: Fit an exponential to times t_1, t_2, t_3, ... [Joel Heinrich, CDF 5639]
L = Π (1/τ) e^{-t_i/τ}
ln L_max = -N(1 + ln t_av)
i.e. it depends only on the AVERAGE t, but is INDEPENDENT OF THE DISTRIBUTION OF t (except for ........)
(The average t is a sufficient statistic.)
Variation of L_max in Monte Carlo is due to variations in the samples' average t, but NOT to a better or worse fit.
[Plot: two pdfs vs t with the same average t, and hence the same L_max]
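A check of this on toy data (a sketch; the second, badly fitting sample is constructed to have the same average t):

```python
import numpy as np

rng = np.random.default_rng(0)
t_good = rng.exponential(scale=2.0, size=1000)   # genuinely exponential sample
t_bad = np.full(1000, t_good.mean())             # same average t, terrible "fit"

def lnL_max(t):
    tau_hat = t.mean()                           # MLE of tau for an exponential
    return np.sum(-np.log(tau_hat) - t / tau_hat)

for label, t in (("good fit", t_good), ("bad fit ", t_bad)):
    N, t_av = len(t), t.mean()
    # lnL_max equals -N(1 + ln t_av) and cannot tell the two samples apart
    print(label, lnL_max(t), -N * (1 + np.log(t_av)))
```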

18 Example 2: Fit an angular distribution whose pdf (and likelihood) depends only on cos²θ_i, e.g. pdf ∝ 1 + β cos²θ
Insensitive to the sign of cos θ_i.
So the data can be in very bad agreement with the expected distribution, e.g. all the data having cos θ < 0, and L_max does not know about it.
An example of a general principle.

19 Example 3: Fit to a Gaussian with variable μ, fixed σ
ln L_max = N(-0.5 ln 2π - ln σ) - 0.5 Σ(x_i - x_av)²/σ²
         = constant - term proportional to variance(x)
i.e. L_max depends only on variance(x), which is not relevant for fitting μ (μ_est = x_av).
A smaller-than-expected variance(x) results in a larger L_max.
[Plots vs x: a worse fit can give a larger L_max; a better fit can give a lower L_max]
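The same point numerically (a sketch; the sample sizes and the too-narrow width 0.3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0                                   # fixed sigma assumed by the fit
x_good = rng.normal(0.0, sigma, 500)          # data that matches the model
x_bad = rng.normal(0.0, 0.3, 500)             # too-narrow data: a bad fit

def lnL_max(x, sigma):
    mu_hat = x.mean()                         # MLE of mu
    return np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                  - 0.5 * ((x - mu_hat) / sigma) ** 2)

print("good fit:", lnL_max(x_good, sigma))    # smaller lnL_max
print("bad fit :", lnL_max(x_bad, sigma))     # larger, despite worse agreement
```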

20 L_max and Goodness of Fit?
Conclusion: L has sensible properties with respect to parameters, NOT with respect to data.
L_max being within the Monte Carlo peak is NECESSARY, not SUFFICIENT.
('Necessary' doesn't mean that you have to do it!)

21 Binned data and Goodness of Fit using the L-ratio
L = Π_i P(n_i; μ_i), compared with L_best = Π_i P(n_i; μ_i = n_i)
ln[L-ratio] = ln[L/L_best] → -0.5 χ² for large μ_i, i.e. a Goodness of Fit measure
L_best is independent of the parameters of the fit, and so the same parameter values result from L or the L-ratio.
Baker and Cousins, NIM A221 (1984) 437
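A sketch of the resulting statistic, -2 ln(L/L_best) = 2 Σ_i [μ_i - n_i + n_i ln(n_i/μ_i)] for Poisson bins (the predictions μ_i are invented; with no fitted parameters the reference is χ² with one degree of freedom per bin):

```python
import numpy as np
from scipy.stats import chi2

def baker_cousins(n, mu):
    # -2 ln(L/L_best) = 2 sum[ mu_i - n_i + n_i ln(n_i/mu_i) ]; 0 ln 0 -> 0
    n, mu = np.asarray(n, float), np.asarray(mu, float)
    term = np.zeros_like(mu)
    mask = n > 0
    term[mask] = n[mask] * np.log(n[mask] / mu[mask])
    return 2.0 * np.sum(mu - n + term)

rng = np.random.default_rng(3)
mu = np.array([20, 25, 30, 28, 22, 18, 15, 12, 10, 8], dtype=float)  # prediction
n = rng.poisson(mu)                                                  # toy data
t = baker_cousins(n, mu)
print(f"-2 ln(L-ratio) = {t:.2f}, p-value = {chi2.sf(t, len(mu)):.3f}")
```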

22 L and pdf
Example 1: Poisson
pdf = probability (density) function for observing n, given μ: P(n; μ) = e^{-μ} μ^n/n!
From this, construct L as L(μ; n) = e^{-μ} μ^n/n!
i.e. use the same function of μ and n, but: for the pdf, μ is fixed; for L, n is fixed.
[Plots: P vs n at fixed μ; L vs μ at fixed n]
N.B. P(n; μ) exists only at integer non-negative n; L(μ; n) exists as a continuous function of non-negative μ.
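The same formula read both ways (μ = 3 and n_obs = 5 are arbitrary illustrative values):

```python
from scipy.stats import poisson

mu = 3.0
# pdf: mu fixed, n runs over non-negative integers, and the pmf sums to 1
print(sum(poisson.pmf(n, mu) for n in range(200)))

n_obs = 5
# likelihood: n fixed, mu varies continuously; same function, no sum rule
for mu_test in (2.0, 5.0, 8.0):
    print(mu_test, poisson.pmf(n_obs, mu_test))
```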

23 Example 2: Lifetime distribution
pdf: p(t; λ) = λ e^{-λt}
So L(λ; t) = λ e^{-λt} (for a single observed t)
Here both t and λ are continuous.
The pdf has its maximum at t = 0; L has its maximum at λ = 1/t.
The functional forms of p(t) and L(λ) are different.
[Plots: p vs t at fixed λ; L vs λ at fixed t]

24 Example 3: Gaussian
pdf(x; μ) = exp[-(x - μ)²/(2σ²)]/(√(2π) σ)
N.B. In this case, the pdf and L have the same functional form, symmetric in x and μ.
So if you consider just Gaussians, you can become confused between the pdf and L.
So Examples 1 and 2 are useful.

25 Transformation properties of pdf and L
Lifetime example: dn/dt = λ e^{-λt}
Change observable from t to y = √t.
So (a) the pdf changes: dn/dy = (dn/dt)|dt/dy| = 2yλ e^{-λy²}
BUT (b) ∫_{t_1}^{t_2} (dn/dt) dt = ∫_{√t_1}^{√t_2} (dn/dy) dy
i.e. corresponding integrals of the pdf are INVARIANT.

26 Now for Likelihood
When the parameter changes from λ to τ = 1/λ:
(a') L does not change: dn/dt = (1/τ) exp(-t/τ), and so L(τ; t) = L(λ = 1/τ; t), because identical numbers occur in the evaluations of the two L's.
BUT (b') ∫ L(λ; t) dλ ≠ ∫ L(τ; t) dτ over corresponding ranges of λ and τ
So it is NOT meaningful to integrate L. (However, .........)
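A numeric sketch of (b) and (b') together (t_obs = 1.5, λ = 2 and the integration ranges are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# (b) pdf integrals ARE invariant under the change of observable t -> y = sqrt(t)
lam = 2.0
p_t = lambda t: lam * np.exp(-lam * t)
p_y = lambda y: 2 * y * lam * np.exp(-lam * y * y)
print(quad(p_t, 0.5, 2.0)[0], quad(p_y, np.sqrt(0.5), np.sqrt(2.0))[0])  # equal

# (b') L integrals are NOT invariant under the change of parameter lambda -> tau = 1/lambda
t_obs = 1.5                                            # one observed lifetime
L_lam = lambda lam_: lam_ * np.exp(-lam_ * t_obs)      # L(lambda; t)
L_tau = lambda tau: (1 / tau) * np.exp(-t_obs / tau)   # L(tau; t)
print(L_lam(2.0), L_tau(0.5))                          # (a') same value pointwise
print(quad(L_lam, 1.0, 2.0)[0])                        # lambda in [1, 2]: ~0.16
print(quad(L_tau, 0.5, 1.0)[0])                        # tau in [0.5, 1]:  ~0.09
```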

27 CONCLUSION:
Integrating L is NOT a recognised statistical procedure.
[It is metric dependent: a τ range could agree with τ_pred while the λ range is inconsistent with 1/τ_pred.]
BUT 1) you could regard it as a "black box"
2) make it respectable via L → Bayes' posterior: Posterior(λ) ~ L(λ) * Prior(λ) [and Prior(λ) can be constant]

28 Summary:
                     | pdf(t; λ)                                      | L(λ; t)
Value of function    | Changes when the observable is transformed     | INVARIANT wrt transformation of the parameter
Integral of function | INVARIANT wrt transformation of the observable | Changes when the parameter is transformed
Conclusion           | Max prob density not very sensible             | Integrating L not very sensible


30 Getting L wrong: the Punzi effect
Giovanni Punzi, PHYSTAT2003: "Comments on L fits with variable resolution"
Separating two close signals, when the resolution σ varies event by event and is different for the 2 signals, e.g.:
1) Signal 1 ∝ 1 + cos²θ, Signal 2 isotropic, and different parts of the detector give different σ
2) M (or τ): different numbers of tracks give different σ_M (or σ_τ)

31 Events characterised by x_i and σ_i
A events centred on x = 0; B events centred on x = 1
L(f)_wrong = Π [f·G(x_i, 0, σ_i) + (1 - f)·G(x_i, 1, σ_i)]
L(f)_right = Π [f·p(x_i, σ_i; A) + (1 - f)·p(x_i, σ_i; B)]
p(S, T) = p(S|T)·p(T), so p(x_i, σ_i | A) = p(x_i | σ_i, A)·p(σ_i | A) = G(x_i, 0, σ_i)·p(σ_i | A)
So L(f)_right = Π [f·G(x_i, 0, σ_i)·p(σ_i|A) + (1 - f)·G(x_i, 1, σ_i)·p(σ_i|B)]
If p(σ|A) = p(σ|B), L_right = L_wrong, but NOT otherwise.
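A toy Monte Carlo in the spirit of this comparison (a sketch: the sample sizes, peak positions and the two σ values are invented, and p(σ|A), p(σ|B) are taken as delta functions at σ = 1 and σ = 2):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
nA, nB = 1000, 2000                                   # true f_A = 1/3
x = np.concatenate([rng.normal(0, 1, nA), rng.normal(1, 2, nB)])
sig = np.concatenate([np.full(nA, 1.0), np.full(nB, 2.0)])

def nll_wrong(f):
    # omits p(sigma|A) and p(sigma|B), which differ here
    L = f * norm.pdf(x, 0, sig) + (1 - f) * norm.pdf(x, 1, sig)
    return -np.sum(np.log(L))

def nll_right(f):
    # includes p(sigma_i|A), p(sigma_i|B): delta functions at 1 and 2
    pA = (sig == 1.0).astype(float)
    pB = (sig == 2.0).astype(float)
    L = f * norm.pdf(x, 0, sig) * pA + (1 - f) * norm.pdf(x, 1, sig) * pB
    return -np.sum(np.log(L))

for nll in (nll_wrong, nll_right):
    res = minimize_scalar(nll, bounds=(1e-4, 1 - 1e-4), method="bounded")
    # the wrong fit is pulled away from 1/3; the right fit recovers it
    print(nll.__name__, f"f_A = {res.x:.3f}")
```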

32 Giovanni's Monte Carlo, for A: G(x, 0, σ_A), B: G(x, 1, σ_B), f_A = 1/3
[Table: fitted f_A ± σ_f from L_wrong and from L_right, for several (σ_A, σ_B) combinations; the numerical entries are not legible]
1) L_wrong is OK for p(σ_A) = p(σ_B), but is otherwise BIASSED
2) L_right is unbiassed, but L_wrong is biassed (enormously)!
3) L_right gives a smaller σ_f than L_wrong

33 Explanation of the Punzi bias
σ_A = 1, σ_B = 2
[Plots vs x: the actual distributions of A events (σ = 1) and B events (σ = 2), compared with the fitting function]
[N_A/N_B is variable in the fit, but is the same for A and B events]
The fit gives an upward bias for N_A/N_B because (i) that is much better for the A events; and (ii) it does not hurt too much for the B events.

34 Another scenario for the Punzi problem: PID
[Plot: TOF mass M_TOF, with π and K peaks for signals A and B]
Originally: positions of the peaks = constant; σ_i variable, with (σ_i)_A = (σ_i)_B
Here: the K-peak → the π-peak at large momentum; σ_i ~ constant, p_K = p_π
COMMON FEATURE: separation/error = constant
Where else??
MORAL: Beware of event-by-event variables whose pdf's do not appear in L.

35 Avoiding the Punzi Bias
1) Include p(σ|A) and p(σ|B) in the fit
(But then, for example, particle identification may be determined more by the momentum distribution than by the PID.)
OR
2) Fit each range of σ_i separately, and add: Σ_i (N_A)_i → (N_A)_total, and similarly for B.
The incorrect method using L_wrong uses a weighted average of the (f_A)_j, assumed to be independent of j.
Talk by Catastini at PHYSTAT05