Presentation transcript:

Slide 1: Statistics for HEP, Lecture 2: Discovery and Limits
International School Cargèse, August 2012. Glen Cowan, Physics Department, Royal Holloway, University of London.

Slide 2: Outline
Lecture 1: Introduction and basic formalism (probability, statistical tests, parameter estimation).
Lecture 2: Discovery and limits: asymptotic formulae for discovery and limits; exclusion without experimental sensitivity (CLs, etc.); Bayesian limits; the Look-Elsewhere Effect.

Slide 3: Recap on statistical tests
Consider a test of a parameter μ, e.g., one proportional to the signal rate. The result of the measurement is a set of numbers x. To define a test of μ, specify a critical region wμ such that the probability to find x ∈ wμ is not greater than α (the size or significance level): P(x ∈ wμ | μ) ≤ α. (Must use an inequality since x may be discrete, so there may not exist a subset of the data space with probability of exactly α.) Equivalently, define a p-value pμ such that the critical region corresponds to pμ ≤ α. Often one uses, e.g., α = 0.05. If we observe x ∈ wμ, reject μ.

Slide 4: Large-sample approximations for prototype analysis using profile likelihood ratio
Search for a signal in a region of phase space; the result is a histogram of some variable x giving numbers n = (n1, ..., nN). Assume the ni are Poisson distributed with expectation values E[ni] = μ si + bi, where μ is the signal strength parameter and si, bi are the expected signal and background in bin i.
Cowan, Cranmer, Gross and Vitells, arXiv:1007.1727, EPJC 71 (2011) 1554.

Slide 5: Prototype analysis (II)
Often there is also a subsidiary measurement that constrains some of the background and/or shape parameters: a histogram with contents m = (m1, ..., mM). Assume the mi are Poisson distributed with expectation values E[mi] = ui(θ), where θ = (θs, θb, btot) are nuisance parameters. The likelihood function is the product of Poisson probabilities for all bins of both histograms:
L(μ, θ) = ∏j P(nj; μ sj + bj) × ∏k P(mk; uk(θ)).

Slide 6: The profile likelihood ratio
Base the significance test on the profile likelihood ratio
λ(μ) = L(μ, θ̂̂) / L(μ̂, θ̂),
where θ̂̂ (the "profiled" value) maximizes L for the specified μ, and μ̂, θ̂ maximize L unconditionally. The likelihood ratio of point hypotheses gives the optimum test (Neyman-Pearson lemma); the statistic above is nearly optimal. The advantage of λ(μ) is that in the large-sample limit, f(−2 ln λ(μ) | μ) approaches a chi-square pdf for one degree of freedom (Wilks' theorem).
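As an illustration of the definitions above (a minimal sketch, not part of the original slides), the profile likelihood ratio can be written out for a hypothetical single-bin counting experiment with a control region, n ~ Poisson(μs + b) and m ~ Poisson(τb); the yields, the scale factor τ, and the observed counts below are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    # Hypothetical counting experiment: n ~ Poisson(mu*s + b), m ~ Poisson(tau*b).
    s, tau = 10.0, 1.0            # assumed nominal signal yield and control-region scale factor
    n_obs, m_obs = 17, 5          # invented observed counts

    def nll(mu, b):
        """Negative log-likelihood -ln L(mu, b)."""
        lam_n = mu * s + b
        if lam_n <= 0 or b <= 0:
            return 1e9            # outside the physical region of the model
        return -(poisson.logpmf(n_obs, lam_n) + poisson.logpmf(m_obs, tau * b))

    def profiled_nll(mu):
        """Conditional fit: minimize over the nuisance parameter b for fixed mu."""
        return minimize_scalar(lambda b: nll(mu, b), bounds=(1e-9, 100.0), method="bounded").fun

    # Unconditional (global) fit over mu and b; mu is allowed to go negative.
    glob = minimize_scalar(profiled_nll, bounds=(-1.0, 10.0), method="bounded")
    mu_hat, nll_min = glob.x, glob.fun

    def lam(mu):
        """Profile likelihood ratio lambda(mu) = L(mu, b-hat-hat) / L(mu-hat, b-hat)."""
        return np.exp(nll_min - profiled_nll(mu))

    q0 = 0.0 if mu_hat < 0 else 2.0 * (profiled_nll(0.0) - nll_min)   # discovery statistic of the next slide
    print("mu_hat =", round(mu_hat, 2), " lambda(1) =", round(lam(1.0), 3), " q0 =", round(q0, 2))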

Slide 7: Test statistic for discovery
Try to reject the background-only (μ = 0) hypothesis using
q0 = −2 ln λ(0) if μ̂ ≥ 0, and q0 = 0 otherwise,
i.e., only an upward fluctuation of the data is regarded as evidence against the background-only hypothesis. Note that even though physically μ ≥ 0, we allow μ̂ to be negative. In the large-sample limit the distribution of μ̂ becomes Gaussian, and this allows us to write down simple expressions for the distributions of our test statistics.

Slide 8: p-value for discovery
A large q0 means increasing incompatibility between the data and the hypothesis; the p-value for an observed q0,obs is therefore
p0 = ∫_{q0,obs}^∞ f(q0 | 0) dq0
(a formula for f(q0 | 0) is given later). From the p-value we get the equivalent significance, Z = Φ⁻¹(1 − p).

Slide 9: Test statistic for upper limits
For purposes of setting an upper limit on μ one may use
qμ = −2 ln λ(μ) if μ̂ ≤ μ, and qμ = 0 otherwise.
Note that for setting an upper limit, an upward fluctuation of the data is not regarded as incompatibility with the hypothesized μ. From the observed qμ find the p-value
pμ = ∫_{qμ,obs}^∞ f(qμ | μ) dqμ.
The 95% CL upper limit on μ is the highest value of μ for which the p-value is not less than 0.05.
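A minimal sketch (not from the slides) of the corresponding upper-limit recipe for a simpler hypothetical model with known background, n ~ Poisson(μs + b): compute qμ, convert it to a p-value with the asymptotic formula pμ = 1 − Φ(√qμ) given on the following slides, and scan μ for the 95% CL limit. All numbers are invented.

    import numpy as np
    from scipy.stats import poisson, norm

    s_nom, b, n_obs = 10.0, 5.0, 8     # invented signal yield, known background, observed count

    def q_mu(mu):
        """q_mu = -2 ln lambda(mu) if mu_hat <= mu, else 0 (one-sided statistic for upper limits)."""
        mu_hat = (n_obs - b) / s_nom                 # ML estimator of mu (may be negative)
        if mu_hat > mu:
            return 0.0
        # At the ML point the Poisson mean equals n_obs, so L(mu_hat) = pmf(n_obs; n_obs).
        return -2.0 * (poisson.logpmf(n_obs, mu * s_nom + b) - poisson.logpmf(n_obs, n_obs))

    # Asymptotic p-value and a coarse scan for the 95% CL upper limit on mu.
    mu_grid = np.linspace(0.0, 3.0, 301)
    p_mu = np.array([norm.sf(np.sqrt(q_mu(m))) for m in mu_grid])
    print("mu_up (95% CL) ~", round(mu_grid[p_mu >= 0.05].max(), 2))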

Slide 10: Distribution of q0 in the large-sample limit
Assuming the approximations valid in the large-sample (asymptotic) limit, the full distribution f(q0 | μ′) can be written down. The special case μ′ = 0 is a "half chi-square" distribution:
f(q0 | 0) = ½ δ(q0) + ½ (2π q0)^(−1/2) e^(−q0/2).
In the large-sample limit, f(q0 | 0) is independent of the nuisance parameters; f(q0 | μ′) depends on the nuisance parameters through σ, the standard deviation of μ̂.
Cowan, Cranmer, Gross and Vitells, arXiv:1007.1727, EPJC 71 (2011) 1554.

Slide 11: Cumulative distribution of q0, significance
From the pdf, the cumulative distribution of q0 is found; the special case μ′ = 0 is
F(q0 | 0) = Φ(√q0).
The p-value of the μ = 0 hypothesis is therefore p0 = 1 − Φ(√q0), and so the discovery significance Z is simply Z = √q0.
Cowan, Cranmer, Gross and Vitells, arXiv:1007.1727, EPJC 71 (2011) 1554.
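In code, the relation between q0, the p-value and the significance is just (with a hypothetical observed value of q0):

    import numpy as np
    from scipy.stats import norm

    q0_obs = 16.0                     # hypothetical observed value of q0
    p0 = norm.sf(np.sqrt(q0_obs))     # p0 = 1 - Phi(sqrt(q0))
    Z = norm.isf(p0)                  # Z = Phi^-1(1 - p0), which equals sqrt(q0) = 4 here
    print(p0, Z)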

Slide 12: Distribution of qμ in the large-sample limit
Under the hypothesis μ itself, f(qμ | μ) is (like f(q0 | 0)) a half chi-square distribution, so pμ = 1 − Φ(√qμ); it is independent of the nuisance parameters.
Cowan, Cranmer, Gross and Vitells, arXiv:1007.1727, EPJC 71 (2011) 1554.

Slide 13: Monte Carlo test of asymptotic formula
[Figure: Monte Carlo distributions of q0 compared with the asymptotic formula.] The asymptotic formula is a good approximation out to the 5σ level (q0 = 25) already for b ~ 20.
Cowan, Cranmer, Gross and Vitells, arXiv:1007.1727, EPJC 71 (2011) 1554.

Slide 14: Nuisance parameters
In general our model L(x | θ) of the data is not perfect: the model may differ from the truth. We can improve the model by including additional adjustable parameters: nuisance parameter ↔ systematic uncertainty. Some point in the parameter space of the enlarged model should be "true". The presence of nuisance parameters decreases the sensitivity of the analysis to the parameter of interest (e.g., it increases the variance of the estimate).

Slide 15: p-values in cases with nuisance parameters
Suppose we have a statistic qθ that we use to test a hypothesized value of a parameter θ, such that the p-value of θ is
pθ = P(qθ ≥ qθ,obs | θ, ν).
But what values of ν should be used for f(qθ | θ, ν)? Fundamentally we want to reject θ only if pθ < α for all ν: an "exact" confidence interval. Recall that for statistics based on the profile likelihood ratio, the distribution f(qθ | θ, ν) becomes independent of the nuisance parameters in the large-sample limit. In general this is not true for finite data samples; one may be unable to reject some θ values if all values of ν must be considered, even those strongly disfavoured by the data (the resulting interval for θ "overcovers").

Slide 16: Profile construction ("hybrid resampling")
A compromise procedure is to reject θ if pθ ≤ α, where the p-value is computed assuming the value of the nuisance parameter that best fits the data for the specified θ, i.e., using f(qθ | θ, ν̂̂(θ)); the "double hat" notation means the value of the parameter that maximizes the likelihood for the given θ. The resulting confidence interval will have the correct coverage for the points (θ, ν̂̂(θ)). Elsewhere it may under- or overcover, but this is usually as good as we can do (check with MC if the problem is crucial or the sample is small).
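A rough sketch of the profile construction for the counting model used earlier (my own illustration, with invented numbers): toys are generated at the tested μ with the nuisance parameter set to its conditional best-fit ("double hat") value, and the p-value is the fraction of toys with a test statistic at least as large as observed.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    rng = np.random.default_rng(1)

    # Hypothetical model: n ~ Poisson(mu*s + b), m ~ Poisson(tau*b) (control region).
    s, tau = 10.0, 1.0
    n_obs, m_obs = 17, 5

    def nll(mu, b, n, m):
        lam_n = mu * s + b
        if lam_n <= 0 or b <= 0:
            return 1e9
        return -(poisson.logpmf(n, lam_n) + poisson.logpmf(m, tau * b))

    def b_conditional(mu, n, m):
        """The 'double hat' value: b that maximizes L for the given mu."""
        return minimize_scalar(lambda b: nll(mu, b, n, m), bounds=(1e-9, 100.0), method="bounded").x

    def q_mu(mu, n, m):
        """One-sided profile-likelihood statistic for an upper limit on mu."""
        prof = lambda mu_: nll(mu_, b_conditional(mu_, n, m), n, m)
        glob = minimize_scalar(prof, bounds=(-1.0, 10.0), method="bounded")
        return 0.0 if glob.x > mu else 2.0 * (prof(mu) - glob.fun)

    mu_test = 2.0
    b_hathat = b_conditional(mu_test, n_obs, m_obs)
    q_obs = q_mu(mu_test, n_obs, m_obs)

    # Profile construction: toys thrown at (mu_test, b_hathat).
    toys = [q_mu(mu_test, rng.poisson(mu_test * s + b_hathat), rng.poisson(tau * b_hathat))
            for _ in range(200)]
    print("toy p-value of mu =", mu_test, ":", np.mean(np.array(toys) >= q_obs))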

Slide 17: "Hybrid frequentist-Bayesian" method
Alternatively, suppose the uncertainty in ν is characterized by a Bayesian prior π(ν). One can then use the marginal likelihood to model the data:
Lm(x | μ) = ∫ L(x | μ, ν) π(ν) dν.
This does not represent what the data distribution would be if we "really" repeated the experiment, since then ν would not change. But the procedure has the desired effect: the marginal likelihood effectively builds the uncertainty due to ν into the model. Use this now to compute (frequentist) p-values; the result has a hybrid "frequentist-Bayesian" character.
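A sketch of the marginal likelihood for a hypothetical counting experiment, with the integral over the nuisance parameter done by sampling the prior; the Gaussian prior for b and all numbers are assumptions for illustration.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(2)

    # Hypothetical model: n ~ Poisson(mu*s + b), with prior knowledge b ~ Gauss(b0, sigma_b).
    s, b0, sigma_b, n_obs = 10.0, 5.0, 1.0, 17

    def marginal_likelihood(mu, n_samples=20000):
        """L_m(n_obs | mu) = integral of L(n_obs | mu, b) pi(b) db, estimated by sampling pi(b)."""
        b = rng.normal(b0, sigma_b, n_samples)
        b = b[b > 0]                                  # truncate the prior to b > 0
        return np.mean(poisson.pmf(n_obs, mu * s + b))

    print(marginal_likelihood(0.0), marginal_likelihood(1.2))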

Slide 18: Low sensitivity to μ
It can be that the effect of a given hypothesized μ is very small relative to the background-only (μ = 0) prediction. This means that the distributions f(qμ | μ) and f(qμ | 0) will be almost the same.

Slide 19: Having sufficient sensitivity
In contrast, having sensitivity to μ means that the distributions f(qμ | μ) and f(qμ | 0) are more separated. That is, the power (the probability to reject μ if μ = 0) is substantially higher than α. This power can be used as a measure of the sensitivity.

Slide 20: Spurious exclusion
Consider again the case of low sensitivity. By construction the probability to reject μ if μ is true is α (e.g., 5%). And the probability to reject μ if μ = 0 (the power) is only slightly greater than α. This means that with probability of around α = 5% (or slightly higher), one excludes hypotheses to which one has essentially no sensitivity (e.g., mH = 1000 TeV): "spurious exclusion".

Slide 21: Ways of addressing spurious exclusion
The problem of excluding parameter values to which one has no sensitivity has been known for a long time. In the 1990s this was re-examined for the LEP Higgs search by Alex Read and others and led to the "CLs" procedure for upper limits. Unified intervals also effectively reduce spurious exclusion through their particular choice of critical region.

Slide 22: The CLs procedure
In the usual formulation of CLs, one tests both the μ = 0 (background-only, b) and μ > 0 (μs + b) hypotheses with the same statistic Q = −2 ln(Ls+b / Lb).
[Figure: distributions f(Q | b) and f(Q | s+b), with the tail areas ps+b and pb indicated.]

Slide 23: The CLs procedure (2)
As before, "low sensitivity" means that the distributions of Q under b and s+b are very close.
[Figure: nearly overlapping distributions f(Q | b) and f(Q | s+b), with ps+b and pb indicated.]

Slide 24: The CLs procedure (3)
The CLs solution (A. Read et al.) is to base the test not on the usual p-value (CLs+b), but rather to divide this by CLb (one minus the p-value of the b-only hypothesis), i.e., define
CLs = CLs+b / CLb = ps+b / (1 − pb),
and reject the s+b hypothesis if CLs ≤ α. This reduces the "effective" p-value when the two distributions become close, and so prevents exclusion if the sensitivity is low.
[Figure: f(Q | b) and f(Q | s+b) with CLs+b = ps+b and 1 − CLb = pb indicated.]
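A minimal numerical sketch of this definition for a single-bin counting experiment with known background (invented numbers). For one counting channel the LEP statistic Q = −2 ln(Ls+b/Lb) is monotonic in n, so both p-values can be read directly from Poisson tails.

    from scipy.stats import poisson

    s_test, b_known, n_obs = 3.0, 5.0, 4      # hypothetical signal hypothesis, background, observed count

    # Data that look background-like (small n) count as evidence against s+b.
    p_sb = poisson.cdf(n_obs, s_test + b_known)     # CL_{s+b} = P(n <= n_obs | s+b)
    cl_b = poisson.cdf(n_obs, b_known)              # CL_b = 1 - p_b = P(n <= n_obs | b)
    cls = p_sb / cl_b
    print("CLs =", round(cls, 3), "excluded at 95% CL" if cls < 0.05 else "not excluded")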

Slide 25: Setting upper limits on μ = σ/σSM
Carry out the CLs procedure for the parameter μ = σ/σSM, resulting in an upper limit μup. In, e.g., a Higgs search, this is done for each value of mH. At a given value of mH, we have an observed value of μup, and we can also find the distribution f(μup | 0).
[Figure: ±1σ (green) and ±2σ (yellow) bands from toy MC; vertical lines from asymptotic formulae.]

Slide 26: How to read the green and yellow limit plots
For every value of mH, find the CLs upper limit on μ. Also for each mH, determine the distribution of upper limits μup one would obtain under the hypothesis of μ = 0. The dashed curve is the median μup, and the green (yellow) bands give the ±1σ (±2σ) regions of this distribution.
[Figure: CLs limit plot from ATLAS, Phys. Lett. B 710 (2012) 49-66.]

Slide 27: How to read the p0 plot
The "local" p0 means the p-value of the background-only hypothesis obtained from the test of μ = 0 at each individual mH, without any correction for the Look-Elsewhere Effect. The "Sig. Expected" (dashed) curve gives the median p0 under the assumption of the SM Higgs (μ = 1) at each mH.
[Figure: local p0 versus mH from ATLAS, Phys. Lett. B 710 (2012) 49-66.]

Slide 28: How to read the "blue band"
On the plot of μ̂ versus mH, the blue band is defined by −2 ln λ(μ) ≤ 1, i.e., it approximates the 1σ error band (68.3% CL confidence interval).
[Figure: ATLAS, Phys. Lett. B 710 (2012) 49-66.]

Slide 29: The Bayesian approach to limits
In Bayesian statistics one needs to start with a prior pdf π(θ), which reflects the degree of belief about θ before doing the experiment. Bayes' theorem tells how our beliefs should be updated in light of the data x:
p(θ | x) ∝ L(x | θ) π(θ).
Integrate the posterior pdf p(θ | x) to give an interval with any desired probability content. For example, for n ~ Poisson(s + b), the 95% CL upper limit on s is found from
0.95 = ∫_0^sup p(s | n) ds.

Slide 30: Bayesian prior for a Poisson parameter
Include the knowledge that s ≥ 0 by setting the prior π(s) = 0 for s < 0. One could try to reflect "prior ignorance" with, e.g., a constant prior, π(s) = constant for s ≥ 0. This is not normalized, but that is OK as long as L(s) dies off for large s. It is not invariant under a change of parameter: if we had used instead a flat prior for, say, the mass of the Higgs boson, this would imply a non-flat prior for the expected number of Higgs events. It doesn't really reflect a reasonable degree of belief, but it is often used as a point of reference; or it can be viewed as a recipe for producing an interval whose frequentist properties can be studied (the coverage will depend on the true s).

Slide 31: Bayesian interval with flat prior for s
With the flat prior, solve 0.95 = ∫_0^sup p(s | n) ds numerically to find the limit sup. For the special case b = 0, the Bayesian upper limit with flat prior is numerically the same as the one-sided frequentist limit (a "coincidence"). Otherwise the Bayesian limit is everywhere greater than the one-sided frequentist limit, and here (the Poisson problem) it coincides with the CLs limit. It never goes negative, and it doesn't depend on b if n = 0.
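A numerical sketch of the flat-prior Bayesian upper limit for n ~ Poisson(s + b) with b known, by direct integration of the posterior (invented inputs). For n = 0 it gives the familiar limit of about 3 events regardless of b, as stated above.

    from scipy.stats import poisson
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def bayes_upper_limit(n, b, cl=0.95, s_max=100.0):
        """s_up such that the posterior p(s|n), with flat prior for s >= 0, integrates to cl."""
        like = lambda s: poisson.pmf(n, s + b)
        norm_const = quad(like, 0.0, s_max)[0]
        post_cdf = lambda s_up: quad(like, 0.0, s_up)[0] / norm_const
        return brentq(lambda s_up: post_cdf(s_up) - cl, 0.0, s_max)

    print(bayes_upper_limit(n=0, b=0.0))    # ~3.0, same as the one-sided frequentist limit
    print(bayes_upper_limit(n=0, b=3.0))    # unchanged: the limit does not depend on b for n = 0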

Slide 32: Priors from formal rules
Because of the difficulties in encoding a vague degree of belief in a prior, one often attempts to derive the prior from formal rules, e.g., to satisfy certain invariance principles or to provide maximum information gain for a certain set of measurements. These are often called "objective priors" and form the basis of objective Bayesian statistics. The priors do not reflect a degree of belief (but might represent possible extreme cases). In objective Bayesian analysis, one can use the intervals in a frequentist way, i.e., regard Bayes' theorem as a recipe to produce an interval with certain coverage properties.

Slide 33: Priors from formal rules (cont.)
Reviews of priors obtained by formal rules can be found in the statistics literature. Formal priors have not been widely used in HEP, but there is recent interest in this direction, especially in the reference priors of Bernardo and Berger; see, e.g.:
L. Demortier, S. Jain and H. Prosper, Reference priors for high energy physics, Phys. Rev. D 82 (2010) 034002; arXiv:1002.1111.
D. Casadei, Reference analysis of the signal + background model in counting experiments, JINST 7 (2012) P01012; arXiv:1108.4270.

Slide 34: Jeffreys' prior
According to Jeffreys' rule, take the prior
π(θ) ∝ √det I(θ),
where I(θ) is the Fisher information matrix,
Iij(θ) = −E[∂² ln L / ∂θi ∂θj].
One can show that this leads to inference that is invariant under a transformation of parameters. For a Gaussian mean, the Jeffreys' prior is constant; for a Poisson mean μ it is proportional to 1/√μ.

Slide 35: Jeffreys' prior for a Poisson mean
Suppose n ~ Poisson(μ). To find the Jeffreys' prior for μ, compute the Fisher information:
I(μ) = −E[∂² ln L / ∂μ²] = 1/μ, so π(μ) ∝ 1/√μ.
So, e.g., for μ = s + b, this means the prior is π(s) ∝ 1/√(s + b), which depends on b. Note this is not designed as a degree of belief about s.
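A quick numerical check (not from the slides) that the Fisher information for a Poisson mean is 1/μ, which is what makes the Jeffreys prior proportional to 1/√μ:

    import numpy as np
    from scipy.stats import poisson

    def fisher_info_poisson(mu, n_max=500):
        """I(mu) = E[(d ln P(n|mu)/dmu)^2] = E[(n/mu - 1)^2], summed over the Poisson pmf."""
        n = np.arange(n_max)
        return np.sum(poisson.pmf(n, mu) * (n / mu - 1.0) ** 2)

    for mu in (0.5, 2.0, 10.0):
        print(mu, fisher_info_poisson(mu), 1.0 / mu)   # the two values agree, so pi(mu) ~ 1/sqrt(mu)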

Slide 36: Bayesian limits on s with uncertainty on b
Consider n ~ Poisson(s + b) and take prior probabilities for s and b, e.g., a flat prior π(s) for s ≥ 0 and a prior π(b) reflecting the uncertainty in the background. Put this into Bayes' theorem,
p(s, b | n) ∝ L(n | s, b) π(s) π(b),
and marginalize over the nuisance parameter b,
p(s | n) = ∫ p(s, b | n) db.
Then use p(s | n) to find intervals for s with any desired probability content.
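A sketch of the corresponding numerical recipe for a hypothetical counting experiment: flat prior for s ≥ 0, a truncated Gaussian prior for b (an assumption for illustration), marginalization of b by direct integration, and then a 95% credibility upper limit from the posterior for s.

    import numpy as np
    from scipy.stats import poisson, norm
    from scipy.integrate import quad
    from scipy.optimize import brentq

    n_obs, b0, sigma_b = 7, 4.0, 1.0       # invented data and background prior

    def marg_like(s):
        """Integral of L(n_obs | s+b) pi(b) db, with a Gaussian prior for b truncated at b > 0."""
        integrand = lambda b: poisson.pmf(n_obs, s + b) * norm.pdf(b, b0, sigma_b)
        return quad(integrand, 0.0, b0 + 8.0 * sigma_b)[0]

    s_max = 60.0
    norm_const = quad(marg_like, 0.0, s_max)[0]
    post_cdf = lambda s_up: quad(marg_like, 0.0, s_up)[0] / norm_const
    s_up = brentq(lambda x: post_cdf(x) - 0.95, 0.0, s_max)
    print("95% credibility upper limit on s:", round(s_up, 2))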

Slide 37: The Look-Elsewhere Effect
Gross and Vitells, EPJC 70 (2010) 525; arXiv:1005.1891.
Suppose a model for a mass distribution allows for a peak at a mass m with amplitude μ. The data show a bump at a mass m0. How consistent is this with the no-bump (μ = 0) hypothesis?

Slide 38: Local p-value
First, suppose the mass m0 of the peak was specified a priori. Test the consistency of the bump with the no-signal (μ = 0) hypothesis with, e.g., the likelihood ratio
tfix = −2 ln [ L(μ = 0) / L(μ̂) ],
where "fix" indicates that the mass of the peak is fixed to m0. The resulting p-value gives the probability to find a value of tfix at least as great as observed at the specific mass m0, and is called the local p-value.

Slide 39: Global p-value
But suppose we did not know where in the distribution to expect a peak. What we want is the probability to find a peak at least as significant as the one observed anywhere in the distribution. Include the mass as an adjustable parameter in the fit and test the significance of the peak using
tfloat = −2 ln [ L(μ = 0) / L(μ̂, m̂) ].
(Note m does not appear in the μ = 0 model.)

Slide 40: Distributions of tfix, tfloat (Gross and Vitells)
For a sufficiently large data sample, tfix ~ chi-square for 1 degree of freedom (Wilks' theorem). For tfloat there are two adjustable parameters, μ and m, and naively Wilks' theorem says tfloat ~ chi-square for 2 degrees of freedom. In fact Wilks' theorem does not hold in the floating-mass case because one of the parameters (m) is not defined in the μ = 0 model, so getting the tfloat distribution is more difficult.

Slide 41: Approximate correction for the LEE (Gross and Vitells)
We would like to be able to relate the p-values for the fixed and floating mass analyses (at least approximately). Gross and Vitells show the p-values are approximately related by
pglobal ≈ plocal + ⟨N(c)⟩,
where ⟨N(c)⟩ is the mean number of "upcrossings" of −2 ln L in the fit range based on a threshold c = tfix = Z²local, and where Zlocal = Φ⁻¹(1 − plocal) is the local significance. So we can either carry out the full floating-mass analysis (e.g., use MC to get the p-value), or do the fixed-mass analysis and apply a correction factor (much faster than MC).

Slide 42: Upcrossings of −2 ln L (Gross and Vitells)
The Gross-Vitells formula for the trials factor requires ⟨N(c)⟩, the mean number of upcrossings of −2 ln L in the fit range based on a threshold c = tfix = Z²fix. ⟨N(c)⟩ can be estimated from MC (or from the real data) using a much lower threshold c0 and then extrapolated:
⟨N(c)⟩ ≈ ⟨N(c0)⟩ e^(−(c − c0)/2).
In this way ⟨N(c)⟩ can be estimated without the need for large MC samples, even if the threshold c is quite high.
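The arithmetic of the correction, as a short sketch with invented inputs (in practice ⟨N(c0)⟩ would come from a handful of background-only toys or from the data itself):

    import numpy as np
    from scipy.stats import norm

    Z_local = 4.0                      # hypothetical local significance
    p_local = norm.sf(Z_local)         # local p-value

    c0, N_c0 = 0.5, 8.4                # low threshold and assumed mean upcrossing count there

    c = Z_local ** 2                   # high threshold c = t_fix = Z_local^2
    N_c = N_c0 * np.exp(-(c - c0) / 2.0)   # extrapolated mean number of upcrossings at c

    p_global = p_local + N_c           # Gross-Vitells approximate relation
    print(p_local, p_global, norm.isf(p_global))   # local and global p-values, global significance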

Slide 43: Multidimensional look-elsewhere effect
Generalization to multiple dimensions: the number of upcrossings is replaced by the expectation of the Euler characteristic. Applications: astrophysics (coordinates on the sky), search for a resonance of unknown mass and width, ...
Vitells and Gross, Astropart. Phys. 35 (2011) 230; arXiv:1105.4355.

Slide 44: Summary on the Look-Elsewhere Effect
Remember that the Look-Elsewhere Effect is when we test a single model (e.g., the SM) with multiple observations, i.e., in multiple places. Note there is no look-elsewhere effect when considering exclusion limits: there we test specific signal models (typically once) and say whether each is excluded. With exclusion there is, however, the analogous issue of testing many signal models (or parameter values) and thus excluding some even in the absence of signal ("spurious exclusion"). The approximate correction for the LEE should be sufficient, and one should also report the uncorrected significance. "There's no sense in being precise when you don't even know what you're talking about." (John von Neumann)

Slide 45: Why 5 sigma?
Common practice in HEP has been to claim a discovery if the p-value of the no-signal hypothesis is below 2.9 × 10⁻⁷, corresponding to a significance Z = Φ⁻¹(1 − p) = 5 (a 5σ effect). There are a number of reasons why one may want to require such a high threshold for discovery: the "cost" of announcing a false discovery is high; one is unsure about systematics; one is unsure about the look-elsewhere effect; and the implied signal may be a priori highly improbable (e.g., violation of Lorentz invariance).

Slide 46: Why 5 sigma (cont.)?
But the primary role of the p-value is to quantify the probability that the background-only model gives a statistical fluctuation as big as the one seen or bigger. It is not intended as a means to protect against hidden systematics, nor to embody the high standard required for a claim of an important discovery. In the process of establishing a discovery there comes a point where it is clear that the observation is not simply a fluctuation but an "effect", and the focus shifts to whether this is new physics or a systematic. Provided the LEE is dealt with, that threshold is probably closer to 3σ than 5σ.

Slide 47: Summary of Lecture 2
Confidence intervals are obtained from inversion of a test of all parameter values; one is free to choose, e.g., a one- or two-sided test, often based on a likelihood-ratio statistic. Distributions of likelihood-ratio statistics can be written down in simple form in the large-sample (asymptotic) limit. The usual procedure for an upper limit based on a one-sided test can reject parameter values to which one has no sensitivity; CLs and Bayesian methods both address this issue (and coincide in important special cases). For the look-elsewhere effect, an approximate correction should be sufficient.

Slide 48: Extra slides

Slide 49: Unified (Feldman-Cousins) intervals
We can use directly
tμ = −2 ln λ(μ), with λ(μ) = L(μ, θ̂̂) / L(μ̂, θ̂),
as a test statistic for a hypothesized μ. A large discrepancy between the data and the hypothesis can correspond to the estimate of μ being observed either high or low relative to μ. This is essentially the statistic used for Feldman-Cousins intervals (here it also treats nuisance parameters).
G. Feldman and R.D. Cousins, Phys. Rev. D 57 (1998) 3873.

Slide 50: Distribution of tμ
Using the Wald approximation, f(tμ | μ′) is a noncentral chi-square distribution for one degree of freedom, with noncentrality parameter Λ = (μ − μ′)²/σ². The special case μ = μ′ is a chi-square for one degree of freedom (Wilks). The p-value for an observed value of tμ is pμ = 1 − F(tμ | μ), and the corresponding significance is Z = Φ⁻¹(1 − pμ).

Slide 51: Feldman-Cousins discussion
The initial motivation for Feldman-Cousins (unified) confidence intervals was to eliminate null intervals. The F-C limits are based on a likelihood ratio for a test of μ with respect to the alternative consisting of all other allowed values of μ (not just, say, lower values). The interval's upper edge is higher than the limit from the one-sided test, and lower values of μ may be excluded as well. A substantial downward fluctuation in the data gives a low (but nonzero) limit. This means that when a value of μ is excluded, it is because there is a probability α for the data to fluctuate either high or low in a manner corresponding to less compatibility as measured by the likelihood ratio.

Slide 52: Upper/lower edges of the F-C interval for μ versus b, for n ~ Poisson(μ + b)
The lower edge may be at zero, depending on the data. For n = 0, the upper edge has a (weak) dependence on b.
Feldman and Cousins, Phys. Rev. D 57 (1998) 3873.

Slide 53: Discovery significance for n ~ Poisson(s + b)
Consider again the case where we observe n events, modelled as following a Poisson distribution with mean s + b (assume b is known). (1) For an observed n, what is the significance Z0 with which we would reject the s = 0 hypothesis? (2) What is the expected (or more precisely, median) Z0 if the true value of the signal rate is s?

Slide 54: Gaussian approximation for Poisson significance
For large s + b, n → x ~ Gaussian(μ, σ) with μ = s + b, σ = √(s + b). For an observed value xobs, the p-value of s = 0 is Prob(x > xobs | s = 0), giving the significance
Z0 = (xobs − b) / √b.
The expected (median) significance assuming a signal rate s is therefore s/√b.
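In code, the Gaussian approximation is simply (invented numbers):

    import numpy as np

    s, b = 10.0, 100.0
    x_obs = 115.0                            # hypothetical observed count
    print((x_obs - b) / np.sqrt(b))          # observed significance for rejecting s = 0: 1.5
    print(s / np.sqrt(b))                    # expected (median) significance for signal rate s: 1.0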

Slide 55: Better approximation for Poisson significance
The likelihood function for the parameter s is
L(s) = ((s + b)^n / n!) e^(−(s+b)),
or equivalently the log-likelihood is
ln L(s) = n ln(s + b) − (s + b) + constant.
Setting ∂ ln L / ∂s = 0 gives the estimator for s: ŝ = n − b.

Slide 56: Approximate Poisson significance (continued)
The likelihood-ratio statistic for testing s = 0 is
q0 = −2 ln [ L(0) / L(ŝ) ] = 2 ( n ln(n/b) + b − n )   for ŝ > 0, and q0 = 0 otherwise.
For sufficiently large s + b, Z0 = √q0 (using Wilks' theorem). To find median[Z0 | s + b], let n → s + b (i.e., the Asimov data set):
Z0 = √( 2 ( (s + b) ln(1 + s/b) − s ) ).
This reduces to s/√b for s << b.
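A short check (not from the slides) of the Asimov expression against s/√b and against the median significance computed directly from the Poisson distribution:

    import numpy as np
    from scipy.stats import poisson

    def z_asimov(s, b):
        """Median discovery significance from the Asimov data set n -> s + b."""
        return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

    def z_median_exact(s, b):
        """Median of Z0 for n ~ Poisson(s+b), with q0 = 2(n ln(n/b) + b - n) for n > b, else 0."""
        n_med = poisson.ppf(0.5, s + b)
        if n_med <= b:
            return 0.0
        return np.sqrt(2.0 * (n_med * np.log(n_med / b) + b - n_med))

    for s, b in [(5, 1), (10, 100), (20, 400)]:
        print(s, b, round(z_asimov(s, b), 2), round(s / np.sqrt(b), 2), round(z_median_exact(s, b), 2))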

Slide 57: Median discovery significance for n ~ Poisson(μs + b)
Median significance, assuming μ = 1, of the hypothesis μ = 0. "Exact" values from MC show jumps due to the discrete data. The Asimov √q0,A is a good approximation over a broad range of s and b; s/√b is only good for s << b.
CCGV, arXiv:1007.1727.

Slide 58: Reference priors (J. Bernardo, L. Demortier, M. Pierini, PHYSTAT 2011)
Maximize the expected Kullback-Leibler divergence of the posterior relative to the prior. This maximizes the expected posterior information about θ when the prior density is π(θ). Finding reference priors is "easy" for one parameter.

Slide 59: Reference priors (2) (J. Bernardo, L. Demortier, M. Pierini, PHYSTAT 2011)
The actual recipe to find the reference prior is nontrivial; see the references from Bernardo's talk, the website of Berger, and also Demortier, Jain and Prosper, Phys. Rev. D 82 (2010) 034002; arXiv:1002.1111. The prior depends on the order of the parameters. (Is the order dependence important? Symmetrize? Sample the result from different orderings?)