Computing and Statistical Data Analysis
Stat 9: Parameter Estimation, Limits
London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515
Glen Cowan, Physics Department, Royal Holloway, University of London
g.cowan@rhul.ac.uk
www.pp.rhul.ac.uk/~cowan
Course web page: www.pp.rhul.ac.uk/~cowan/stat_course.html
Slide 2: Example of least squares fit
Fit a polynomial of order p: λ(x; θ_0, …, θ_p) = θ_0 + θ_1 x + … + θ_p x^p.
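A minimal sketch of such a fit in Python (the data values and the use of numpy are illustrative additions, not from the slides): the LS estimates minimize χ²(θ) = Σ_i [y_i − λ(x_i; θ)]²/σ_i², which for a polynomial model is solved by the weighted normal equations.

```python
import numpy as np

# Hypothetical data points (x_i, y_i) with uncertainties sigma_i.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.2, 1.9, 2.4, 3.1, 3.6])
sigma = np.array([0.2, 0.2, 0.2, 0.2, 0.2])

p = 1  # order of the polynomial (1 = straight line)

# Design matrix A_ik = x_i^k, so lambda(x_i; theta) = (A @ theta)_i.
A = np.vander(x, p + 1, increasing=True)

# Weighted least squares: solve (A^T W A) theta = A^T W y, W = diag(1/sigma^2).
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(A.T @ W @ A)   # covariance matrix of the LS estimators
theta_hat = cov @ (A.T @ W @ y)    # estimates theta_0, ..., theta_p

chi2_min = np.sum(((y - A @ theta_hat) / sigma) ** 2)
print("theta_hat       =", theta_hat)
print("std. deviations =", np.sqrt(np.diag(cov)))
print("chi2_min        =", chi2_min)
```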
Slide 3: Variance of LS estimators
In most cases of interest we obtain the variance in a manner similar to ML. E.g. for data ~ Gaussian we have χ² = −2 ln L + constant, and so the inverse covariance matrix of the estimators is obtained from the curvature, (V̂⁻¹)_ij = (1/2) ∂²χ²/∂θ_i ∂θ_j evaluated at θ̂; or for the graphical method we take the values of θ where χ²(θ) = χ²_min + 1.0.
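A sketch of the graphical method for a hypothetical one-parameter model λ(x; θ) = θx (model and data are illustrative assumptions): scan χ²(θ) and solve χ²(θ) = χ²_min + 1.0 numerically.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical data, one-parameter model lambda(x; theta) = theta * x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.3, 2.8, 4.2])
sigma = 0.3

def chi2(theta):
    return np.sum(((y - theta * x) / sigma) ** 2)

# Analytic minimum for this linear one-parameter model.
theta_hat = np.sum(x * y) / np.sum(x * x)
chi2_min = chi2(theta_hat)

# Graphical method: the +-1 sigma points are where chi2 = chi2_min + 1.
lo = brentq(lambda t: chi2(t) - (chi2_min + 1.0), theta_hat - 1.0, theta_hat)
hi = brentq(lambda t: chi2(t) - (chi2_min + 1.0), theta_hat, theta_hat + 1.0)
print(f"theta = {theta_hat:.3f} +{hi - theta_hat:.3f} -{theta_hat - lo:.3f}")
```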
Slide 4: Two-parameter LS fit
Slide 5: Goodness-of-fit with least squares
The value of the χ² at its minimum is a measure of the level of agreement between the data and the fitted curve. It can therefore be employed as a goodness-of-fit statistic to test the hypothesized functional form λ(x; θ). We can show that if the hypothesis is correct, then the statistic t = χ²_min follows the chi-square pdf f(t; n_d), where the number of degrees of freedom is
n_d = number of data points − number of fitted parameters.
Slide 6: Goodness-of-fit with least squares (2)
The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²_min ≈ n_d the fit is 'good'. More generally, find the p-value
p = ∫_{χ²_min}^∞ f(t; n_d) dt,
the probability of obtaining a χ²_min as high as the one we got, or higher, if the hypothesis is correct. E.g. for the previous example, the 1st order polynomial (line) gives an acceptable p-value, whereas the 0th order polynomial (horizontal line) gives a p-value small enough to reject that hypothesis.
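The p-value is a one-line computation with scipy; the χ²_min and n_d below are placeholder values, since the numbers from the slide's example are not reproduced in this transcript.

```python
from scipy.stats import chi2

# p-value: probability of a chi2_min as high as observed, or higher,
# if the hypothesized functional form is correct.
chi2_min = 4.0   # placeholder value of the minimized chi-square
n_d = 3          # degrees of freedom = n data points - n fitted parameters
p_value = chi2.sf(chi2_min, n_d)   # survival function = upper-tail integral
print(f"p = {p_value:.3f}")
```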
Slide 7: Goodness-of-fit vs. statistical errors
Slide 8: Goodness-of-fit vs. statistical errors (2)
Slide 9: LS with binned data
Slide 10: LS with binned data (2)
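The binned-data equations themselves are not reproduced in this transcript, but the standard construction is to minimize χ²(θ) = Σ_i [n_i − ν_i(θ)]²/ν_i(θ), where n_i is the number of entries in bin i and ν_i(θ) the expected number. A minimal sketch, with a hypothetical histogram and an exponential model as illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical histogram: bin edges and observed counts n_i.
edges = np.linspace(0.0, 5.0, 11)
n_obs = np.array([95, 72, 61, 48, 35, 30, 22, 18, 12, 9], dtype=float)
n_tot = n_obs.sum()

def nu(tau):
    """Expected counts per bin for an exponential pdf with mean tau,
    normalized to the total number of observed entries."""
    cdf = 1.0 - np.exp(-edges / tau)
    return n_tot * np.diff(cdf) / cdf[-1]

def chi2(params):
    expected = nu(params[0])
    return np.sum((n_obs - expected) ** 2 / expected)  # sigma_i^2 ~ nu_i

result = minimize(chi2, x0=[2.0], method="Nelder-Mead")
print("tau_hat =", result.x[0], " chi2_min =", result.fun)
```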
Slide 11: LS with binned data — normalization
Slide 12: LS normalization example
Slide 13: Using LS to combine measurements
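For uncorrelated measurements y_i of the same quantity with standard deviations σ_i, the LS combination reduces to the familiar weighted average λ̂ = Σ_i (y_i/σ_i²) / Σ_i (1/σ_i²), with variance V[λ̂] = 1/Σ_i (1/σ_i²). A short sketch with hypothetical values:

```python
import numpy as np

# Hypothetical uncorrelated measurements of the same quantity.
y = np.array([9.8, 10.4, 10.1])
sigma = np.array([0.4, 0.3, 0.5])

w = 1.0 / sigma**2
lam_hat = np.sum(w * y) / np.sum(w)    # weighted average
sigma_lam = 1.0 / np.sqrt(np.sum(w))   # its standard deviation
print(f"lambda_hat = {lam_hat:.3f} +/- {sigma_lam:.3f}")
```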
Slide 14: Combining correlated measurements with LS
Slide 15: Example: averaging two correlated measurements
Slide 16: Negative weights in LS average
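The standard result for two correlated measurements x and y (standard deviations σ_x, σ_y, correlation ρ) is λ̂ = wx + (1 − w)y with w = (σ_y² − ρσ_xσ_y)/(σ_x² + σ_y² − 2ρσ_xσ_y); for ρ > σ_x/σ_y the weight of the less precise measurement goes negative and the average lies outside [x, y]. A sketch with hypothetical numbers:

```python
import numpy as np

def ls_average(x, y, sx, sy, rho):
    """LS average of two correlated measurements of the same quantity."""
    denom = sx**2 + sy**2 - 2.0 * rho * sx * sy
    w = (sy**2 - rho * sx * sy) / denom          # weight of measurement x
    lam = w * x + (1.0 - w) * y
    var = sx**2 * sy**2 * (1.0 - rho**2) / denom  # variance of the average
    return lam, np.sqrt(var), w

# Hypothetical measurements; with rho > sx/sy the weight (1 - w) on y
# is negative and the average lies outside [x, y].
lam, err, w = ls_average(x=1.0, y=1.5, sx=0.1, sy=0.2, rho=0.8)
print(f"lambda_hat = {lam:.3f} +/- {err:.3f}, weights: {w:.3f}, {1 - w:.3f}")
```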
Slide 17: Interval estimation — introduction
In addition to a 'point estimate' of a parameter we should report an interval reflecting its statistical uncertainty. Often one uses ± the estimated standard deviation of the estimator. In some cases, however, this is not adequate, e.g., for an estimate near a physical boundary, such as an observed event rate consistent with zero. Desirable properties of such an interval may include:
communicate objectively the result of the experiment;
have a given probability of containing the true parameter;
provide information needed to draw conclusions about the parameter, possibly incorporating stated prior beliefs.
We will look briefly at frequentist and Bayesian intervals.
Slide 18: Frequentist confidence intervals
Consider an estimator θ̂ for a parameter θ and an estimate θ̂_obs. We also need the sampling distribution g(θ̂; θ) for all possible θ. Specify upper and lower tail probabilities, e.g., α = 0.05, β = 0.05, then find functions u_α(θ) and v_β(θ) such that
P(θ̂ ≥ u_α(θ)) = α,  P(θ̂ ≤ v_β(θ)) = β.
Slide 19: Confidence interval from the confidence belt
The region between u_α(θ) and v_β(θ) is called the confidence belt. Find the points where the observed estimate θ̂_obs intersects the confidence belt; this gives the confidence interval [a, b]. The confidence level is 1 − α − β, the probability for the interval to cover the true value of the parameter (this holds for any possible true θ).
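A Monte Carlo sketch of the construction for the simple case of a Gaussian estimator (true θ, σ, and tail probabilities are hypothetical choices): inverting the belt gives a = θ̂ − σΦ⁻¹(1 − α) and b = θ̂ + σΦ⁻¹(1 − β), and the claimed coverage can be checked by simulation.

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.05, 0.05
sigma = 1.0
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)

# For a Gaussian estimator, inverting the belt gives a simple interval:
# a = theta_hat - sigma * z_{1-alpha},  b = theta_hat + sigma * z_{1-beta}.
rng = np.random.default_rng(1)
theta_true = 3.0
theta_hat = rng.normal(theta_true, sigma, size=100_000)
a = theta_hat - sigma * z_a
b = theta_hat + sigma * z_b

coverage = np.mean((a <= theta_true) & (theta_true <= b))
print(f"coverage = {coverage:.3f}  (should be ~ {1 - alpha - beta})")
```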
Slide 20: Confidence intervals by inverting a test
Confidence intervals for a parameter θ can be found by defining a test of the hypothesized value θ (do this for all θ):
specify values of the data that are 'disfavoured' by θ (critical region) such that P(data in critical region) ≤ α for a prespecified α, e.g., 0.05 or 0.1;
if the data are observed in the critical region, reject the value θ.
Now invert the test to define a confidence interval as: the set of θ values that would not be rejected in a test of size α (the confidence level is 1 − α). The interval will cover the true value of θ with probability ≥ 1 − α. This is equivalent to the confidence belt construction; the confidence belt is the acceptance region of the test.
Slide 21: Relation between confidence interval and p-value
Equivalently we can consider a significance test for each hypothesized value of θ, resulting in a p-value, p_θ. If p_θ < α, then we reject θ. The confidence interval at CL = 1 − α consists of those values of θ that are not rejected. E.g. an upper limit on θ is the greatest value for which p_θ ≥ α. In practice one finds the limit by setting p_θ = α and solving for θ.
Slide 22: Confidence intervals in practice
The recipe to find the interval [a, b] boils down to solving
α = P(θ̂ ≥ θ̂_obs; a), i.e., a is the hypothetical value of θ such that the probability to find θ̂ at least as high as the one observed is α;
β = P(θ̂ ≤ θ̂_obs; b), i.e., b is the hypothetical value of θ such that the probability to find θ̂ at least as low as the one observed is β.
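A sketch of the 'set p_θ = α and solve' recipe for a concrete case that is my illustrative choice, not from the slides: a Poisson counting experiment with n_obs observed events and no background, where the upper limit s_up at CL = 1 − α satisfies P(n ≤ n_obs; s_up) = α.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs = 3      # hypothetical observed number of events
alpha = 0.05   # test size; CL = 1 - alpha = 95%

# p-value of a hypothesized signal mean s: probability to observe
# n_obs or fewer events if the true mean is s.
def p_value(s):
    return poisson.cdf(n_obs, s)

# Upper limit: largest s not rejected, found by solving p(s) = alpha.
s_up = brentq(lambda s: p_value(s) - alpha, 1e-6, 100.0)
print(f"s_up = {s_up:.3f}")   # ~7.75 for n_obs = 3 at 95% CL
```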
Slide 23: Meaning of a confidence interval
Slide 24: Central vs. one-sided confidence intervals
Slide 25: Intervals from the likelihood function
In the large sample limit it can be shown for ML estimators that θ̂ = (θ̂_1, …, θ̂_n) follows an n-dimensional Gaussian with covariance V. Then the statistic Q(θ) = −2[ln L(θ) − ln L(θ̂)] follows a chi-square pdf for n degrees of freedom, and the condition ln L(θ) ≥ ln L_max − Q_γ/2 defines a hyper-ellipsoidal confidence region centred on θ̂.
Slide 26: Approximate confidence regions from L(θ)
So the recipe to find the confidence region with CL = 1 − γ is: take the set of θ values with ln L(θ) ≥ ln L_max − Q_γ/2, where Q_γ is the chi-square quantile of probability 1 − γ for n degrees of freedom. For finite samples, these are approximate confidence regions. The coverage probability is not guaranteed to equal 1 − γ; there is no simple theorem to say by how far off it will be (use MC). Remember that here the interval is random, not the parameter.
Slide 27: Example of interval from ln L(θ)
For n = 1 parameter, CL = 0.683, Q_γ = 1, so the interval is the set of θ with ln L(θ) ≥ ln L_max − 1/2.
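A sketch of this Δ ln L = −1/2 interval for a hypothetical exponential sample (data and model are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical sample from an exponential pdf f(t; tau) = (1/tau) exp(-t/tau).
rng = np.random.default_rng(2)
t = rng.exponential(scale=1.5, size=50)

def lnL(tau):
    return np.sum(-np.log(tau) - t / tau)

tau_hat = np.mean(t)     # ML estimator for the exponential mean
lnL_max = lnL(tau_hat)

# CL = 0.683, Q = 1: interval endpoints where ln L(tau) = lnL_max - 1/2.
lo = brentq(lambda tau: lnL(tau) - (lnL_max - 0.5), 0.5 * tau_hat, tau_hat)
hi = brentq(lambda tau: lnL(tau) - (lnL_max - 0.5), tau_hat, 2.0 * tau_hat)
print(f"tau = {tau_hat:.3f} +{hi - tau_hat:.3f} -{tau_hat - lo:.3f}")
```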