Uncertainty and confidence intervals
Statistical estimation methods, Finse
Friday 10.9.2010, 12.45–14.05
Andreas Lindén
Outline
Point estimates and uncertainty
Sampling distribution
 – Standard error
 – Covariation between parameters
Finding the VC-matrix for the parameter estimates
 – Analytical formulas
 – From the Hessian matrix
 – Bootstrapping
The idea behind confidence intervals
General methods for constructing confidence intervals of parameters
 – CI based on the central limit theorem
 – Profile likelihood CI
 – CI by bootstrapping
Point estimates and uncertainty
The main output of any statistical model fitting is the set of parameter estimates
 – Point estimates — one value for each parameter
 – The effect sizes
 – Answer the question “how much”
Point estimates are of little use without any assessment of uncertainty
 – Standard error
 – Confidence intervals
 – p-values
 – Estimated sampling distribution
 – Bayesian credible intervals
 – Plotting the Bayesian posterior distribution
Sampling distribution
The probability distribution of a parameter estimate
 – Calculated from a sample
 – Variability due to sampling effects
Typically depends on sample size or the number of degrees of freedom (df)
Examples of common sampling distributions
 – Student’s t-distribution
 – F-distribution
 – χ²-distribution
Degrees of freedom
[Figure: Y plotted against X]
In a linear regression df = n – 2
Properties of the sampling distribution
The standard error (SE) of a parameter is the estimated standard deviation of the sampling distribution
 – Square root of the parameter variance
Parameters are not necessarily unrelated
 – The sampling distribution of several parameters is multivariate
 – Example: regression slope and intercept
Linear regression – simulated data

Param.        a      b      σ²
True value    4.00   1.00   0.80
Estim. 1      4.29   0.96   0.70
Estim. 2      4.13   0.97   0.36
Estim. 3      3.86   0.98   0.83
Estim. 4      3.77   1.04   0.75
Estim. 5      3.63   1.06   0.63
Estim. 6      4.39   0.93   0.72
Estim. 7      3.80   0.98   0.91
Estim. 8      3.78   1.06   0.92
Estim. 9      3.74   1.07   0.69
Estim. 10     4.62   0.84   0.50
…             …      …      …
Estim. 100    3.54   1.06   0.71
Properties of the sampling distribution (cont.)
Example: variance–covariance (COV) and correlation (CORR) matrices of the regression estimates (a, b, σ²)

  COV  =    0.1531  -0.0273   0.0031
           -0.0273   0.0059   0.0002
            0.0031   0.0002   0.0335

  CORR =    1.0000  -0.9085   0.0432
           -0.9085   1.0000   0.0159
            0.0432   0.0159   1.0000
Properties of the sampling distribution (cont.)
Methods to obtain the VC-matrix (or standard errors) for a set of parameters
 – Analytical formulas
 – Bootstrap
 – The inverse of the Hessian matrix
Parameter variances analytically
For many common situations the SE and VC-matrix of a set of parameters can be calculated with analytical formulas
Standard error of the sample mean: SE(x̄) = s / √n
Standard error of the estimated binomial probability: SE(p̂) = √( p̂(1 − p̂) / n )
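As an illustration, a minimal R sketch of both formulas (the sample x and the binomial counts n and k below are made-up example values):

  x <- rnorm(25, mean = 4, sd = 2)        # hypothetical numeric sample
  se_mean <- sd(x) / sqrt(length(x))      # SE of the sample mean: s / sqrt(n)

  n <- 50; k <- 12                        # hypothetical binomial data: k successes out of n
  p_hat <- k / n                          # estimated probability
  se_p <- sqrt(p_hat * (1 - p_hat) / n)   # SE of the estimated binomial probability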
Bootstrap
The bootstrap is a general and common resampling method
Used to simulate the sampling distribution
Information in the sample itself is used to mimic the original sampling procedure
 – Non-parametric bootstrap — sampling with replacement
 – Parametric bootstrap — simulation based on the parameter estimates
The procedure is repeated B times (e.g. B = 1000)
To make inference from the bootstrapped estimates (see the sketch below)
 – Sample standard deviation = bootstrap estimate of SE
 – Sample VC-matrix = bootstrap estimate of the VC-matrix
 – The difference between the bootstrap mean and the original estimate is an estimate of bias
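A minimal R sketch of a non-parametric bootstrap of the sample mean (the data vector x is a made-up example; base R only):

  x <- rnorm(30, mean = 4, sd = 2)                             # hypothetical sample
  B <- 1000                                                    # number of bootstrap replicates
  boot_means <- replicate(B, mean(sample(x, replace = TRUE)))  # resample with replacement, recompute the estimate
  se_boot <- sd(boot_means)                                    # bootstrap estimate of the SE
  bias_boot <- mean(boot_means) - mean(x)                      # bootstrap estimate of bias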
VC-matrix from the Hessian
The Hessian matrix (H)
 – The matrix of second derivatives of the (multivariate) negative log-likelihood at the ML-estimate
 – Typically given as an output by software for numerical optimization
The inverse of the Hessian is an estimate of the parameters’ variance–covariance matrix
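A minimal R sketch using optim() for a normal model with parameters mean and log-sd (the data vector y is a made-up example):

  y <- rnorm(50, mean = 4, sd = 2)                      # hypothetical data
  negll <- function(par, y)                             # negative log-likelihood of (mean, log-sd)
    -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))
  fit <- optim(c(0, 0), negll, y = y, hessian = TRUE)   # numerical ML fit, Hessian returned at the optimum
  vc <- solve(fit$hessian)                              # inverse Hessian = estimated VC-matrix
  se <- sqrt(diag(vc))                                  # standard errors of the parameters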
Confidence interval (CI)
A frequentist interval estimate of one or several parameters
A fraction α of all correctly produced CIs will fail to include the true parameter value
 – Trust your 95% CI and take the risk α = 0.05
NB! Should not be confused with Bayesian credible intervals
 – CIs should not be interpreted as containing the parameter with 95% probability
 – The CI is based on the sampling distribution, not on an estimated probability distribution for the parameter of interest
CI based on the central limit theorem
The sum or mean of many random values is approximately normally distributed
 – Actually t-distributed, with df depending on sample size and model complexity
 – Might matter with small sample sizes
As a rule of thumb, an arbitrary parameter estimate ± 2*SE produces an approximate 95% confidence interval (see the sketch below)
 – With infinitely many observations ± 1.96*SE
CI from profile likelihood
The profile deviance
 – The change in −2*log-likelihood, in comparison to the ML-estimate
 – Asymptotically χ²-distributed (assuming infinite sample size)
Confidence intervals can be obtained as the range around the ML-estimate for which the profile deviance stays below a critical level (a sketch follows below)
 – The 1 − α quantile of the χ²-distribution
 – One parameter -> df = 1 (e.g. 3.841 for α = 0.05)
 – k-dimensional profile deviance -> df = k
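A minimal R sketch for a single binomial probability (the counts n and k are made-up example values):

  n <- 40; k <- 7                                       # hypothetical binomial data
  negll <- function(p) -dbinom(k, n, p, log = TRUE)     # negative log-likelihood of p
  p_hat <- k / n                                        # ML-estimate
  p_grid <- seq(0.001, 0.999, by = 0.001)               # grid of candidate values
  dev <- 2 * (sapply(p_grid, negll) - negll(p_hat))     # profile deviance
  ci95 <- range(p_grid[dev < qchisq(0.95, df = 1)])     # values where the deviance stays below 3.841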
95% CI from profile deviance
[Figure: profile deviance (−2*LL) as a function of the parameter value; the 95% CI spans the values where the curve stays below F_min + 3.841, where F_min is the minimum]
2-D confidence regions
[Figure: joint confidence regions for parameters a and b; 95% region where the deviance is below χ²(df = 2) = 5.992, 99% region where it is below χ²(df = 2) = 9.201]
CI by bootstrapping
A 100*(1 − α)% CI for a parameter can be calculated from the sampling distribution
 – The α/2 and 1 − α/2 quantiles (e.g. 0.025 and 0.975 with α = 0.05)
In bootstrapping, simply use the sample quantiles of the simulated values (a sketch follows below)
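A minimal R sketch, continuing the non-parametric bootstrap of the sample mean from above (the data vector x is again a made-up example):

  x <- rnorm(30, mean = 4, sd = 2)                                 # hypothetical sample
  boot_means <- replicate(1000, mean(sample(x, replace = TRUE)))   # bootstrapped estimates
  ci95 <- quantile(boot_means, probs = c(0.025, 0.975))            # percentile bootstrap 95% CI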
Exercises
Data: The prevalence of an infectious disease in a human population is investigated. The infection is recorded with 100% detection efficiency. In a sample of N = 80 humans, X = 18 infections were found.
Model: Assume that infection (x = 0 or 1) of a host individual is an independent Bernoulli trial with probability p_i, such that the probability of infection is constant over all hosts. (This equals a logistic regression with an intercept only. Host-specific explanatory variables, such as age, condition, etc., could be used to refine the model of p_i.)
Do the following in R:
a) Calculate and plot the profile (log-)likelihood of the infection probability p
b) What is the maximum likelihood estimate of p (called p̂)?
c) Construct 95% and 99% confidence intervals for p based on the profile likelihood
d) Calculate the analytic SE of p̂
e) Construct a symmetric 95% confidence interval for p based on the central limit theorem and the SE obtained in the previous exercise
f) Simulate and plot the sampling distribution of p̂ by parametric bootstrapping (B = 10000)
g) Calculate the bootstrap SE of p̂
h) Construct a 95% confidence interval for p based on the bootstrap
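A possible starting point (a sketch covering only the data and the log-likelihood; the remaining parts are left as the exercise):

  N <- 80; X <- 18                                    # data from the exercise
  loglik <- function(p) dbinom(X, N, p, log = TRUE)   # binomial log-likelihood of p
  p_grid <- seq(0.01, 0.99, by = 0.001)               # grid of candidate values of p
  plot(p_grid, sapply(p_grid, loglik), type = "l",
       xlab = "p", ylab = "log-likelihood")           # exercise (a)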