Confidence Intervals Chapter 11

What’s a 95% confidence interval? Roughly, it is an interval that we are 95% confident contains the parameter of interest. More carefully, it is the outcome of a random interval, where the random interval contains the parameter of interest with probability 95%. Note that it is the random interval that contains the parameter of interest with probability 95%; the outcome of the random interval either does or does not contain it. If we were to repeat the experiment many times independently, creating a 95% confidence interval each time, about 95% of these confidence intervals would contain the true value of the parameter.

Outline The basic steps to create any CI; one-sided CIs; equal-tailed CIs; pivotal quantities; location and scale parameters; CIs for a function of a parameter; approximate CIs.

The basic steps to create any CI Step 1: Write down a probability statement involving the parameter of interest. Step 2: Isolate the parameter in the probability statement. Step 3: Write down the random interval. This is an interval estimator. Step 4: Write down the outcome of the random interval. This is an interval estimate, but we’ll just call it a confidence interval like everyone else.

Example 11.1.1 Data collected from an EXP(μ) population. Outcome of the sample mean is 37. Find a 90% equal-tailed CI for μ. Step 1: Write down a probability statement involving the parameter of interest. Step 2: Isolate the parameter in the probability statement. Step 3: Write down the random interval. This is an interval estimator. Step 4: Write down the outcome of the random interval. This is an interval estimate, but we’ll just call it a confidence interval like everyone else.
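
Below is a minimal Python sketch of the four steps for this example. The sample size is not stated on the slide, so n = 20 is an assumption, as is the use of numpy/scipy; EXP(μ) is taken to mean an exponential distribution with mean μ, for which 2nX̄/μ ~ χ²(2n) is the natural pivotal quantity.

```python
# Sketch of the four steps for Example 11.1.1, assuming EXP(mu) is an
# exponential distribution with mean mu and assuming a sample size of
# n = 20 (the slide gives only the observed sample mean, 37).
from scipy.stats import chi2

n, xbar, conf = 20, 37.0, 0.90
alpha = 1 - conf

# Step 1: probability statement.  If X1,...,Xn are iid EXP(mu), then the
# pivotal quantity 2*n*Xbar/mu ~ chi-square with 2n degrees of freedom, so
#   P( chi2_{alpha/2, 2n} <= 2*n*Xbar/mu <= chi2_{1-alpha/2, 2n} ) = 0.90.
lo_q = chi2.ppf(alpha / 2, df=2 * n)
hi_q = chi2.ppf(1 - alpha / 2, df=2 * n)

# Steps 2-3: isolate mu to get the random interval
#   ( 2*n*Xbar / chi2_{1-alpha/2, 2n} , 2*n*Xbar / chi2_{alpha/2, 2n} ).
# Step 4: plug in the observed sample mean to get the interval estimate.
lower = 2 * n * xbar / hi_q
upper = 2 * n * xbar / lo_q
print(f"90% equal-tailed CI for mu: ({lower:.1f}, {upper:.1f})")
```

With the assumed n = 20 this works out to roughly (26.5, 55.8); a larger sample size would shrink the interval.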

A few variations: Repeat to find a 90% upper bound on μ. Why might a person want this? Repeat again to find a 90% lower bound on μ. What about a funky, non-equal tail, two-sided CI? Is there a ‘best’ confidence interval? What would that mean?
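
Continuing the same assumed setup (n = 20, sample mean 37), the one-sided 90% bounds simply put all 10% of the tail probability on one side of the same chi-square pivot. An upper bound is natural when only large values of μ are a concern, a lower bound when only small values are.

```python
# Continuing the sketch above (same assumed n and sample mean): one-sided
# 90% bounds put all 10% of tail probability on one side of the same pivot.
from scipy.stats import chi2

n, xbar = 20, 37.0
upper_bound = 2 * n * xbar / chi2.ppf(0.10, df=2 * n)   # mu <= this, with 90% confidence
lower_bound = 2 * n * xbar / chi2.ppf(0.90, df=2 * n)   # mu >= this, with 90% confidence
print(f"90% upper bound for mu: {upper_bound:.1f}")
print(f"90% lower bound for mu: {lower_bound:.1f}")
```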

Pivotal quantity A pivotal quantity is a function of the random sample and the parameter of interest whose distribution does not depend on anything unknown, in particular not on the parameter of interest. The first step in deriving a CI is to write down a probability statement. It is super convenient if this involves a pivotal quantity. Example: Pivotal quantity based on a random sample from a normal population.
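
A quick way to see what "does not depend on anything unknown" means is to simulate the normal-sample pivotal quantity under two very different parameter settings and check that its quantiles agree. This is only an illustrative sketch and assumes numpy is available.

```python
# A minimal simulation check that T = (Xbar - mu) / (S / sqrt(n)) is pivotal:
# its distribution should be the same (namely t with n-1 df) no matter which
# mu and sigma generated the data.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 15, 100_000

def simulate_T(mu, sigma):
    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    return (xbar - mu) / (s / np.sqrt(n))

# Two very different parameter settings give (up to simulation noise) the
# same quantiles for T.
for mu, sigma in [(0.0, 1.0), (100.0, 25.0)]:
    T = simulate_T(mu, sigma)
    print(mu, sigma, np.quantile(T, [0.05, 0.5, 0.95]).round(3))
```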

Finding pivotal quantities A location parameter is one that shifts the ‘location’ of the distribution, i.e. f(x, θ) = f0(x − θ). Example: Consider the density function for EXP(1, η): f(x; η) = e^{-(x-η)} 1{x > η}. Theorem: If the Xi are i.i.d. f(x, θ), θ is a location parameter, and MLE denotes a maximum likelihood estimator of θ, then MLE − θ is a pivotal quantity for θ.

Examples of finding pivotal quantities Example 1: Find a CI for μ based on a random sample from N(μ, 1). Example 2: Find a CI for η based on a random sample from EXP(1, η). Example 3: Find a CI for σ2 based on a random sample from N(0, σ2). Can it be done?
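
As one concrete illustration of the location-parameter theorem, here is a sketch for Example 2. The MLE of η in EXP(1, η) is the sample minimum X(1), and X(1) − η is exponential with rate n, so the pivot can be inverted directly. The true η, n = 30, and the simulated data are illustrative assumptions.

```python
# Sketch for Example 2: a 90% equal-tailed CI for eta from an EXP(1, eta)
# sample.  The MLE of the location parameter eta is the sample minimum X_(1),
# and X_(1) - eta is exponential with rate n, so MLE - eta is pivotal.
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(1)
n, eta_true = 30, 5.0                          # assumed, for illustration
x = eta_true + rng.exponential(1.0, size=n)    # simulated EXP(1, eta) data

x_min = x.min()                                # MLE of eta
a = expon.ppf(0.05, scale=1.0 / n)             # 5th percentile of Exp(rate n)
b = expon.ppf(0.95, scale=1.0 / n)             # 95th percentile
ci = (x_min - b, x_min - a)                    # invert a <= X_(1) - eta <= b
print(f"90% CI for eta: ({ci[0]:.3f}, {ci[1]:.3f})")
```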

Finding pivotal quantities A scale parameter is one that rescales the distribution, i.e. f(x, θ) = (1/θ) f0(x/θ). Theorem: If the Xi are i.i.d. f(x, θ), θ is a scale parameter, and MLE denotes a maximum likelihood estimator of θ, then MLE/θ is a pivotal quantity for θ.

Examples of finding pivotal quantities Example 3: Find a CI for σ2 based on a random sample from N(0, σ2). Example 4: Find a CI for μ based on a random sample from N(μ, σ2). Can it be done?
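
A sketch for Example 3, using the scale-parameter pivot: the MLE of σ² from N(0, σ²) is ΣXi²/n, and ΣXi²/σ² ~ χ²(n). The 90% level, n = 25, and the simulated data are illustrative assumptions.

```python
# Sketch for Example 3: a 90% CI for sigma^2 from a N(0, sigma^2) sample,
# using the scale-parameter pivot n*MLE/sigma^2 = sum(X_i^2)/sigma^2 ~ chi2(n).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, sigma2_true = 25, 4.0
x = rng.normal(0.0, np.sqrt(sigma2_true), size=n)

ss = np.sum(x ** 2)                    # sum of squares = n * (MLE of sigma^2)
lo = ss / chi2.ppf(0.95, df=n)         # isolate sigma^2 in the statement
hi = ss / chi2.ppf(0.05, df=n)
print(f"90% CI for sigma^2: ({lo:.2f}, {hi:.2f})")
```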

Finding pivotal quantities Theorem: If the Xi are i.i.d. f(x, θ1, θ2), θ1 is a location parameter, θ2 is a scale parameter, and MLE1 and MLE2 are maximum likelihood estimators of θ1 and θ2, respectively, then (MLE1 − θ1)/MLE2 is a pivotal quantity for θ1, AND MLE2/θ2 is a pivotal quantity for θ2. Why is (MLE1 − θ1)/θ2 not pivotal for θ1?

Examples of finding pivotal quantities Example 4: Find a CI for μ based on a random sample from N(μ, σ2). Example 5: Find a CI for σ2 based on a random sample from N(μ, σ2).
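
A sketch covering Examples 4 and 5 together. The location-scale pivots from the theorem are, up to known constants, the familiar (X̄ − μ)/(S/√n) ~ t(n−1) and (n−1)S²/σ² ~ χ²(n−1), which is what the code below uses; the 90% level and the simulated data are assumptions.

```python
# Sketch for Examples 4 and 5: 90% CIs for mu and for sigma^2 from a single
# N(mu, sigma^2) sample, via the standard Student-t and chi-square pivots
# (the MLE of the scale is replaced by the usual S^2, which differs only by
# a known constant).  The sample below is simulated purely for illustration.
import numpy as np
from scipy.stats import t, chi2

rng = np.random.default_rng(3)
n = 20
x = rng.normal(10.0, 3.0, size=n)

xbar, s2 = x.mean(), x.var(ddof=1)

# CI for mu from (Xbar - mu)/(S/sqrt(n)) ~ t(n-1)
half = t.ppf(0.95, df=n - 1) * np.sqrt(s2 / n)
print(f"90% CI for mu:      ({xbar - half:.2f}, {xbar + half:.2f})")

# CI for sigma^2 from (n-1)S^2/sigma^2 ~ chi2(n-1)
lo = (n - 1) * s2 / chi2.ppf(0.95, df=n - 1)
hi = (n - 1) * s2 / chi2.ppf(0.05, df=n - 1)
print(f"90% CI for sigma^2: ({lo:.2f}, {hi:.2f})")
```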

Confidence interval for functions of parameters (3, 4) is a 95% CI for λ. Can I find a 95% CI for e^λ without doing hard work? What about e^(−λ)? (−5, 10) is a 99% CI for μ. Can I find a CI for μ² without doing hard work?
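
A small sketch of the "no hard work" idea: a strictly monotone function can be applied directly to the endpoints (swapping them if the function is decreasing), while a non-monotone function like μ² requires mapping the whole interval, which here gives a conservative interval.

```python
# Sketch: a strictly monotone function can be applied straight to the CI
# endpoints (swapping them if the function is decreasing); a non-monotone
# function like mu^2 needs the image of the whole interval instead.
import numpy as np

lam_lo, lam_hi = 3.0, 4.0                                            # 95% CI for lambda
print("95% CI for e^lambda: ", (np.exp(lam_lo), np.exp(lam_hi)))     # increasing
print("95% CI for e^-lambda:", (np.exp(-lam_hi), np.exp(-lam_lo)))   # decreasing: swap

# mu^2 is not monotone on (-5, 10); the image of that interval is [0, 100),
# which gives a (conservative, at least 99%) interval for mu^2.
mu_lo, mu_hi = -5.0, 10.0
print("CI for mu^2:", (0.0, max(mu_lo ** 2, mu_hi ** 2)))
```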

Approximate confidence intervals Example: Xi ~ i.i.d. BER(p), sample size = 50. Find an equal-tailed 90% CI for p. Is there a pivotal quantity? What about an asymptotically pivotal quantity?
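
A sketch of the approximate (Wald) interval built from the asymptotically pivotal quantity (p̂ − p)/√(p̂(1 − p̂)/n), which is approximately N(0, 1) by the CLT. The count of 32 successes out of 50 is an illustrative assumption.

```python
# Sketch: approximate (Wald) 90% CI for p from the asymptotically pivotal
# quantity (phat - p) / sqrt(phat * (1 - phat) / n), approximately N(0, 1).
import numpy as np
from scipy.stats import norm

n, successes = 50, 32
phat = successes / n
z = norm.ppf(0.95)                               # equal tails: 5% on each side
half = z * np.sqrt(phat * (1 - phat) / n)
print(f"approximate 90% CI for p: ({phat - half:.3f}, {phat + half:.3f})")
```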

Approximate confidence intervals Example: Xi ~ i.i.d. POI(λ), sample size = 50. Find an equal-tailed 90% CI for λ using an asymptotically pivotal quantity.
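
The same recipe for the Poisson case, using (X̄ − λ)/√(X̄/n), which is approximately N(0, 1); the observed sample mean of 3.2 is an illustrative assumption.

```python
# Sketch: approximate 90% CI for lambda from (Xbar - lambda) / sqrt(Xbar / n),
# an asymptotically pivotal quantity that is approximately N(0, 1).
import numpy as np
from scipy.stats import norm

n, xbar = 50, 3.2
half = norm.ppf(0.95) * np.sqrt(xbar / n)
print(f"approximate 90% CI for lambda: ({xbar - half:.3f}, {xbar + half:.3f})")
```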

IQ tested on 30 subjects on NZT and 40 different subjects on placebo. Example 1: Xi ~ i.i.d. N(μ1, 1), n = 30; Yk ~ i.i.d. N(μ2, 2), n = 40; Xi is independent of Yk for all i, k. Find a pivotal quantity for μ1 − μ2. Example 2: Xi ~ i.i.d. N(μ1, σ1²), n = 30; Yk ~ i.i.d. N(μ2, σ2²), n = 40. Find a pivotal quantity for σ1²/σ2².
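
A sketch for both independent-sample examples. The variances 1 and 2 and the sample sizes 30 and 40 come from the slide; the group means, the simulated data, and the 90% level are illustrative assumptions.

```python
# Sketch for the two independent-sample examples.
import numpy as np
from scipy.stats import norm, f

rng = np.random.default_rng(4)
x = rng.normal(102.0, np.sqrt(1.0), size=30)   # NZT group (illustrative mean)
y = rng.normal(100.0, np.sqrt(2.0), size=40)   # placebo group (illustrative mean)

# Example 1 (known variances):
#   (Xbar - Ybar - (mu1 - mu2)) / sqrt(1/30 + 2/40) ~ N(0, 1)
diff = x.mean() - y.mean()
se = np.sqrt(1.0 / 30 + 2.0 / 40)
z = norm.ppf(0.95)
print(f"90% CI for mu1 - mu2: ({diff - z * se:.2f}, {diff + z * se:.2f})")

# Example 2 (unknown variances):
#   (S1^2/sigma1^2) / (S2^2/sigma2^2) ~ F(29, 39),
# which inverts to a CI for the ratio sigma1^2 / sigma2^2.
ratio = x.var(ddof=1) / y.var(ddof=1)
lo = ratio / f.ppf(0.95, dfn=29, dfd=39)
hi = ratio / f.ppf(0.05, dfn=29, dfd=39)
print(f"90% CI for sigma1^2/sigma2^2: ({lo:.2f}, {hi:.2f})")
```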

IQ tested on 20 subjects on NZT and the same 20 subjects on placebo. Example 3: Xi ~ i.i.d. N(μ1, 1), n = 20; Yi ~ i.i.d. N(μ2, 1), n = 20; Cov(Xi, Yi) = 1/2. Xi and Yi are measures of IQ before and after NZT for subject i. Find a pivotal quantity for μ1 − μ2. Example 4: Xi ~ i.i.d. N(μ, σ²), n = 20. Find a pivotal quantity for μ.
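
A sketch for Example 3 of the paired design: working with the differences Di = Xi − Yi collapses this to a one-sample problem, since Var(Di) = 1 + 1 − 2·(1/2) = 1 is known here. The group means, the simulated data, and the 90% level are illustrative; Example 4 is the standard one-sample t pivot already sketched above.

```python
# Sketch for Example 3 (paired design): use the differences D_i = X_i - Y_i.
# Here Var(D_i) = 1 + 1 - 2*(1/2) = 1 is known, so
#   (Dbar - (mu1 - mu2)) / sqrt(1/n) ~ N(0, 1) is pivotal.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 20
cov = [[1.0, 0.5], [0.5, 1.0]]                              # Var = 1, Cov = 1/2
xy = rng.multivariate_normal([105.0, 100.0], cov, size=n)   # illustrative means
d = xy[:, 0] - xy[:, 1]                                     # per-subject differences

z = norm.ppf(0.95)
se = np.sqrt(1.0 / n)                                       # known Var(D_i) = 1
print(f"90% CI for mu1 - mu2: ({d.mean() - z * se:.2f}, {d.mean() + z * se:.2f})")
```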

Hard example (11.4.1) – no pivotal quantity. Xi ~ i.i.d. f(x; θ) = (1/θ²) e^{-(x-θ)/θ²} 1{x > θ}, n = 100. In the absence of a pivotal quantity, consider a sufficient statistic. The sample mean and the first order statistic are jointly sufficient. Let’s try using the first order statistic X(1:100). Find a 90% CI for θ. See lecture notes for details.
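
The lecture notes are not reproduced here, so the following is only one possible construction, sketched under the stated model: X(1) − θ is exponential with rate n/θ², so the equal-tailed probability statement can be inverted for θ > 0 by solving two quadratic equations. The observed value of X(1:100) used below is illustrative.

```python
# A hedged sketch of one possible construction (the lecture notes may do this
# differently).  Under the model, X_(1) - theta is exponential with rate
# n / theta^2, so the equal-tailed statement
#   P( theta + a*theta^2/n <= X_(1) <= theta + b*theta^2/n ) = 0.90,
# with a = -ln(0.95) and b = -ln(0.05), can be inverted for theta > 0 by
# solving two quadratic equations in theta.
import numpy as np

n = 100
x1_obs = 2.05                                  # illustrative first order statistic
a, b = -np.log(0.95), -np.log(0.05)

def solve_theta(c, x):
    # positive root of (c/n) * theta^2 + theta - x = 0
    return (-1.0 + np.sqrt(1.0 + 4.0 * c * x / n)) / (2.0 * c / n)

theta_lower = solve_theta(b, x1_obs)           # binds through the upper quantile curve
theta_upper = solve_theta(a, x1_obs)           # binds through the lower quantile curve
print(f"90% CI for theta: ({theta_lower:.3f}, {theta_upper:.3f})")
```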

Interpretation: This comes from the main BATE paper: https://www.cpccrn.org/documents/55_Dalton2017_PMID28328243.pdf