1
Review of Statistics
2
Estimation of the Population Mean Hypothesis Testing Confidence Intervals Comparing Means from Different Populations Scatterplots and Sample Correlation
3
Estimation of the Population Mean. One natural way to estimate the population mean, $\mu_Y$, is simply to compute the sample average $\bar{Y}$ from a sample of $n$ i.i.d. observations. This can also be motivated by the law of large numbers. But $\bar{Y}$ is not the only way to estimate $\mu_Y$. For example, the first observation $Y_1$ is another estimator of $\mu_Y$.
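A minimal Monte Carlo sketch of this point (the distribution parameters, sample size, and replication count are illustrative, not from the slides): both $\bar{Y}$ and $Y_1$ are centered at the true mean, but $\bar{Y}$'s sampling distribution is far more concentrated.

```python
# Compare the sample average Y-bar with the single-observation estimator Y_1.
import random
import statistics

random.seed(0)
MU, SIGMA, N, REPS = 5.0, 2.0, 100, 2000   # assumed simulation settings

ybar_draws, y1_draws = [], []
for _ in range(REPS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    ybar_draws.append(statistics.fmean(sample))  # sample average
    y1_draws.append(sample[0])                   # first observation only

# Both estimators are centered near the true mean (unbiasedness)...
print(statistics.fmean(ybar_draws), statistics.fmean(y1_draws))
# ...but Y-bar has far smaller sampling variance (efficiency).
print(statistics.variance(ybar_draws) < statistics.variance(y1_draws))
```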
4
Key Concept 3.1 [slide image of the Key Concept box; © 2003 Pearson Education, Inc.]
5
In general, we want an estimator that gets as close as possible to the unknown true value, at least in some average sense. In other words, we want the sampling distribution of an estimator to be as tightly centered around the unknown value as possible. This leads to three specific desirable characteristics of an estimator.
6
Three desirable characteristics of an estimator. Let $\hat\mu_Y$ denote some estimator of $\mu_Y$. Unbiasedness: $E(\hat\mu_Y) = \mu_Y$. Consistency: $\hat\mu_Y \xrightarrow{p} \mu_Y$. Efficiency: let $\tilde\mu_Y$ be another estimator of $\mu_Y$, and suppose both $\hat\mu_Y$ and $\tilde\mu_Y$ are unbiased; then $\hat\mu_Y$ is said to be more efficient than $\tilde\mu_Y$ if $\mathrm{Var}(\hat\mu_Y) < \mathrm{Var}(\tilde\mu_Y)$.
7
Properties of $\bar{Y}$. It can be shown that $E(\bar{Y}) = \mu_Y$ and (from the law of large numbers) $\bar{Y} \xrightarrow{p} \mu_Y$, so $\bar{Y}$ is both unbiased and consistent. But is $\bar{Y}$ efficient?
8
Examples of alternative estimators. Example 1: the first observation $Y_1$. Since $E(Y_1) = \mu_Y$, $Y_1$ is an unbiased estimator of $\mu_Y$. But $\mathrm{Var}(Y_1) = \sigma_Y^2$, whereas $\mathrm{Var}(\bar{Y}) = \sigma_Y^2/n$, so if $n \ge 2$, $\bar{Y}$ is more efficient than $Y_1$.
9
Example 2: $\tilde{Y} = \frac{1}{n}\left(\frac{1}{2}Y_1 + \frac{3}{2}Y_2 + \frac{1}{2}Y_3 + \cdots + \frac{3}{2}Y_n\right)$, where $n$ is assumed to be an even number. The mean of $\tilde{Y}$ is $\mu_Y$ and its variance is $\mathrm{Var}(\tilde{Y}) = 1.25\,\sigma_Y^2/n$. Thus $\tilde{Y}$ is unbiased and, because $\mathrm{Var}(\tilde{Y}) \to 0$ as $n \to \infty$, $\tilde{Y}$ is consistent. However, $\bar{Y}$ is more efficient than $\tilde{Y}$, since $\sigma_Y^2/n < 1.25\,\sigma_Y^2/n$.
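A Monte Carlo sketch of one such weighted estimator. The alternating weights 1/2 and 3/2 are an assumption used here for illustration (they yield a weighted average with variance $1.25\,\sigma_Y^2/n$); the simulation checks unbiasedness and compares variances with $\bar{Y}$.

```python
# Weighted estimator Y-tilde = (1/n)(0.5*Y1 + 1.5*Y2 + 0.5*Y3 + ... + 1.5*Yn):
# unbiased, but with variance about 1.25*sigma^2/n versus sigma^2/n for Y-bar.
import random
import statistics

random.seed(1)
MU, SIGMA, N, REPS = 0.0, 1.0, 50, 4000   # N must be even

def y_tilde(sample):
    weights = [0.5 if i % 2 == 0 else 1.5 for i in range(len(sample))]
    return sum(w * y for w, y in zip(weights, sample)) / len(sample)

tilde_draws, bar_draws = [], []
for _ in range(REPS):
    s = [random.gauss(MU, SIGMA) for _ in range(N)]
    tilde_draws.append(y_tilde(s))
    bar_draws.append(statistics.fmean(s))

print(statistics.fmean(tilde_draws))         # near 0: unbiased
print(statistics.variance(tilde_draws) * N)  # near 1.25 * sigma^2
print(statistics.variance(bar_draws) * N)    # near 1.00 * sigma^2
```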
10
In fact, $\bar{Y}$ is the most efficient estimator of $\mu_Y$ among all unbiased estimators that are weighted averages of $Y_1, \ldots, Y_n$. (A weighted average has nonrandom weights that sum to one, which is what makes all these estimators unbiased.)
11
Hypothesis Testing. The hypothesis testing problem (for the mean): make a provisional decision, based on the evidence at hand, whether a null hypothesis is true, or instead that some alternative hypothesis is true. That is, test $H_0: E(Y) = \mu_{Y,0}$ vs. $H_1: E(Y) \neq \mu_{Y,0}$.
12
Key Concept 3.5 [slide image of the Key Concept box]
13
Terminology: significance level, p-value, critical value; confidence interval, acceptance region, rejection region; size, power.
14
p-value = probability of drawing a statistic (e.g. $\bar{Y}$) at least as adverse to the null as the value actually computed with your data, assuming that the null hypothesis is true. The significance level of a test is a pre-specified probability of incorrectly rejecting the null when the null is true.
15
Calculating the p-value based on $\bar{Y}$: p-value $= \Pr_{H_0}\!\left[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\right]$, where $\bar{Y}^{act}$ is the value of $\bar{Y}$ actually observed. To compute the p-value, you need to know the sampling distribution of $\bar{Y}$. If $n$ is large, we can use the large-$n$ normal approximation.
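The large-$n$ normal approximation can be sketched in a few lines (the numbers in the example call are made up for illustration):

```python
# Two-sided p-value under the large-n normal approximation:
# p = 2 * Phi(-|(ybar_act - mu_0) / sigma_ybar|)
from math import sqrt
from statistics import NormalDist

def p_value(ybar_act, mu_0, sigma_ybar):
    """Two-sided p-value based on the standard normal cdf Phi."""
    z = abs(ybar_act - mu_0) / sigma_ybar
    return 2 * NormalDist().cdf(-z)

# e.g. testing mu_0 = 0 with ybar_act = 0.5, sigma_Y = 2, n = 100, so z = 2.5
print(p_value(0.5, 0.0, 2 / sqrt(100)))  # about 0.0124
```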
16
p-value $= 2\Phi\!\left(-\left|\dfrac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_{\bar{Y}}}\right|\right)$, where $\sigma_{\bar{Y}} = \sigma_Y/\sqrt{n}$ denotes the standard deviation of the distribution of $\bar{Y}$.
17
Calculating the p-value with $\sigma_Y$ known
18
Type I and Type II Error. [Figure: under the null distribution, the two rejection-region tails (red) each have area $\alpha/2$; under the alternative-hypothesis distribution, the acceptance region has probability $\beta$ (blue), the probability of a Type II error.]
19
Confidence Intervals. A 95% confidence interval for $\mu_Y$ is an interval that contains the true value of $\mu_Y$ in 95% of repeated samples. Digression: what is random here? The confidenceence interval is random: it will differ from one sample to the next. The population parameter, $\mu_Y$, is not random.
20
In practice, $\sigma_Y^2$ is unknown, so it must be estimated. The estimator of the variance of $Y$ is $s_Y^2 = \frac{1}{n-1}\sum_{i=1}^{n}(Y_i - \bar{Y})^2$. Fact: if $Y_1, \ldots, Y_n$ are i.i.d. and $E(Y^4) < \infty$, then $s_Y^2 \xrightarrow{p} \sigma_Y^2$. Why does the law of large numbers apply? Because $s_Y^2$ is (essentially) a sample average. Technical note: we assume $E(Y^4) < \infty$ because here the average is not of $Y_i$ but of its square.
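A sketch of the variance estimator with the $1/(n-1)$ degrees-of-freedom correction, written out explicitly next to the library equivalent (the data values are arbitrary illustrative numbers):

```python
# s_Y^2 = (1/(n-1)) * sum_i (Y_i - Ybar)^2
import statistics

def sample_variance(ys):
    n = len(ys)
    ybar = sum(ys) / n
    return sum((y - ybar) ** 2 for y in ys) / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # illustrative sample
print(sample_variance(data))          # 32/7, about 4.571
print(statistics.variance(data))      # stdlib uses the same (n-1) divisor
```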
23
Write $s_Y^2 = \frac{n}{n-1}\left[\frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu_Y)^2 - (\bar{Y} - \mu_Y)^2\right]$. For the first term, define $W_i = (Y_i - \mu_Y)^2$; then $E(W_i) = \sigma_Y^2$, and $W_1, \ldots, W_n$ are i.i.d. Thus $W_1, \ldots, W_n$ are i.i.d. and $\mathrm{Var}(W_i) < \infty$ (by the finite-fourth-moment assumption), so $\frac{1}{n}\sum_{i=1}^{n} W_i \xrightarrow{p} \sigma_Y^2$. For the second term, $(\bar{Y} - \mu_Y)^2 \xrightarrow{p} 0$ because $\bar{Y} \xrightarrow{p} \mu_Y$. Therefore, since $\frac{n}{n-1} \to 1$, $s_Y^2 \xrightarrow{p} \sigma_Y^2$.
24
Computing the p-value with $\sigma_Y^2$ estimated: p-value $\cong 2\Phi(-|t^{act}|)$, where $t = \dfrac{\bar{Y} - \mu_{Y,0}}{s_Y/\sqrt{n}}$ is the t-statistic.
25
The p-value and the significance level. With a prespecified significance level (e.g. 5%): reject $H_0$ if $|t| > 1.96$; equivalently, reject if $p \le 0.05$. The p-value is sometimes called the marginal significance level.
26
What happened to the t-table and the degrees of freedom? Digression: the Student t distribution. If $Y_i$, $i = 1, \ldots, n$, are i.i.d. $N(\mu_Y, \sigma_Y^2)$, then the t-statistic has the Student t-distribution with $n - 1$ degrees of freedom. The critical values of the Student t-distribution are tabulated in the back of all statistics books. Remember the recipe? (1) Compute the t-statistic. (2) Compute the degrees of freedom, $n - 1$. (3) Look up the 5% critical value. (4) If the t-statistic exceeds this critical value in absolute value, reject the null hypothesis.
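The recipe above can be sketched as follows. The data are hypothetical, and the tiny critical-value lookup is transcribed from standard 5% two-sided Student-t tables for a few degrees of freedom:

```python
# The t-table recipe: compute t, compute df = n - 1, look up the 5% critical
# value, reject H0 if |t| exceeds it.
from math import sqrt
from statistics import fmean, stdev

T_CRIT_5PCT = {5: 2.57, 10: 2.23, 20: 2.09, 30: 2.04, 60: 2.00}

def t_test_mean(ys, mu_0):
    n = len(ys)
    t = (fmean(ys) - mu_0) / (stdev(ys) / sqrt(n))
    df = n - 1
    # use the closest tabulated df at or below ours; fall back to the
    # large-n value 1.96 (only appropriate when df is actually large)
    usable = [d for d in T_CRIT_5PCT if d <= df]
    crit = T_CRIT_5PCT[max(usable)] if usable else 1.96
    return t, df, abs(t) > crit   # reject H0 if |t| exceeds the critical value

data = [5.1, 4.9, 5.0, 5.3, 4.8, 5.2]   # hypothetical measurements
print(t_test_mean(data, mu_0=5.0))      # small t, do not reject
```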
27
Comments on this recipe and the Student t-distribution. The theory of the t-distribution was one of the early triumphs of mathematical statistics. It is astounding, really: if $Y$ is i.i.d. normal, then you know the exact, finite-sample distribution of the t-statistic: it is the Student t. So you can construct confidence intervals (using the Student t critical value) that have exactly the right coverage rate, no matter what the sample size. But…
28
Comments on Student t distribution, ctd. If the sample size is moderate (several dozen) or large (hundreds or more), the difference between the t-distribution and N(0,1) critical values is negligible. Here are some 5% critical values for 2-sided tests (from standard t-tables): df = 10: 2.23; df = 20: 2.09; df = 30: 2.04; df = 60: 2.00; N(0,1): 1.96.
29
Comments on Student t distribution, ctd. So the Student-t distribution is only relevant when the sample size is very small; but in that case, for it to be correct, you must be sure that the population distribution of $Y$ is normal. In economic data, the normality assumption is rarely credible. Here are the distributions of some economic data. Do you think earnings are normally distributed? Suppose you have a sample of $n = 10$ observations from one of these distributions: would you feel comfortable using the Student t distribution?
30
[Figures: empirical distributions of some economic data]
31
Comments on Student t distribution, ctd. Consider the t-statistic testing the hypothesis that two means (groups $s$, $l$) are equal: $t = \dfrac{\bar{Y}_s - \bar{Y}_l}{\sqrt{\frac{s_s^2}{n_s} + \frac{s_l^2}{n_l}}}$. Even if the population distribution of $Y$ in the two groups is normal, this statistic does not have a Student t distribution! There is a statistic testing this hypothesis that has an exact Student t distribution, the “pooled variance” t-statistic (see SW Section 3.6); however, the pooled variance t-statistic is only valid if the variances of the normal distributions are the same in the two groups. Would you expect this to be true, say, for men's vs. women's wages?
32
The Student-t distribution: summary. The assumption that $Y$ is distributed $N(\mu_Y, \sigma_Y^2)$ is rarely plausible in practice (income? number of children?). For $n > 30$, the t-distribution and N(0,1) are very close (as $n$ grows large, the $t_{n-1}$ distribution converges to $N(0,1)$). The t-distribution is an artifact from days when sample sizes were small and “computers” were people. For historical reasons, statistical software typically uses the t-distribution to compute p-values, but this is irrelevant when the sample size is moderate or large. For these reasons, in this class we will focus on the large-$n$ approximation given by the CLT.
33
Summary on the Student t-distribution. If $Y$ is distributed $N(\mu_Y, \sigma_Y^2)$, then the t-statistic has the Student t-distribution (tabulated in the back of all stats books). Some comments: For $n > 30$, the t-distribution and N(0,1) are very close. The assumption that $Y$ is distributed $N(\mu_Y, \sigma_Y^2)$ is rarely plausible in practice (income? number of children?). The t-distribution is a historical artifact from days when sample sizes were very small. In this class, we won't use the t distribution; we rely solely on the large-$n$ approximation given by the CLT.
34
Confidence Intervals. A 95% confidence interval for $\mu_Y$ is an interval that contains the true value of $\mu_Y$ in 95% of repeated samples. Digression: what is random here? The confidence interval is random: it will differ from one sample to the next. The population parameter, $\mu_Y$, is not random.
35
A 95% confidence interval can always be constructed as the set of values of $\mu_Y$ not rejected by a hypothesis test with a 5% significance level: $\left\{\mu_Y : \left|\dfrac{\bar{Y} - \mu_Y}{s_Y/\sqrt{n}}\right| \le 1.96\right\}$, which is the interval $\bar{Y} \pm 1.96\, s_Y/\sqrt{n}$.
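A sketch of this construction, computing $\bar{Y} \pm 1.96\,s_Y/\sqrt{n}$ directly (the data values are illustrative, not from the slides):

```python
# 95% CI for mu_Y under the large-n normal approximation: the set of
# hypothesized means a 5%-level test would not reject.
from math import sqrt
from statistics import fmean, stdev

def ci95(ys):
    n = len(ys)
    se = stdev(ys) / sqrt(n)     # SE(Ybar) = s_Y / sqrt(n)
    ybar = fmean(ys)
    return ybar - 1.96 * se, ybar + 1.96 * se

data = [12.0, 10.5, 11.2, 13.1, 9.8, 12.4, 11.7, 10.9]   # illustrative sample
lo, hi = ci95(data)
print(lo, hi)   # interval centered at the sample mean 11.45
```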
36
This confidence interval relies on the large-$n$ results that $\bar{Y}$ is approximately distributed $N(\mu_Y, \sigma_Y^2/n)$ and that $s_Y^2 \xrightarrow{p} \sigma_Y^2$.
37
Summary: from the assumptions of (1) simple random sampling of a population, that is, $Y_i$, $i = 1, \ldots, n$, are i.i.d., and (2) $0 < E(Y^4) < \infty$, we developed, for large samples (large $n$): the theory of estimation (sampling distribution of $\bar{Y}$); the theory of hypothesis testing (large-$n$ distribution of the t-statistic and computation of the p-value); and the theory of confidence intervals (constructed by inverting the test statistic). Are assumptions (1) and (2) plausible in practice? Yes.
38
Tests for Differences between Two Means. Let $\mu_w$ be the mean hourly earnings in the population of women recently graduated from college, and let $\mu_m$ be the population mean for recently graduated men. Consider the null hypothesis that earnings for these two populations differ by a certain amount $d_0$. Then $H_0: \mu_m - \mu_w = d_0$ vs. $H_1: \mu_m - \mu_w \neq d_0$.
39
Replacing the population variances by sample variances, we have the standard error $SE(\bar{Y}_m - \bar{Y}_w) = \sqrt{\dfrac{s_m^2}{n_m} + \dfrac{s_w^2}{n_w}}$, and the t-statistic is $t = \dfrac{(\bar{Y}_m - \bar{Y}_w) - d_0}{SE(\bar{Y}_m - \bar{Y}_w)}$. If both $n_m$ and $n_w$ are large, the t-statistic has a standard normal distribution.
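A sketch of this difference-in-means t-statistic; the two group samples are made-up numbers standing in for hourly earnings:

```python
# Large-sample test of H0: mu_m - mu_w = d0 using
# t = ((Ybar_m - Ybar_w) - d0) / sqrt(s_m^2/n_m + s_w^2/n_w)
from math import sqrt
from statistics import fmean, variance

def diff_means_t(ys_m, ys_w, d0=0.0):
    se = sqrt(variance(ys_m) / len(ys_m) + variance(ys_w) / len(ys_w))
    return (fmean(ys_m) - fmean(ys_w) - d0) / se

men   = [20.1, 22.3, 19.8, 24.0, 21.5, 23.2]   # hypothetical hourly earnings
women = [18.9, 20.0, 19.5, 21.1, 18.2, 20.6]
t = diff_means_t(men, women)
print(t)   # compare |t| with 1.96 at the 5% level (large-n approximation)
```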
40
42
Summarizing the relationship between two variables: scatterplots.
43
The population covariance and correlation can be estimated by the sample covariance and sample correlation. The sample covariance is $s_{XY} = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})$. The sample correlation is $r_{XY} = \dfrac{s_{XY}}{s_X s_Y}$.
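These two formulas written out directly (the data pairs are illustrative, chosen to be roughly linear):

```python
# Sample covariance s_XY and sample correlation r_XY = s_XY / (s_X * s_Y).
from math import sqrt

def sample_cov(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)

def sample_corr(xs, ys):
    return sample_cov(xs, ys) / sqrt(sample_cov(xs, xs) * sample_cov(ys, ys))

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]    # roughly linear in xs
print(sample_cov(xs, ys), sample_corr(xs, ys))   # correlation near 1
```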
44
It can be shown that, under the assumptions that $(X_i, Y_i)$ are i.i.d. and that $X_i$ and $Y_i$ have finite fourth moments, $s_{XY} \xrightarrow{p} \sigma_{XY}$ and hence $r_{XY} \xrightarrow{p} \mathrm{corr}(X, Y)$.
46
Write $s_{XY} = \frac{n}{n-1}\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu_X)(Y_i - \mu_Y) - (\bar{X} - \mu_X)(\bar{Y} - \mu_Y)\right]$. It is easy to see that the second term converges in probability to zero because $\bar{X} - \mu_X \xrightarrow{p} 0$ and $\bar{Y} - \mu_Y \xrightarrow{p} 0$, so by Slutsky's theorem $(\bar{X} - \mu_X)(\bar{Y} - \mu_Y) \xrightarrow{p} 0$.
47
By the definition of covariance, we have $E[(X_i - \mu_X)(Y_i - \mu_Y)] = \sigma_{XY}$. To apply the law of large numbers to the first term, we need $\mathrm{Var}[(X_i - \mu_X)(Y_i - \mu_Y)] < \infty$, which is satisfied since $\mathrm{Var}[(X_i - \mu_X)(Y_i - \mu_Y)] \le E[(X_i - \mu_X)^2 (Y_i - \mu_Y)^2] \le \sqrt{E[(X_i - \mu_X)^4]\, E[(Y_i - \mu_Y)^4]} < \infty$.
48
The second inequality follows by applying the Cauchy-Schwarz inequality, and the last inequality follows because of the finite fourth moments of $(X_i, Y_i)$. The Cauchy-Schwarz inequality is $|E(UV)| \le \sqrt{E(U^2)\,E(V^2)}$.
49
Applying the law of large numbers, we have $\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu_X)(Y_i - \mu_Y) \xrightarrow{p} \sigma_{XY}$. Also, $\frac{n}{n-1} \to 1$; therefore $s_{XY} \xrightarrow{p} \sigma_{XY}$.
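A Monte Carlo sketch of this consistency result. The data-generating process is an assumption chosen so the population covariance is known exactly: with $X \sim N(0,1)$ and $Y = X + \varepsilon$, $\mathrm{cov}(X, Y) = \mathrm{Var}(X) = 1$.

```python
# Check that the sample covariance approaches the population covariance (= 1
# in this constructed example) as n grows.
import random

random.seed(2)

def sample_cov(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys)) / (n - 1)

for n in (50, 5000):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [x + random.gauss(0, 1) for x in xs]   # cov(X, Y) = Var(X) = 1
    print(n, sample_cov(xs, ys))                # closer to 1 for larger n
```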
50
Scatterplots and Sample Correlation