When we free ourselves of desire, we will know serenity and freedom.
Inferences about Population Means (Sections 5.1–5.7)
Estimation
Statistical tests: z test and t test
Sample size selection
Estimation
To estimate a numerical summary of the population (a parameter):
Point estimator: the same numerical summary computed from the sample; a statistic
Interval estimator: a "random" interval that includes the parameter most of the time
Confidence Interval
Ideally, a short interval with a high confidence level is preferred.
Estimation for μ
Point estimator: the sample mean x̄
Confidence interval:
Normal population with known σ, or a large sample: z interval, x̄ ± z_{α/2} σ/√n
Normal population with unknown σ: t interval, x̄ ± t_{α/2, n−1} s/√n
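A minimal sketch of both intervals in Python (not part of the text; the data and the "known" σ below are made up for illustration):

```python
# Sketch: z and t confidence intervals for a population mean (hypothetical data).
import numpy as np
from scipy import stats

sample = np.array([9.8, 10.2, 10.4, 9.9, 10.1, 10.3, 9.7, 10.0])  # hypothetical data
n = len(sample)
xbar = sample.mean()
s = sample.std(ddof=1)           # sample standard deviation
conf = 0.95

# z interval: normal population with known sigma (sigma assumed known here)
sigma = 0.25                     # assumed known population standard deviation
z_crit = stats.norm.ppf(1 - (1 - conf) / 2)
z_interval = (xbar - z_crit * sigma / np.sqrt(n),
              xbar + z_crit * sigma / np.sqrt(n))

# t interval: normal population with unknown sigma (uses s and n-1 degrees of freedom)
t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
t_interval = (xbar - t_crit * s / np.sqrt(n),
              xbar + t_crit * s / np.sqrt(n))

print("z interval:", z_interval)
print("t interval:", t_interval)
```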
One Population
Sample Size for Estimating μ
n = (z_{α/2} σ / E)², rounded up, where E is the largest tolerable error.
If σ is unknown, use s from prior data or an upper bound on σ.
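As an illustration, the formula can be evaluated directly; the values of σ, E, and α below are hypothetical:

```python
# Sketch: sample size needed to estimate mu to within E with confidence 1 - alpha,
# n = (z_{alpha/2} * sigma / E)^2, rounded up.
import math
from scipy import stats

sigma = 12.0      # known (or upper-bound) population standard deviation
E = 2.0           # largest tolerable error
alpha = 0.05

z = stats.norm.ppf(1 - alpha / 2)
n = math.ceil((z * sigma / E) ** 2)
print("required sample size:", n)   # 139 for these values
```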
The Logic of a Hypothesis Test
"Assume Ho is a possible truth until proven false"
Analogous to "presumed innocent until proven guilty," the logic of the US judicial system
Steps in a Hypothesis Test
1. Set up the null (Ho) and alternative (Ha) hypotheses
2. Find an appropriate test statistic (T.S.)
3. Find the rejection region (R.R.)
4. Reject Ho if the observed test statistic falls in the R.R.
5. Report the result in the context of the situation
Determine Ho and Ha
The hypothesis with "=" must be Ho.
The hypothesis we favor (called the research hypothesis) goes in Ha, if possible.
E.g., Example 5.7 (p. 238)
Types of Errors

                      H0 true                      H1 true
We accept H0          Good! (Correct!)             Type II error, or "β error"
We reject H0          Type I error, or "α error"   Good! (Correct!)

The Type I error rate is controlled at a given level, called the significance level or α level.
Z Test
For normal populations or large samples (n > 30):
Z = (x̄ − μ0) / (σ/√n)
The computed value of Z is denoted by Z*.
Types of Tests
Right-tailed test, Ha: μ > μ0; reject Ho if Z* > z_α
Left-tailed test, Ha: μ < μ0; reject Ho if Z* < −z_α
Two-tailed test, Ha: μ ≠ μ0; reject Ho if |Z*| > z_{α/2}
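A Python sketch of the Z test statistic and the three rejection rules; the summary numbers are made up, not the textbook's example:

```python
# Sketch: one-sample z test statistic and rejection regions for the three alternatives.
import numpy as np
from scipy import stats

xbar, mu0, sigma, n = 52.1, 50.0, 8.0, 45   # hypothetical summary values
alpha = 0.05

z_star = (xbar - mu0) / (sigma / np.sqrt(n))

z_a = stats.norm.ppf(1 - alpha)        # critical value for one-tailed tests
z_a2 = stats.norm.ppf(1 - alpha / 2)   # critical value for the two-tailed test

reject_right = z_star > z_a            # Ha: mu > mu0
reject_left = z_star < -z_a            # Ha: mu < mu0
reject_two = abs(z_star) > z_a2        # Ha: mu != mu0

print(f"Z* = {z_star:.3f}")
print("reject (right-tailed):", reject_right)
print("reject (left-tailed): ", reject_left)
print("reject (two-tailed):  ", reject_two)
```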
Power of a Test
Power = P(reject Ho when Ha is true) = 1 − β
Example 5.7 revisited (p. 240)
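A small sketch of the power calculation for a right-tailed z test; the values are hypothetical, not Example 5.7's numbers:

```python
# Sketch: power of a right-tailed z test, P(reject H0 | mu = mu_a).
import numpy as np
from scipy import stats

mu0, mu_a, sigma, n, alpha = 50.0, 53.0, 8.0, 45, 0.05  # hypothetical values

z_alpha = stats.norm.ppf(1 - alpha)
shift = (mu_a - mu0) / (sigma / np.sqrt(n))      # standardized distance from mu0 to mu_a
power = 1 - stats.norm.cdf(z_alpha - shift)      # beta = 1 - power
print(f"power = {power:.3f}")
```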
Sample Size for Testing μ
The Type I and Type II error rates are controlled at α and β respectively, and Δ is the maximum tolerable error:
One-tailed tests: n = σ² (z_α + z_β)² / Δ²
Two-tailed tests: n = σ² (z_{α/2} + z_β)² / Δ²
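For illustration, the two formulas evaluated in Python with hypothetical σ, α, β, and Δ:

```python
# Sketch: sample sizes from the one-tailed and two-tailed formulas above.
import math
from scipy import stats

sigma, alpha, beta, Delta = 8.0, 0.05, 0.10, 3.0   # hypothetical values

z_a = stats.norm.ppf(1 - alpha)        # one-tailed critical value
z_a2 = stats.norm.ppf(1 - alpha / 2)   # two-tailed critical value
z_b = stats.norm.ppf(1 - beta)

n_one = math.ceil(sigma**2 * (z_a + z_b)**2 / Delta**2)    # one-tailed
n_two = math.ceil(sigma**2 * (z_a2 + z_b)**2 / Delta**2)   # two-tailed
print("one-tailed n:", n_one, " two-tailed n:", n_two)
```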
P-value (Observed Significance Level)
The p-value is the probability of seeing a result as far from Ho as (or farther than) what we observed, in the direction of H1, given that Ho is true; equivalently, it is the smallest α level at which Ho would be rejected.
The p-value is computed by assuming Ho is true and finding the probability of a test statistic as extreme as (or more extreme than) the one observed, in the direction of H1.
The smaller the p-value, the less likely the observed result would occur if Ho were true; a smaller p-value means stronger evidence against Ho.
Computing the p-Value for the Z-Test
Right-tailed test (Ha: μ > μ0): p-value = P(Z > z*)
Left-tailed test (Ha: μ < μ0): p-value = P(Z < z*)
Two-tailed test (Ha: μ ≠ μ0): p-value = P(|Z| > |z*|) = 2 × P(Z > |z*|)
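A short sketch computing all three p-values from an observed z* (the value of z* is made up):

```python
# Sketch: p-values for the three alternatives, given an observed z*.
from scipy import stats

z_star = 1.76   # hypothetical observed test statistic

p_right = 1 - stats.norm.cdf(z_star)            # Ha: mu > mu0
p_left = stats.norm.cdf(z_star)                 # Ha: mu < mu0
p_two = 2 * (1 - stats.norm.cdf(abs(z_star)))   # Ha: mu != mu0
print(p_right, p_left, p_two)
```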
Hypothesis Test Using the p-Value
1. Set up the null (Ho) and alternative (H1) hypotheses
2. Find an appropriate test statistic (T.S.)
3. Find the p-value
4. Reject Ho if the p-value < α
5. Report the result in the context of the situation
Example 5.7 (p. 238)
Redo it using the p-value approach.
t Test
For normal populations with unknown σ:
t = (x̄ − μ0) / (s/√n), the same formula as Z but with σ replaced by s; degrees of freedom = n − 1
E.g., revisit Example 5.7
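A minimal sketch of a one-sample t test on hypothetical data, using SciPy rather than the textbook's hand calculation:

```python
# Sketch: one-sample t test of H0: mu = mu0 on hypothetical data.
import numpy as np
from scipy import stats

sample = np.array([4.1, 3.8, 4.4, 4.0, 4.3, 3.9, 4.2, 4.5])  # hypothetical data
mu0 = 4.0

t_star, p_two_sided = stats.ttest_1samp(sample, popmean=mu0)
print(f"t* = {t_star:.3f}, two-sided p-value = {p_two_sided:.3f}")

# For a one-sided alternative, recent SciPy versions accept alternative='greater'
# or 'less'; otherwise halve the two-sided p-value when t* lies in Ha's direction.
```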
One Population
Inferences About the Mean When "Beyond the Scope"
When the population is nonnormal and n is small, inferences about μ can be made as follows:
1). Use bootstrap methods to simulate the sampling distribution of the t test statistic
2). Use the simulated distribution to find an (approximate) C.I. and p-value
Introduction to Bootstrap Methods
How to simulate the sampling distribution of a given statistic, say t, based on a given sample of size n:
1). Pretend the original sample is the entire population
2). Select a random sample of size n, with replacement, from the original sample (now the population); this is called a bootstrap sample
3). Calculate the t value of the bootstrap sample, t*
4). Repeat steps 2 and 3 many times (1,000 or more), say B times
5). Use the obtained t* values as an approximation to the sampling distribution
Minitab steps for obtaining bootstrap samples (p. 264)
Examples 5.18 and 5.19 (pp. 261–263)
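A rough Python sketch of the bootstrap procedure above, in place of the Minitab steps; the data, μ0, and B are hypothetical:

```python
# Sketch: bootstrap approximation to the sampling distribution of the t statistic,
# used for an approximate one-sided p-value.
import numpy as np

rng = np.random.default_rng(1)
sample = np.array([3.1, 7.4, 2.8, 9.6, 4.2, 5.1, 12.3, 3.7, 6.0, 2.5])  # small, skewed
mu0 = 4.0
n = len(sample)
B = 2000

def t_stat(x, mu):
    return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(n))

t_obs = t_stat(sample, mu0)

# Resample from the original sample (treated as the population), centering each
# bootstrap t statistic at the original sample mean so it mimics the null distribution.
t_boot = np.empty(B)
for b in range(B):
    resample = rng.choice(sample, size=n, replace=True)
    t_boot[b] = t_stat(resample, sample.mean())

# Approximate p-value for Ha: mu > mu0
p_value = np.mean(t_boot >= t_obs)
print(f"t_obs = {t_obs:.3f}, approximate bootstrap p-value = {p_value:.3f}")
```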