Analysis of Financial Data, Spring 2012. Lecture 2: Statistical Inference 2. Priyantha Wijayatunga, Department of Statistics, Umeå University. These materials are adapted from copyrighted lecture slides (© 2009 W.H. Freeman and Company) accompanying the book The Practice of Business Statistics: Using Data for Decisions, Second Edition, by Moore, McCabe, Duckworth and Alwan.

Using significance tests: Power and inference as a decision  How small a P-value?  Statistical significance vs. practical significance  Cautions for tests  The power of a test  Type I and II errors

How small a P is convincing? Factors often considered:  How plausible is H0? If it represents an assumption people have believed for years, strong evidence (a small P) is needed to persuade them.  What are the consequences of rejecting the null hypothesis (e.g., global warming, convicting a person for life with DNA evidence)?  Are you conducting a preliminary study? If so, you may want a larger α so that you will be less likely to miss an interesting result.

Some conventions:  We typically use the standards of our field of work.  There is no sharp border between “significant” and “insignificant,” only increasingly strong evidence as the P-value decreases.  There is no practical distinction between the P-values 0.049 and 0.051.  It makes no sense to treat α = 0.05 as a universal rule for what is significant.

Statistical significance and practical significance Statistical significance only says whether the effect observed is likely to be due to chance alone because of random sampling. Statistical significance may not be practically important. That’s because statistical significance doesn’t tell you about the magnitude of the effect, only that there is one. An effect could be too small to be relevant. And with a large enough sample size, significance can be reached even for the tiniest effect.  A drug to lower temperature is found to reproducibly lower patient temperature by 0.4°C (P-value < 0.01). But clinical benefits of temperature reduction appear only for a decrease of 1°C or more.

More cautions  Statistical inference is not valid for all sets of data.  Ask how the data were produced  Simple random sample?  Randomized experiment?  Do the data behave like independent observations on a Normal distribution?  Beware of searching for significance

Interpreting lack of significance  Consider this provocative title from the British Medical Journal: “Absence of evidence is not evidence of absence.”  Having no proof of who committed a murder does not imply that the murder was not committed. Failing to find statistical significance in results means not rejecting the null hypothesis. This is very different from actually accepting it. The sample size, for instance, could be too small to overcome large variability in the population. When comparing two populations, lack of significance does not imply that the two samples come from the same population. They could represent two very distinct populations with similar mathematical properties.

Interpreting effect size: It’s all about context There is no consensus on how big an effect has to be in order to be considered meaningful. In some cases, effects that may appear to be trivial can in reality be very important.  Example: Improving the format of a computerized test reduces the average response time by about 2 seconds. Although this effect is small, it is important since this is done millions of times a year. The cumulative time savings of using the better format is gigantic. Always think about the context. Try to plot your results, and compare them with a baseline or results from similar studies.

The power of a test The power of a test of hypothesis with fixed significance level α is the probability that the test will reject the null hypothesis when the alternative is true. In other words, power is the probability that the data gathered in an experiment will be sufficient to reject a wrong null hypothesis. Knowing the power of your test is important:  When designing your experiment: to select a sample size large enough to detect an effect of a magnitude you think is meaningful.  When a test found no significance: Check that your test would have had enough power to detect an effect of a magnitude you think is meaningful.

Test of hypothesis at significance level α = 5%: H0: µ = 0 versus Ha: µ > 0 Can an exercise program increase bone density? From previous studies, we assume that σ = 2 for the percent change in bone density and would consider a percent increase of 1 medically important. Is 25 subjects a large enough sample for this project? A significance level of 5% implies a lower tail of 95% and z = 1.645. Thus we reject H0 when x̄ ≥ 0 + 1.645 × σ/√n = 1.645 × 2/√25 ≈ 0.658: all sample averages larger than 0.658 will result in rejecting the null hypothesis.

What if the null hypothesis is wrong and the true population mean is 1? The power against the alternative µ = 1% is the probability that H0 will be rejected when in fact µ = 1%: P(x̄ ≥ 0.658 when µ = 1) = P(z ≥ (0.658 − 1)/0.4) = P(z ≥ −0.855) ≈ 0.80. So a sample size of 25 yields a power of about 80%. A test power of 80% or more is considered good statistical practice.
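These numbers are straightforward to reproduce in software. Below is a minimal sketch in Python with SciPy (my tooling assumption; the slides themselves use no code), with variable names of my own choosing:

```python
from scipy.stats import norm

sigma, n, alpha = 2.0, 25, 0.05
mu0, mu_alt = 0.0, 1.0            # null and alternative means
se = sigma / n ** 0.5             # sigma/sqrt(n) = 0.4

# Reject H0 when the sample average exceeds this cutoff
cutoff = mu0 + norm.ppf(1 - alpha) * se    # 1.645 * 0.4 = 0.658

# Power: probability of exceeding the cutoff when mu = 1
power = norm.sf(cutoff, loc=mu_alt, scale=se)
print(round(cutoff, 3), round(power, 2))   # 0.658 0.8
```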

Increasing the power Increase α: more conservative significance levels (lower α) yield lower power. Thus, using an α of 0.01 will result in lower power than using an α of 0.05. The size of the effect is an important factor in determining power: larger effects are easier to detect. Increasing the sample size decreases the spread of the sampling distribution and therefore increases power. But there is a tradeoff between the gain in power and the time and cost of testing a larger sample. Decrease σ: a larger variance σ² implies a larger spread of the sampling distribution, σ/√n. Thus, the larger the variance, the lower the power. The variance is in part a property of the population, but it is possible to reduce it to some extent by carefully designing your study.
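To see these tradeoffs numerically, one can recompute the power of the bone-density example while varying one factor at a time; a sketch under the same assumptions as above (Python with SciPy assumed, function name mine):

```python
from scipy.stats import norm

def z_power(mu_alt, sigma=2.0, n=25, alpha=0.05, mu0=0.0):
    """Power of the one-sided z-test against the alternative mu_alt."""
    se = sigma / n ** 0.5
    cutoff = mu0 + norm.ppf(1 - alpha) * se
    return norm.sf(cutoff, loc=mu_alt, scale=se)

print(round(z_power(1.0), 2))              # baseline: ~0.80
print(round(z_power(1.0, alpha=0.01), 2))  # smaller alpha -> lower power (~0.57)
print(round(z_power(1.0, n=50), 2))        # larger n -> higher power (~0.97)
print(round(z_power(1.0, sigma=3.0), 2))   # larger sigma -> lower power (~0.51)
```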

Type I and II errors  A Type I error is made when we reject the null hypothesis and the null hypothesis is actually true (incorrectly rejecting a true H0). The probability of making a Type I error is the significance level α.  A Type II error is made when we fail to reject the null hypothesis and the null hypothesis is false (incorrectly keeping a false H0). The probability of making a Type II error is labeled β. The power of a test is 1 − β.

Running a test of significance is a balancing act between the chance α of making a Type I error and the chance β of making a Type II error. Reducing α reduces the power of a test and thus increases β. It might be tempting to emphasize greater power (the more the better).  However, with “too much power,” trivial effects become highly significant.  A Type II error is not definitive, since a failure to reject the null hypothesis does not imply that the null hypothesis is wrong.

Inference for the mean of a population  The t distributions  The one-sample t confidence interval  The one-sample t test  Matched pairs t procedures  Robustness  Power of the t test

Recall: The t distributions Suppose that an SRS of size n is drawn from an N(µ, σ) population.  When σ is known, the sampling distribution of x̄ is N(µ, σ/√n). Therefore z = (x̄ − µ)/(σ/√n) has the standard normal distribution N(0, 1).  When σ is unknown, it is estimated by the sample standard deviation s. Then the one-sample t statistic t = (x̄ − µ)/(s/√n) has a t distribution with n − 1 degrees of freedom.

The one-sample t-test A test of hypotheses requires a few steps: 1. Stating the null and alternative hypotheses (H0 versus Ha) 2. Deciding on a one-sided or two-sided test 3. Choosing a significance level α 4. Calculating t and its degrees of freedom 5. Finding the area under the curve with Table D 6. Stating the P-value and interpreting the result

The P-value is the probability, if H0 is true, of randomly drawing a sample like the one obtained or more extreme, in the direction of Ha. The P-value is calculated as the corresponding area under the curve, one-tailed for a one-sided Ha or two-tailed for a two-sided Ha.

Table D How to: The calculated value of t is 2.7. For df = 9 we look only in the corresponding row and find the two closest t values: 2.398 < t = 2.7 < 2.821, thus 0.02 > upper-tail p > 0.01. For a one-sided Ha, this is the P-value (between 0.01 and 0.02); for a two-sided Ha, the P-value is doubled (between 0.02 and 0.04).
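Table D can only bracket the P-value between two columns; software gives the exact tail area. A quick check (Python with SciPy assumed) that t = 2.7 with df = 9 indeed has a one-sided P-value between 0.01 and 0.02:

```python
from scipy.stats import t

p_one_sided = t.sf(2.7, df=9)      # upper-tail area beyond t = 2.7
print(round(p_one_sided, 4))       # ~0.012, so 0.01 < p < 0.02
print(round(2 * p_one_sided, 4))   # two-sided P-value, between 0.02 and 0.04
```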

Sweetening colas (continued) Is there evidence that storage results in sweetness loss for the new cola recipe at the 0.05 level of significance (α = 5%)? H0: µ = 0 versus Ha: µ > 0 (one-sided test). Taster data (sweetness loss): average x̄ = 1.02, standard deviation s = 1.196, degrees of freedom n − 1 = 9. t = x̄/(s/√n) = 1.02/(1.196/√10) ≈ 2.70. From Table D, 0.01 < p < 0.02; since p < α = 0.05, the result is significant. The t-test has a significant P-value: we reject H0. There is a significant loss of sweetness, on average, following storage.

Matched pairs t procedures Sometimes we want to compare treatments or conditions at the individual level. These situations produce two samples that are not independent — they are related to each other. The members of one sample are identical to, or matched (paired) with, the members of the other sample.  Example: Pre-test and post-test studies look at data collected on the same sample elements before and after some experiment is performed.  Example: Twin studies often try to sort out the influence of genetic factors by comparing a variable between sets of twins.  Example: Using people matched for age, sex, and education in social studies allows canceling out the effect of these potential lurking variables.

In these cases, we use the paired data to test the difference in the two population means. The variable studied becomes Xdifference = (X1 − X2), and H0: µdifference = 0; Ha: µdifference > 0 (or < 0, or ≠ 0). Conceptually, this is not different from tests on one population.

Sweetening colas (revisited) The sweetness loss due to storage was evaluated by 10 professional tasters (comparing the sweetness before and after storage). Taster / sweetness loss: 1: 2.0, 2: 0.4, 3: 0.7, 4: 2.0, 5: −0.4, 6: 2.2, 7: −1.3, 8: 1.2, 9: 1.1, 10: 2.3. We want to test if storage results in a loss of sweetness, thus: H0: µ = 0 versus Ha: µ > 0. Although the text didn’t mention it explicitly, this is a pre-/post-test design and the variable is the difference in cola sweetness before minus after storage. A matched pairs test of significance is indeed just like a one-sample test.
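As a check, this matched pairs test can be run as a one-sample t-test on the loss values. A sketch in Python with SciPy (assumed; the alternative argument requires SciPy ≥ 1.6), using the ten values listed above:

```python
from scipy.stats import ttest_1samp

# Sweetness losses for the 10 tasters, as given on the slide
loss = [2.0, 0.4, 0.7, 2.0, -0.4, 2.2, -1.3, 1.2, 1.1, 2.3]

# H0: mu = 0 versus Ha: mu > 0 (one-sided)
t_stat, p_value = ttest_1samp(loss, popmean=0, alternative="greater")
print(round(t_stat, 2), round(p_value, 3))   # ~2.70 and ~0.012
```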

Does lack of caffeine increase depression? Individuals diagnosed as caffeine-dependent are deprived of caffeine-rich foods and assigned to receive daily pills. Sometimes, the pills contain caffeine and other times they contain a placebo. Depression was assessed.  There are 2 data points for each subject, but we’ll only look at the difference.  The sample distribution appears appropriate for a t-test. 11 “difference” data points.

Does lack of caffeine increase depression? For each individual in the sample, we have calculated a difference in depression score (placebo minus caffeine). There were 11 “difference” points, thus df = n − 1 = 10. We calculate x̄ = 7.36 and s = 6.92. H0: µdifference = 0; Ha: µdifference > 0. t = 7.36/(6.92/√11) ≈ 3.53. For df = 10, Table D gives p < 0.005. Caffeine deprivation causes a significant increase in depression.
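With only the summary statistics reported on the slide, the t statistic and P-value can be reproduced by hand; a sketch (Python with SciPy assumed):

```python
from scipy.stats import t

xbar, s, n = 7.36, 6.92, 11          # summary statistics from the slide
se = s / n ** 0.5
t_stat = (xbar - 0) / se             # H0: mu_difference = 0
p_value = t.sf(t_stat, df=n - 1)     # one-sided, Ha: mu_difference > 0
print(round(t_stat, 2), round(p_value, 4))   # ~3.53, p ~ 0.003 (< 0.005)
```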

Robustness The t procedures are exactly correct when the population is distributed exactly normally. However, most real data are not exactly normal. The t procedures are robust to small deviations from normality; the results will not be affected too much. Factors that strongly matter:  Random sampling. The sample must be an SRS from the population.  Outliers and skewness. They strongly influence the mean and therefore the t procedures. However, their impact diminishes as the sample size gets larger because of the Central Limit Theorem. Specifically:  When n < 15, the data must be close to normal and without outliers.  When 15 ≤ n < 40, mild skewness is acceptable, but not outliers.  When n ≥ 40, the t statistic will be valid even with strong skewness.

Recall: Power of a test α = P[Type I error] = P[rejecting H0 when it is true] β = P[Type II error] = P[failing to reject H0 when it is false] The power of a test against a specific alternative value of the population mean µ, assuming a fixed significance level α, is the probability that the test will reject the null hypothesis when the alternative is true: 1 − β = power = P[rejecting H0 when it is false].

Power of the t-test The power of the one-sample t-test against a specific alternative value of the population mean µ, assuming a fixed significance level α, is the probability that the test will reject the null hypothesis when the alternative is true. Calculation of the exact power of the t-test is a bit complex, but an approximate calculation that acts as if σ were known is almost always adequate for planning a study. This calculation is very much like that for the z-test. When guessing σ, it is always better to err on the side of a standard deviation that is a little larger rather than smaller. We want to avoid failing to find an effect because we did not have enough data.

Does lack of caffeine increase depression? Suppose that we wanted to perform a similar study but using subjects who regularly drink caffeinated tea instead of coffee. For each individual in the sample, we will calculate a difference in depression score (placebo minus caffeine). How many patients should we include in our new study? In the previous study, we found that the average difference in depression level was 7.36 and the standard deviation 6.92. We will use µ = 3.0 as the alternative of interest. We are confident that the effect was larger than this in our previous study, and this amount of an increase in depression would still be considered important. We will use s = 7.0 for our guessed standard deviation. We can choose a one-sided alternative because, as in the previous study, we would expect caffeine deprivation to have negative psychological effects.

Does lack of caffeine increase depression? How many subjects should we include in our new study? Would 16 subjects be enough? Let’s compute the power of the t-test against the alternative µ = 3. H0: µdifference = 0; Ha: µdifference > 0. For a significance level α = 5%, the t-test with n observations rejects H0 if t exceeds the upper 5% significance point of t(df = 15), which is 1.753. For n = 16 and s = 7, this means rejecting H0 when x̄ ≥ 1.753 × 7/√16 ≈ 3.068. The power for n = 16 is the probability that x̄ ≥ 3.068 when µ = 3, using σ = 7. Since we treat σ as known, we can use the normal distribution here: P(z ≥ (3.068 − 3)/(7/√16)) = P(z ≥ 0.04) ≈ 0.48. The power would be only about 48%, so 16 subjects would not be enough.
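The approximate calculation on this slide (acting as if σ were known) is easy to script; a sketch under the slide's assumptions (Python with SciPy assumed):

```python
from scipy.stats import norm, t

sigma, n, alpha, mu_alt = 7.0, 16, 0.05, 3.0
se = sigma / n ** 0.5                  # 7/4 = 1.75

# Reject H0 when t >= t*(df = n - 1); in terms of the sample mean:
t_star = t.ppf(1 - alpha, df=n - 1)    # 1.753 for df = 15
cutoff = t_star * se                   # ~3.068

# Approximate power: treat sigma as known, so use the normal distribution
power = norm.sf(cutoff, loc=mu_alt, scale=se)
print(round(power, 2))                 # ~0.48, so n = 16 is not enough
```

Rerunning the sketch with larger n (say 40 or 50) shows how many subjects are needed to push the power toward 80% or more.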

Comparing two means  Two-sample z statistic  Two-sample t procedures  Two-sample t test  Two-sample t confidence interval  Robustness  Details of the two-sample t procedures

Two Sample Problems Which is it? We often compare two treatments used on independent samples. Is the difference between the two treatments due only to variation from random sampling, as in (B), or does it reflect a true difference in population means, as in (A)? Independent samples: subjects in one sample are completely unrelated to subjects in the other sample. (Diagram: in (A), Sample 1 and Sample 2 come from two distinct populations; in (B), both samples come from a single population.)

Two-sample z statistic We have two independent SRSs (simple random samples), possibly from two distinct populations with parameters (µ1, σ1) and (µ2, σ2). We use x̄1 and x̄2 to estimate the unknown µ1 and µ2. When both populations are normal, the sampling distribution of (x̄1 − x̄2) is also normal, with standard deviation √(σ1²/n1 + σ2²/n2). Then the two-sample z statistic z = ((x̄1 − x̄2) − (µ1 − µ2)) / √(σ1²/n1 + σ2²/n2) has the standard normal N(0, 1) sampling distribution.

Two-sample z statistic: confidence interval A (1 − α)100% confidence interval for the unknown µ1 − µ2, when the variances σ1² and σ2² of the two populations are known, is (x̄1 − x̄2) ± z* √(σ1²/n1 + σ2²/n2), where z* is the upper α/2 standard normal critical value.

Two independent samples t distribution We have two independent SRSs (simple random samples), possibly from two distinct populations with parameters (µ1, σ1) and (µ2, σ2) unknown. We use (x̄1, s1) and (x̄2, s2) to estimate (µ1, σ1) and (µ2, σ2), respectively. To compare the means, both populations should be normally distributed. However, in practice, it is enough that the two distributions have similar shapes and that the sample data contain no strong outliers.

df 1-21-2 The two-sample t statistic follows approximately the t distribution with a standard error SE (spread) reflecting variation from both samples: Conservatively, the degrees of freedom is equal to the smallest of (n 1 − 1, n 2 − 1).

Two-sample t-test The null hypothesis is that the two population means µ1 and µ2 are equal, so that their difference is zero: H0: µ1 = µ2, i.e., µ1 − µ2 = 0, with either a one-sided or a two-sided alternative hypothesis. We find how many standard errors (SE) away from (µ1 − µ2) the observed (x̄1 − x̄2) is by standardizing with t: t = ((x̄1 − x̄2) − (µ1 − µ2)) / SE. Because in a two-sample test H0 posits (µ1 − µ2) = 0, we simply use t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2), with df = smaller of (n1 − 1, n2 − 1).

Does smoking damage the lungs of children exposed to parental smoking? Forced vital capacity (FVC) is the volume (in milliliters) of air that an individual can exhale in 6 seconds. FVC was obtained for a sample of children not exposed to parental smoking and a group of children exposed to parental smoking. We want to know whether parental smoking decreases children’s lung capacity as measured by the FVC test. Is the mean FVC lower in the population of children exposed to parental smoking? Parental smoking yes: FVC x̄ = 75.5, s = 9.3, n = 30; parental smoking no: FVC x̄ = 88.2, s = 15.1, n = 30.

Parental smoking yes: x̄ = 75.5, s = 9.3, n = 30; no: x̄ = 88.2, s = 15.1, n = 30. H0: µsmoke = µno, i.e., (µsmoke − µno) = 0; Ha: (µsmoke − µno) < 0 (one-sided). The difference in sample averages follows approximately the t distribution with conservative df = 29. We calculate the t statistic: t = (75.5 − 88.2) / √(9.3²/30 + 15.1²/30) ≈ −12.7/3.24 ≈ −3.9. In Table D, for df = 29 we find |t| > 3.659, hence p < 0.0005 (one-sided). It is a very significant difference, so we reject H0: lung capacity is significantly impaired in children of smoking parents.
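From the summary statistics alone, the t statistic and a conservative P-value can be reproduced; a sketch (Python with SciPy assumed; the group summaries are those reconstructed above):

```python
from scipy.stats import t

# FVC summaries: exposed to parental smoking vs not exposed
x1, s1, n1 = 75.5, 9.3, 30     # smoke = yes
x2, s2, n2 = 88.2, 15.1, 30    # smoke = no

se = (s1**2 / n1 + s2**2 / n2) ** 0.5
t_stat = (x1 - x2) / se                  # ~ -3.9
df = min(n1 - 1, n2 - 1)                 # conservative df = 29
p_value = t.cdf(t_stat, df)              # one-sided, Ha: mu1 < mu2
print(round(t_stat, 2), format(p_value, ".4f"))   # -3.92, p < 0.0005
```

SciPy also offers scipy.stats.ttest_ind_from_stats for summary data; with equal_var=False it uses the more precise Welch degrees of freedom rather than the conservative rule used here.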

Two-sample t confidence interval Because we have two independent samples, we use the difference between the sample averages (x̄1 − x̄2) to estimate (µ1 − µ2). Practical use of t*: C is the area between −t* and t*. We find t* in the row of Table D for df = smaller of (n1 − 1, n2 − 1) and the column for confidence level C. The margin of error is m = t* √(s1²/n1 + s2²/n2), and the interval is (x̄1 − x̄2) ± m.

Two-sample t statistic: confidence interval A (1 − α)100% confidence interval for the unknown µ1 − µ2, when the variances σ1² and σ2² of the two populations are unknown, is (x̄1 − x̄2) ± t* √(s1²/n1 + s2²/n2), where the degrees of freedom for the t distribution are the smaller of (n1 − 1, n2 − 1).

Common mistake!!! A common mistake is to calculate a one-sample confidence interval for µ1 and then check whether µ2 falls within that confidence interval, or vice versa. This is WRONG because the variability in the sampling distribution for two independent samples is more complex and must take into account variability coming from both samples. Hence the more complex formula for the standard error.

Degree of Reading Power (DRP): Can directed reading activities in the classroom help improve reading ability? A class of 21 third-graders participates in these activities for 8 weeks while a control class of 23 third-graders follows the same curriculum without the activities. After 8 weeks, all take a DRP test. 95% confidence interval for (µ1 − µ2), with df = 20 conservatively and t* = 2.086: with 95% confidence, (µ1 − µ2) falls within 9.96 ± 8.99, or 1.0 to 18.9.
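The interval 9.96 ± 8.99 follows from the group summaries; a sketch of the computation with the conservative df (Python with SciPy assumed; the group summaries below are the textbook's reported values, quoted here as an assumption since the transcript omits them, and they do reproduce the slide's numbers):

```python
from scipy.stats import t

x1, s1, n1 = 51.48, 11.01, 21   # directed-reading-activities class
x2, s2, n2 = 41.52, 17.15, 23   # control class

se = (s1**2 / n1 + s2**2 / n2) ** 0.5
df = min(n1 - 1, n2 - 1)               # conservative df = 20
t_star = t.ppf(0.975, df)              # 2.086 for 95% confidence
margin = t_star * se                   # ~8.99
diff = x1 - x2                         # 9.96
print(round(diff - margin, 1), round(diff + margin, 1))   # ~1.0 and ~18.9
```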

Robustness The two-sample t procedures are more robust than the one-sample t procedures. They are the most robust when both sample sizes are equal and both sample distributions are similar. But even when we deviate from this, two-sample tests tend to remain quite robust.  When planning a two-sample study, choose equal sample sizes if you can. As a guideline, a combined sample size (n 1 + n 2 ) of 40 or more will allow you to work even with the most skewed distributions.

Details of the two-sample t procedures The true value of the degrees of freedom for a two-sample t distribution is quite lengthy to calculate. That is why we use an approximate value, df = smaller of (n1 − 1, n2 − 1), which errs on the conservative side (it is often smaller than the exact value). Computer software, though, gives the exact degrees of freedom, or the rounded value, for your sample data.
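For reference, the exact degrees of freedom that software computes is the Welch–Satterthwaite approximation (stated here from standard references rather than from the slides):

```latex
\mathrm{df} \;=\;
\frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^{2}}
     {\dfrac{1}{n_1-1}\left(\dfrac{s_1^2}{n_1}\right)^{2}
    + \dfrac{1}{n_2-1}\left(\dfrac{s_2^2}{n_2}\right)^{2}}
```

This df is never larger than n1 + n2 − 2 and never smaller than the conservative min(n1 − 1, n2 − 1).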

(Software output: SPSS and Excel report the 95% confidence interval for the reading-ability study using the more precise degrees of freedom, whereas the table-based approach uses the conservative df and its t*.)

Pooled two-sample procedures There are two versions of the two-sample t-test: one assuming equal variance (“pooled” two-sample test) and one not assuming equal variance (“unequal variance” test) for the two populations. They have slightly different formulas and degrees of freedom. (Figure: two normally distributed populations with unequal variances.) The pooled (equal-variance) two-sample t-test was often used before computers because it has exactly the t distribution with n1 + n2 − 2 degrees of freedom. However, the assumption of equal variance is hard to check, and thus the unequal-variance test is safer.

When both populations have the same standard deviation (i.e., σ1 = σ2), the pooled estimator of σ² is sp² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2). The sampling distribution of the pooled two-sample t statistic has exactly the t distribution with (n1 + n2 − 2) degrees of freedom. A level C confidence interval for µ1 − µ2 is (x̄1 − x̄2) ± t* sp √(1/n1 + 1/n2) (with area C between −t* and t*). To test the hypothesis H0: µ1 = µ2 against a one-sided or a two-sided alternative, compute the pooled two-sample t statistic t = (x̄1 − x̄2) / (sp √(1/n1 + 1/n2)) for the t(n1 + n2 − 2) distribution.
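A sketch contrasting the two versions on the same data (Python with SciPy assumed; scipy.stats.ttest_ind implements both via its equal_var flag; the data below are illustrative values of my own, not from the slides):

```python
from scipy.stats import ttest_ind

group1 = [24.1, 26.9, 25.3, 28.4, 23.7, 27.5]   # hypothetical sample 1
group2 = [22.0, 25.8, 21.4, 24.9, 23.3, 20.6]   # hypothetical sample 2

# Pooled (equal-variance) test: exactly t with n1 + n2 - 2 df
t_pooled, p_pooled = ttest_ind(group1, group2, equal_var=True)

# Unequal-variance (Welch) test: approximate df, safer in practice
t_welch, p_welch = ttest_ind(group1, group2, equal_var=False)

print(round(t_pooled, 2), round(p_pooled, 3))
print(round(t_welch, 2), round(p_welch, 3))
```

With equal sample sizes the two t statistics coincide; only the degrees of freedom (and hence the P-values) differ slightly.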

Which type of test? One sample, paired samples, two samples?  Comparing vitamin content of bread immediately after baking vs. 3 days later (the same loaves are used on day one and 3 days later).  Comparing vitamin content of bread immediately after baking vs. 3 days later (tests made on independent loaves).  Average fuel efficiency for 2005 vehicles is 21 miles per gallon. Is average fuel efficiency higher in the new generation “green vehicles”?  Is blood pressure altered by use of an oral contraceptive? Comparing a group of women not using an oral contraceptive with a group taking it.  Review insurance records for dollar amount paid after fire damage in houses equipped with a fire extinguisher vs. houses without one. Was there a difference in the average dollar amount paid?

Inference on variances Optional topics in comparing distributions  Chi-squared distribution  Inference for a population variance (spread)  The F distribution  Comparing two population variances

Testing the population variance of a normal population Let X1, …, Xn be an SRS of size n from an N(µ, σ) population. If σ is unknown, we estimate it with the sample standard deviation S; in fact, S² is a random variable. The chi-squared statistic χ² = (n − 1)S²/σ² has a chi-squared distribution with (n − 1) degrees of freedom.

The χ² distributions are a family of distributions that take only positive values, are skewed to the right, and are described by a specific degrees of freedom. Finding the P-value with Table F: Table F gives upper critical values for many χ² distributions.

Chi-squared distribution

Testing the population variance of a normal population: example A driver is concerned about his car’s gas mileage (miles per gallon) and thinks that the mileage has variance σ² = 23 (mpg)². To test whether this is really the case, he records the mileage of his car for the last 8 months: 28, 25, 29, 25, 32, 36, 27, 24 mpg. The chi-squared statistic is χ² = (n − 1)s²/σ0² = (7)(16.5)/23 ≈ 5.02. This value is from a chi-squared distribution with 7 degrees of freedom. Conduct a hypothesis test to check his thinking at the 10% level of significance.

Testing the population variance of a normal population: example (continued) Test H0: σ² = 23 versus Ha: σ² ≠ 23 at level of significance α = 0.10. We use only the df = 7 curve. Rejection region (two-sided): χ² < χ²(0.95, 7) = 2.17 or χ² > χ²(0.05, 7) = 14.07. Since 2.17 < 5.02 < 14.07, we do not reject H0 at the 10% level of significance.
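A sketch reproducing this test (Python with SciPy assumed; data from the slide):

```python
from statistics import variance
from scipy.stats import chi2

mpg = [28, 25, 29, 25, 32, 36, 27, 24]   # 8 months of mileage
sigma0_sq = 23.0                          # hypothesized variance
n = len(mpg)

chi_sq = (n - 1) * variance(mpg) / sigma0_sq   # (7)(16.5)/23 ~ 5.02

# Two-sided rejection region at alpha = 0.10 with df = 7
lower = chi2.ppf(0.05, df=n - 1)    # ~2.17
upper = chi2.ppf(0.95, df=n - 1)    # ~14.07
print(round(chi_sq, 2), round(lower, 2), round(upper, 2))
# 5.02 lies between the critical values, so H0 is not rejected
```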

Inference for population spread It is also possible to compare two population standard deviations σ 1 and σ 2 by comparing the standard deviations of two SRSs. However, these procedures are not robust at all against deviations from normality. When s 1 2 and s 2 2 are sample variances from independent SRSs of sizes n 1 and n 2 drawn from normal populations, the F statistic F = s 1 2 / s 2 2 has the F distribution with n 1 − 1 and n 2 − 1 degrees of freedom when H 0 : σ 1 = σ 2 is true.

The F distributions are right-skewed and cannot take negative values.  The peak of the F density curve is near 1 when both population standard deviations are equal.  Values of F far from 1 in either direction provide evidence against the hypothesis of equal standard deviations. Table E in the back of the book gives critical F-values for upper tail probabilities p of 0.10, 0.05, 0.025, 0.01, and 0.001. We compare the F statistic calculated from our data set with these critical values for a one-sided alternative; the P-value is doubled for a two-sided alternative.

(Table E layout: columns are indexed by the numerator degrees of freedom, dfnum = n1 − 1; rows by the denominator degrees of freedom, dfden = n2 − 1; each entry is the critical value of F for upper tail probability p.)

Does parental smoking damage the lungs of children? Forced vital capacity (FVC) was obtained for a sample of children not exposed to parental smoking and a group of children exposed to parental smoking: yes — s = 9.3, n = 30; no — s = 15.1, n = 30. H0: σ²smoke = σ²no versus Ha: σ²smoke ≠ σ²no (two-sided). F = s²no / s²smoke = 15.1²/9.3² ≈ 2.64 (larger sample variance in the numerator). The degrees of freedom are 29 and 29, which can be rounded to the closest values in Table E: 30 for the numerator and 25 for the denominator. From Table E, 2.54 < F = 2.64 < 3.52, so 0.01 > 1-sided p > 0.001 and 0.02 > 2-sided p > 0.002: the two population standard deviations are significantly different.
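The F statistic and an exact P-value for this comparison can be computed directly, avoiding the Table E rounding; a sketch (Python with SciPy assumed; larger sample variance in the numerator):

```python
from scipy.stats import f

s_no, n_no = 15.1, 30         # children not exposed to parental smoking
s_smoke, n_smoke = 9.3, 30    # children exposed

F = s_no**2 / s_smoke**2      # larger variance on top: ~2.64
p_one_sided = f.sf(F, dfn=n_no - 1, dfd=n_smoke - 1)
p_two_sided = 2 * p_one_sided
print(round(F, 2), round(p_two_sided, 4))   # F ~ 2.64, two-sided p ~ 0.01
```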