Laws of Logic and Rules of Evidence
Larry Knop, Hamilton College

Transitivity of Order

For any real numbers a, b, c, if a < b and b < c, then a < c. Transitivity is an implication: we must know a < b and b < c in order to conclude a < c. In mathematics there is no problem; we know numbers. In life there is a problem.

In life we don’t know anything for certain. To find the value of a number a, we must take measurements and determine a as best we can from the evidence. Transitivity in the real world: Suppose the evidence shows a < b and the evidence shows b < c. Can we conclude, based on the evidence, that a < c?

Background

Let μF be the mean height of HC females and μM the mean height of HC males. Conjecture: μF < μM. Evidence: take a sample of HC females and measure each subject's height, giving data f1, f2, …, fn. Then take a sample of HC males and measure each subject's height, giving data m1, m2, …, mp.

To investigate our conjecture we will test H0: μF = μM (equivalently μM − μF = 0) versus Ha: μF < μM (equivalently μM − μF > 0). Test statistic: the difference of sample means, m̄ − f̄. Burning question: how likely is it to get a value of m̄ − f̄ as extreme as the one we observed, or more so, if μF = μM?

Goal: calculate probabilities for m̄ − f̄. To do so, we need a probability distribution. Starters: assume each fi is normally distributed with mean μF and standard deviation σF, and each mj is normally distributed with mean μM and standard deviation σM. Further, assume we were reasonably intelligent about how we chose our subjects, so that the measurements are independent.

Under our assumptions the random variable

Z = ((m̄ − f̄) − (μM − μF)) / √(σF²/n + σM²/p)

has a standard normal distribution. Unfortunately there is a problem: we don't know σF or σM. Solution? We can replace the population standard deviations σF and σM by the sample standard deviations sF and sM.
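The Z statistic above can be sketched in a few lines of Python; the function name and the sample data in the test are illustrative choices, not from the slides.

```python
import math

def two_sample_z(f, m, sigma_f, sigma_m):
    """Z statistic for H0: mu_F = mu_M when both population s.d.'s are known.

    f, m are the two samples; sigma_f, sigma_m the known population s.d.'s.
    Under H0 the true difference mu_M - mu_F is 0, so it drops out of the
    numerator.
    """
    n, p = len(f), len(m)
    f_bar = sum(f) / n
    m_bar = sum(m) / p
    se = math.sqrt(sigma_f**2 / n + sigma_m**2 / p)
    return (m_bar - f_bar) / se
```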

Under our assumptions the random variable

T = ((m̄ − f̄) − (μM − μF)) / √(sF²/n + sM²/p)

no longer has a standard normal distribution, and its exact distribution is awkward. It can be approximated by a t-distribution, if you don't mind fractional degrees of freedom. Or it can be approximated by a t-distribution with integer degrees of freedom, if you don't mind a less than optimal approximation.
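The fractional degrees of freedom come from the Welch–Satterthwaite formula; a minimal sketch, with a function name of my choosing:

```python
def welch_df(s_f, s_m, n, p):
    """Welch-Satterthwaite approximate degrees of freedom (usually fractional)."""
    a = s_f**2 / n   # estimated variance of f-bar
    b = s_m**2 / p   # estimated variance of m-bar
    return (a + b)**2 / (a**2 / (n - 1) + b**2 / (p - 1))
```

In practice `scipy.stats.ttest_ind(f, m, equal_var=False)` carries out this Welch-style unequal-variance test.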

Alternatively, we can assume a common variance, so σM = σF = σC. Then

Z = ((m̄ − f̄) − (μM − μF)) / (σC √(1/n + 1/p))

has a standard normal distribution. The common standard deviation σC can be estimated by a pooled average of the two sample standard deviations:

sp = √(((n − 1) sF² + (p − 1) sM²) / (n + p − 2)).

With the common variance assumption the random variable

T = ((m̄ − f̄) − (μM − μF)) / (sp √(1/n + 1/p))

has a t-distribution with n + p − 2 degrees of freedom. All is sweetness and light, except that the assumption of a common variance raises a conflict with transitivity.
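A sketch of the pooled two-sample t computation under the common variance assumption (names are illustrative; the test data below are made up):

```python
import math

def pooled_t(f, m):
    """Pooled two-sample t statistic and its n + p - 2 degrees of freedom."""
    n, p = len(f), len(m)
    f_bar, m_bar = sum(f) / n, sum(m) / p
    s2_f = sum((x - f_bar)**2 for x in f) / (n - 1)  # sample variance of f
    s2_m = sum((x - m_bar)**2 for x in m) / (p - 1)  # sample variance of m
    # Pooled standard deviation: a weighted average of the two sample variances
    s_p = math.sqrt(((n - 1) * s2_f + (p - 1) * s2_m) / (n + p - 2))
    t = (m_bar - f_bar) / (s_p * math.sqrt(1 / n + 1 / p))
    return t, n + p - 2
```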

Example

The following are selected random samples generated by Minitab from a normal distribution with a common variance of 1.

Sample a: n = 4, mean = 3.86, sa =
Sample b: n = 100, mean = , sb =
Sample c: n = 4, mean = 5.96, sc = 1.53

Test H0: a = b
T-Test of difference = 0 (vs <): T-Value =  P-Value =  DF = 102
Both use Pooled StDev =
Reject H0. Evidence supports the claim a < b.

Test H0: b = c
T-Test of difference = 0 (vs <): T-Value =  P-Value =  DF = 102
Both use Pooled StDev =
Reject H0. Evidence supports the claim b < c.

Test H0: a = c
T-Test of difference = 0 (vs <): T-Value =  P-Value =  DF = 6
Both use Pooled StDev =
Do NOT reject H0. Evidence is not strong enough to reject a = c.
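The three pairwise tests can be re-run on freshly simulated data with only the standard library. The t values will differ from Minitab's (the original samples did not survive transcription, and the true means of 4, 5, and 6 are my own choices), but the degrees of freedom reproduce the slide's pattern: 102, 102, and only 6 for a vs c.

```python
import math
import random

def pooled_test(x, y):
    """Pooled two-sample t statistic and df = len(x) + len(y) - 2."""
    n, p = len(x), len(y)
    x_bar, y_bar = sum(x) / n, sum(y) / p
    s2_x = sum((v - x_bar)**2 for v in x) / (n - 1)
    s2_y = sum((v - y_bar)**2 for v in y) / (p - 1)
    s_p = math.sqrt(((n - 1) * s2_x + (p - 1) * s2_y) / (n + p - 2))
    return (y_bar - x_bar) / (s_p * math.sqrt(1 / n + 1 / p)), n + p - 2

random.seed(0)
# Common s.d. of 1 as on the slide; sample sizes 4, 100, 4 as in the example.
a = [random.gauss(4.0, 1.0) for _ in range(4)]
b = [random.gauss(5.0, 1.0) for _ in range(100)]
c = [random.gauss(6.0, 1.0) for _ in range(4)]

for label, x, y in [("a vs b", a, b), ("b vs c", b, c), ("a vs c", a, c)]:
    t, df = pooled_test(x, y)
    print(f"{label}: t = {t:.2f}, df = {df}")
```

The middle sample b dominates both pooled estimates it takes part in, which is exactly the mechanism the next slide describes.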

The breaking of transitivity comes from the pooling of the standard deviations. The standard deviation is a measure of how well we know the location of a quantity. If we know one quantity well (small s.d. and large n), the common variance assumption carries that knowledge over to the difference, even though our knowledge of the other quantity in the difference is much less precise. In the example we know b, the middle quantity, very well, while our knowledge of a and of c is much less precise.

Comparing a and b: for the given sample sizes, the estimated difference, with its small pooled s.d., is statistically significant. The comparison of b and c is similar. Comparing a and c: the estimated difference is 5.96 − 3.86 = 2.10. Even though this estimated difference is larger, the pooled s.d. is also much larger and the sample sizes are both small. Consequently the difference is not significantly different from 0.

So much for real-world transitivity. There is a logic to the rules of evidence, but it is not quite as simple as the logic of mathematics. It should be noted that ANOVA, the ANalysis Of VAriance, applies to the comparison of n quantities and is likewise based on the assumption of a common variance, which leads to some interesting outcomes.
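The one-way ANOVA F statistic that the closing remark refers to has the common variance assumption baked into its denominator; a minimal sketch (the function name is mine):

```python
def anova_f(*groups):
    """One-way ANOVA F statistic: between-group over within-group mean square.

    The within-group mean square is a pooled variance estimate across all
    groups, i.e. the common variance assumption applied to n quantities.
    """
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand)**2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m)**2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```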