Presentation transcript:

Don’t spam class lists!!!

Farshad has prepared a suggested format for your final project. It will be on the web.

The Z statistic: $Z = \frac{\bar{X} - \mu}{\sigma_{\bar{X}}}$, where $\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}$ is the standard error of the mean.
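
To see the formula in action, here is a minimal sketch in Python (the numbers are made up for illustration, not taken from the course):

    import math

    mu = 100           # hypothesized population mean (made-up value)
    sigma = 15         # known population standard deviation (made-up value)
    n = 25             # sample size
    sample_mean = 106  # observed sample mean

    sigma_m = sigma / math.sqrt(n)    # standard error of the mean, sigma / sqrt(n)
    z = (sample_mean - mu) / sigma_m  # Z = (sample mean - mu) / standard error
    print(z)                          # 2.0 here; compare against the Z table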

The t Statistic(s) Using an estimated $\sigma$, which we’ll call $s$, we can create an estimate of $\sigma_{\bar{X}}$, which we’ll call $s_{\bar{X}}$. Estimate: $s^2 = \frac{SS}{n-1}$ and: $s_{\bar{X}} = \frac{s}{\sqrt{n}}$

The t Statistic(s) Caution: some textbooks and some calculators use the symbol $S^2$ to represent this estimated population variance

The t Statistic(s) Using $s_{\bar{X}}$ instead of $\sigma_{\bar{X}}$, we get a statistic that isn’t from a normal (Z) distribution - it is from a family of distributions called t
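
To make the substitution concrete, here is a minimal sketch of forming t from a sample when sigma is unknown (hypothetical data, not from the slides):

    import math

    data = [104, 99, 110, 102, 108]   # hypothetical sample
    mu = 100                          # hypothesized population mean
    n = len(data)
    mean = sum(data) / n

    ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations, SS
    s_squared = ss / (n - 1)                 # estimated population variance, s^2
    s = math.sqrt(s_squared)                 # estimated population standard deviation
    s_m = s / math.sqrt(n)                   # estimated standard error of the mean
    t = (mean - mu) / s_m                    # t with n - 1 degrees of freedom
    print(t)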

The t Statistic(s) What’s the difference between t and Z? Nothing if n is really large (approaching infinity) –because n-1 and n are almost the same number!

The t Statistic(s) With small values of n, the shape of the t distribution depends on the degrees of freedom (n-1) –specifically it is flatter but still symmetric with small n

The t Statistic(s) Since the shape of the t distribution depends on the d.f., the fraction of t scores falling within any given range also depends on the d.f.

The t Statistic(s) The Z table isn’t useful (unless n is huge); instead we use a t-table, which gives $t_{crit}$ for different degrees of freedom (and both one- and two-tailed tests)

The t Statistic(s) There is a t table on page 142 of your book. Look it over - notice how $t_{crit}$ changes with the d.f. and the alpha level

The t Statistic(s) The logic of using this table to test the alternative hypothesis against the null hypothesis is precisely as with Z scores - in fact, the values in the bottom row are given by the Z table, and the familiar $\pm 1.96$ appears for alpha = .05 (two-tailed)
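
If you have SciPy installed, you can reproduce the t-table values (including the bottom-row Z values) rather than looking them up; a small sketch, assuming the scipy package is available:

    from scipy import stats

    # Two-tailed critical values at alpha = .05 (the upper .975 quantile)
    for df in (4, 10, 30, 1000):
        print(df, round(stats.t.ppf(0.975, df), 3))  # approaches 1.96 as d.f. grows

    # One-tailed critical value at alpha = .05 with 4 d.f.
    print(round(stats.t.ppf(0.95, 4), 3))            # about 2.132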

An Example You have a theory that drivers in Alberta are illegally speedy –Prediction: the mean speed on highway 2 between Ft. Mac and Calgary is greater than 110 km/hr. Here’s another way to say that: a sample of n drivers on the highway is not a sample from a population of drivers with a mean speed of 110 km/hr

An Example Set up the problem: –null hypothesis: your sample of drivers on highway 2 is representative of a population with an average speed of 110 km/hr ($H_0$: $\mu = 110$) –alternative hypothesis: the sample of drivers is from a population with a mean speed greater than 110 km/hr ($H_1$: $\mu > 110$); thus, if the null hypothesis is true, the computed t should fall below $t_{crit}$ in 95% of such samples

An Example Here are some (fake) data

An Example –$t_{crit}$ for a one-tailed test with 5 - 1 = 4 d.f. is 2.132 –Our computed t = 1.59 does not exceed $t_{crit}$, thus we cannot reject the null hypothesis –We conclude there is no evidence to support our hypothesis that drivers are speeding on highway 2 –Does this mean that drivers are not speeding on highway 2?
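
For comparison, the whole test can be run in a few lines; a sketch with hypothetical speeds (the slide's actual data aren't reproduced here), assuming a reasonably recent SciPy (for the alternative= argument):

    from scipy import stats

    speeds = [112, 108, 118, 106, 121]  # hypothetical speeds in km/hr, n = 5

    # One-tailed, one-sample t-test against mu = 110
    result = stats.ttest_1samp(speeds, popmean=110, alternative='greater')
    print(result.statistic, result.pvalue)  # reject the null only if p < .05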

T-test for one sample mean We’ve discussed how to create and use a t statistic when we want to compare a sample mean to a hypothesized mean

t Tests for Two Sample Means We’re often interested in a more sophisticated and powerful experimental design… Usually we perform some experimental manipulation and look for a change on some score or variable –e.g. before and after taking a drug

t Tests for Two Sample Means We manipulate a variable (e.g. drug dose) and we want to know whether some other variable (e.g. fever) depends on our manipulation. Let’s introduce some formal terms: –independent variable: the variable that you control –dependent variable: the variable that depends on the experimental manipulation (the one you measure)

t Tests for Two Sample Means Example: Let’s ask whether or not Tylenol reduces fever - there are two ways you could do this… 1. Get a bunch of people with fevers, give half of them Tylenol and half of them a placebo and then measure their temperatures 2. Get a bunch of people with fevers, measure their temperatures, then give them Tylenol and measure them again

t Tests for Two Sample Means Repeated Measures - an experiment in which the same subject (or object) is measured in two (or more!) conditions The two samples are actually pairs of scores and those pairs are correlated or dependent This type of t test is called a test for two dependent sample means (sometimes called a paired t-test)

t Tests for Two Dependent Sample Means When comparing two paired samples we’re often not interested in the absolute scores, but we are interested in the differences between scores:

Sample 1    Sample 2    Difference
X_11        X_21        X_11 - X_21
X_12        X_22        X_12 - X_22
...         ...         ...
X_1n        X_2n        X_1n - X_2n

This is a sample of differences taken from a population of differences; it has a mean and a standard deviation
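
In code, the Difference column is just an element-wise subtraction of the paired scores; a minimal sketch with hypothetical before/after temperatures:

    before = [38.8, 39.1, 38.5, 39.4]  # hypothetical temperatures before Tylenol
    after = [37.9, 38.6, 38.4, 38.5]   # the same people measured again afterwards

    differences = [b - a for b, a in zip(before, after)]
    print(differences)                 # one difference score per person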

t Tests for Two Dependent Sample Means If we’re wondering whether an independent variable has some effect on the dependent variable then our null hypothesis is that there is no difference between the two paired measurements in our sample Some differences would be positive, some would be negative, on average the difference would be zero

t Tests for Two Dependent Sample Means We can use a t-test to test if the sample of differences has a mean that is significantly different from zero. This is done by simply treating your column of differences as a one-sample t-test with a null hypothesis that $\mu = 0$

t Tests for Two Dependent Sample Means Some curiosities that make your life easier with regard to paired t-tests –Note that the mean of the difference scores equals the difference of the sample means: $\bar{D} = \bar{X}_1 - \bar{X}_2$ –And that $n_1$ always equals $n_2$ –As with the z-test, the t distribution is symmetric, so you treat negative differences as if they were positive for comparing to $t_{crit}$ –Also as with the z-test, one- or two-tailed tests are possible…simply use the appropriate column from the t table
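
Here is a sketch of the paired test done both ways, as a one-sample t-test on the differences and with SciPy's built-in dependent-samples test, using the hypothetical temperatures from the sketch above (assuming SciPy is available):

    from scipy import stats

    before = [38.8, 39.1, 38.5, 39.4]
    after = [37.9, 38.6, 38.4, 38.5]

    # Treat the column of differences as one sample with hypothesized mean 0
    diffs = [b - a for b, a in zip(before, after)]
    one_sample = stats.ttest_1samp(diffs, popmean=0)

    # Equivalent built-in paired (dependent samples) t-test
    paired = stats.ttest_rel(before, after)

    print(one_sample.statistic, paired.statistic)  # identical t values
    print(one_sample.pvalue, paired.pvalue)        # identical p values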

t Test for Two Independent Sample Means Often we have a situation in which repeated measures is inappropriate or impossible (e.g. any time measuring the dependent variable once alters subsequent measurements). In this situation we must use a between-subjects design

t Test for Two Independent Sample Means The data are laid out in two columns (Sample 1 and Sample 2) like the repeated measures case, except they aren’t pairs of scores; the two columns are measurements of different subjects (objects, etc.). We thus usually refer to a single measurement only with respect to the mean of its own sample

t Test for Two Independent Sample Means The null hypothesis states that these two independent samples are random samples from the same population –so you would expect the difference to be zero on average –therefore the numerator of the t statistic in this situation works just like the dependent samples case, where $\mu_1 - \mu_2 = 0$ under the null hypothesis

t Test for Two Independent Sample Means The denominator is different because… How many degrees of freedom are there? –The mean difference is based on two different samples, each with their own degrees of freedom –So there are $n_1 - 1 + n_2 - 1 = n_1 + n_2 - 2$ d.f. –The best estimate of the population standard deviation will incorporate both samples so that it has more degrees of freedom

t Test for Two Independent Sample Means We can pool the sums of squares (which weights the variances according to the number in each sample). Then divide by the pooled degrees of freedom to estimate the pooled variance $s_p^2$

t Test for Two Independent Sample Means Estimate: $s_p^2 = \frac{SS_1 + SS_2}{n_1 + n_2 - 2}$. Both samples contribute to the standard error of the mean difference, so $s_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{s_p^2}{n_1} + \frac{s_p^2}{n_2}}$

t Test for Two Independent Sample Means Now we can construct a t statistic: $t = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{s_{\bar{X}_1 - \bar{X}_2}}$, where $\mu_1 - \mu_2 = 0$ under the null hypothesis
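
Putting the last few slides together, here is a minimal sketch of the pooled-variance calculation done by hand (the group data are hypothetical):

    import math

    group1 = [112, 108, 118, 106, 121]  # hypothetical southbound speeds
    group2 = [105, 109, 104, 111, 102]  # hypothetical northbound speeds

    def ss(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)  # sum of squared deviations

    n1, n2 = len(group1), len(group2)
    df = n1 + n2 - 2                          # pooled degrees of freedom
    sp2 = (ss(group1) + ss(group2)) / df      # pooled variance
    se_diff = math.sqrt(sp2 / n1 + sp2 / n2)  # standard error of the difference

    mean1, mean2 = sum(group1) / n1, sum(group2) / n2
    t = (mean1 - mean2) / se_diff             # mu1 - mu2 = 0 under the null
    print(t, df)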

t Test for Two Independent Sample Means Notice that this t statistic has more degrees of freedom than its dependent samples counterpart. Why does a repeated measures design still tend to have more power?

t Test for Two Independent Sample Means Consider an example: –Are northbound drivers slower than southbound drivers on highway 2? –Null hypothesis: samples of n speeds taken from northbound and southbound traffic are from the same population –Alternative hypothesis: samples of southbound drivers are from a population with a mean greater than that of northbound drivers

t Test for Two Independent Sample Means For a one-tailed test at $\alpha = .05$, with 8 d.f., $t_{crit}$ = 1.860. Our computed t exceeds this value, so we can therefore reject the null hypothesis and conclude that southbound drivers are faster.
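
The same comparison with SciPy's built-in independent-samples test; a sketch using the hypothetical speed samples from the previous sketch, assuming a reasonably recent SciPy (for the alternative= argument):

    from scipy import stats

    southbound = [112, 108, 118, 106, 121]  # hypothetical data, n1 = 5
    northbound = [105, 109, 104, 111, 102]  # hypothetical data, n2 = 5

    # equal_var=True gives the pooled-variance test described on these slides;
    # equal_var=False relaxes the equal-variance assumption noted on the next slide
    result = stats.ttest_ind(southbound, northbound,
                             equal_var=True, alternative='greater')
    print(result.statistic, result.pvalue)  # reject the null if p < .05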

t Test for Two Independent Sample Means Some caveats and disclaimers about independent-sample t-tests: –There is an assumption of equal variance in the two underlying populations. If this assumption is violated, your Type I error rate is greater than the indicated alpha! However, for samples of equal n, the t-test is quite robust to violations of this assumption (so you usually don’t have to worry about it) –Note that $n_1$ and $n_2$ need not be equal! (but equal sample sizes are better if possible)

Next Time: Too many t tests spoil the statistics