Lecture 7 PY 427 Statistics 1 Fall 2006 Kin Ching Kong, Ph.D


Lecture 7 PY 427 Statistics 1 Fall 2006 Kin Ching Kong, Ph.D. Chicago School of Professional Psychology

Agenda
The t Statistic
  The problem with the z-score as a test statistic
  Estimated standard error (s_M)
  The t formula
  Degrees of freedom
  The shape of the t distribution
  The t-distribution table
Hypothesis Testing with t
Measuring Effect Size for the t Statistic

Intro. to the t Statistic
z = (M − μ) / σ_M = (obtained difference between data and hypothesis) / (standard distance expected by chance)
The problem with using z-scores for hypothesis testing: usually we don't know the population standard deviation σ, so σ_M cannot be computed.
Solution: estimate the population variance with the sample variance; the sample variance is an unbiased estimate of the population variance.

The Estimated Standard Error (s_M)
Sample variance and standard deviation: s² = SS / (n − 1) = SS / df, and s = √(SS / (n − 1)) = √(SS / df)
Standard error: σ_M = σ / √n, or equivalently σ_M = √(σ² / n)
Estimated standard error: s_M = s / √n, or equivalently s_M = √(s² / n)
The estimated standard error is used when σ is unknown and provides an estimate of the standard distance between M and μ.
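These formulas translate directly into a few lines of Python. A minimal sketch follows (the function names are illustrative, not from the lecture), using the Example 9.1 numbers that appear later:

```python
import math

def sample_variance(SS, n):
    """Sample variance: s^2 = SS / (n - 1) = SS / df."""
    return SS / (n - 1)

def estimated_standard_error(SS, n):
    """Estimated standard error: s_M = sqrt(s^2 / n) = s / sqrt(n)."""
    return math.sqrt(sample_variance(SS, n) / n)

# Example 9.1 values from later in the lecture: SS = 72, n = 9
print(sample_variance(72, 9))            # 9.0
print(estimated_standard_error(72, 9))   # 1.0
```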

The t Statistic
The t statistic: t = (M − μ) / s_M
The t statistic is used to test hypotheses about an unknown population mean μ when σ is unknown.
t = (sample mean − population mean) / (estimated standard error) = (obtained difference) / (difference expected by chance)
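A small sketch of the t formula in Python (names are illustrative):

```python
import math

def t_statistic(M, mu, s_squared, n):
    """t = (M - mu) / s_M, where s_M = sqrt(s^2 / n)."""
    s_M = math.sqrt(s_squared / n)
    return (M - mu) / s_M

# Using the eye-spot example from later in the lecture:
# M = 36, mu = 30, s^2 = 9, n = 9  ->  t = 6.0
print(t_statistic(36, 30, 9, 9))  # 6.0
```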

Degrees of Freedom & t
How well t approximates z depends on the df.
Degrees of freedom (df): the number of scores in a sample that are free to vary; df = n − 1.
The greater the df, the better s² represents σ², and the better the t statistic approximates the z-score.

The t Distribution
The exact shape of a t distribution changes with df: the larger the df, the more closely the t distribution approximates a normal distribution.
Unlike the z distribution, which is the same for any sample size and always has a mean of 0 and a standard deviation of 1, the t distributions are a family of distributions, each with a different standard deviation.
Distributions of t are bell-shaped and symmetrical and have a mean of 0, but they have more variability than the normal z distribution. (Figure 9.1 of your book)
The t distribution is flatter and more variable than the z distribution because the standard error used in the z formula is a constant, while the estimated standard error used in the t formula is a variable. So samples with the same M will have the same z-score, but not necessarily the same t.
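A quick way to see that extra variability, assuming SciPy is available (this check is an addition, not part of the slide):

```python
from scipy import stats

# Standard deviation of the t distribution for several df values.
# For df > 2 it equals sqrt(df / (df - 2)), which shrinks toward 1
# (the standard deviation of z) as df grows.
for df in (5, 10, 30, 120):
    print(df, round(stats.t.std(df), 4))
# approximately: 5 -> 1.291, 10 -> 1.118, 30 -> 1.035, 120 -> 1.008
```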

The t Distribution Table
The t distribution table (Table B.2; Table 9.1 of your book):
The top two rows show the proportions (or probabilities) in one or two tails.
The first column lists the df.
The numbers in the body of the table are the values of t that separate the tail(s) from the body of the distribution.
Examples:
1. df = 3, find the t value that separates the top 5% of the distribution. (Figure 9.2)
2. n = 6, find the t values that separate the extreme 5%.
Answer to 2: df = 5, t = ±2.571
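These table lookups can be checked with SciPy's t distribution; a sketch, not from the lecture:

```python
from scipy import stats

# Example 1: one-tailed, df = 3, value separating the top 5%
print(round(stats.t.ppf(0.95, df=3), 3))    # 2.353

# Example 2: two-tailed, n = 6 so df = 5, extreme 5% split between tails
print(round(stats.t.ppf(0.975, df=5), 3))   # 2.571
```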

Hypothesis Testing with the t Statistic
Example 9.1 of your book: n = 9 insectivorous birds are tested in a box that has two separate chambers, one with two large eye-spots painted on it. The birds are placed in the box for 60 minutes and are free to go from one chamber to the other. The time spent in the plain chamber is recorded. (Figure 9.4)
M = 36 minutes and SS = 72 were obtained. Did the eye-spots have an effect on behavior? Use α = .05.

Hypothesis Testing with the t Statistic (Cont.)
Step 1: State the Hypotheses
H0: μ_plain side = 30 minutes (no preference for either side)
H1: μ_plain side ≠ 30 minutes (a preference for one side)
Step 2: Define the Critical Region
For α = .05, df = n − 1 = 9 − 1 = 8, t_critical = ±2.306 (Figure 9.5 of your book)
Step 3: Compute the Test Statistic
t = (M − μ) / s_M
s² = SS/df = 72/8 = 9
s_M = √(s²/n) = √(9/9) = 1
t = (36 − 30)/1 = 6.00
Step 4: Make a Decision
Since 6.00 > 2.306, the t statistic falls in the critical region, so we reject H0 and conclude that the eye-spot pattern appears to have an effect on the birds' behavior.
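The whole test can be sketched in a few lines of Python (the function and variable names are illustrative, not from the textbook):

```python
import math
from scipy import stats

def one_sample_t_test(M, mu, SS, n, alpha=0.05):
    """Two-tailed one-sample t test from summary statistics."""
    df = n - 1
    s_squared = SS / df                      # s^2 = SS / df
    s_M = math.sqrt(s_squared / n)           # estimated standard error
    t = (M - mu) / s_M
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical value
    return t, t_crit, abs(t) > t_crit

# Example 9.1: M = 36, mu = 30, SS = 72, n = 9
print(one_sample_t_test(36, 30, 72, 9))  # (6.0, 2.306..., True)
```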

Your Turn: Exercise 1
A sample, n = 25, is randomly selected from a population with μ = 50, and a treatment is administered to the individuals in the sample. After treatment, the sample is found to have a mean of M = 54 with a standard deviation of s = 6. Using a two-tailed test, is the result significant at the .05 level?
Answer
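If you want to check your work numerically, a short sketch like the following will do it (SciPy assumed; not part of the lecture):

```python
import math
from scipy import stats

n, mu, M, s = 25, 50, 54, 6
df = n - 1
s_M = s / math.sqrt(n)            # estimated standard error
t = (M - mu) / s_M
t_crit = stats.t.ppf(0.975, df)   # two-tailed, alpha = .05
print(round(t, 3), round(t_crit, 3), abs(t) > t_crit)
```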

Effect Size with the t Statistic (Cohen's d)
Significance vs. effect size: the t test tells you whether there is an effect, i.e. a difference that is significantly greater than chance (the standard error), but it doesn't tell you how big the effect or difference is.
Cohen's d measures effect size in units of standard deviation:
Cohen's d = (mean difference) / (standard deviation)
For t tests, the estimated Cohen's d = (mean difference) / (sample standard deviation)
e.g. In the previous exercise, you found that the treatment had a significant effect, but not how large it is. With M = 54, μ = 50, s = 6: Cohen's d = (54 − 50)/6 = 0.67, so the effect size is 0.67 standard deviations.
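A one-line sketch of this computation (illustrative names):

```python
def cohens_d(M, mu, s):
    """Estimated Cohen's d = (mean difference) / (sample standard deviation)."""
    return (M - mu) / s

print(round(cohens_d(54, 50, 6), 2))  # 0.67
```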

Effect Size with the t Statistic (r²)
Another way to measure effect size is to measure the amount of variability in the scores that is due to (explained by, accounted for by) the treatment.
Logic: the treatment caused the scores to decrease or increase, thus changing the variability. We can measure the treatment effect by figuring out how much of the variability in the scores is accounted for by the treatment.

Effect Size with the t Statistic (r²)
Demo, Example 9.1: the null hypothesis is that the treatment (eye-spots) has no effect on the birds' behavior. H0: μ = 30; the sample gave M = 36.
Figure 9.6: frequency distribution for the sample. (The scores differ from each other and from the hypothesized μ of 30. Part of the difference is due to the treatment, part to individual differences, i.e. error.)
SS_total = SS_treatment + SS_error
SS_error = SS with the treatment effect removed (Figure 9.7, Table 9.2)
SS_treatment = SS_total − SS_error = 396 − 72 = 324
Proportion of variability accounted for by the treatment = (variability accounted for by treatment) / (total variability) = SS_treatment / SS_total = 324/396 = 0.8182 (81.82%)
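A short sketch of this percentage-of-variance computation; the note about recovering SS_total from the summary statistics is an addition, not from the slide:

```python
def r_squared_from_ss(SS_error, SS_total):
    """Proportion of total variability accounted for by the treatment."""
    SS_treatment = SS_total - SS_error
    return SS_treatment / SS_total

# Example 9.1: SS_error = 72 (treatment effect removed), SS_total = 396
print(round(r_squared_from_ss(72, 396), 4))   # 0.8182

# SS_total can also be recovered from the summary statistics:
# SS_total = SS_error + n * (M - mu)**2 = 72 + 9 * (36 - 30)**2 = 396
print(72 + 9 * (36 - 30) ** 2)                # 396
```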

Effect Size with the t Statistic (r², continued)
r²: percentage of variance explained by the treatment effect.
r² = t² / (t² + df)
Interpreting r²:
Small effect: 0.01 < r² < 0.09
Medium effect: 0.09 < r² < 0.25
Large effect: r² > 0.25
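This formula can be checked against the SS-based value from the previous slide (a sketch):

```python
def r_squared(t, df):
    """r^2 = t^2 / (t^2 + df)."""
    return t ** 2 / (t ** 2 + df)

# Example 9.1: t = 6.00, df = 8  ->  36 / 44 = 0.8182,
# the same value obtained from SS_treatment / SS_total.
print(round(r_squared(6.0, 8), 4))  # 0.8182
```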

Your Turn: Exercise 2
A population has μ = 90. A sample is selected and a treatment is administered. After treatment, M = 92 and the sample variance is s² = 25. Use a two-tailed test with the alpha level set to .05.
a) If n = 25, is the 2-point effect significant? What is the effect size as measured by Cohen's d and r²?
b) If n = 100, is the 2-point effect significant? What is the effect size as measured by Cohen's d and r²?
c) Is the t test affected by n? What about Cohen's d and r²?
Answer a
Answer b
Answer c
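As with Exercise 1, you can check your answers with a short script (SciPy assumed; names are illustrative and not part of the lecture):

```python
import math
from scipy import stats

def summarize(M, mu, s_squared, n, alpha=0.05):
    """Return (significant?, Cohen's d, r^2) for a two-tailed one-sample t test."""
    df = n - 1
    s_M = math.sqrt(s_squared / n)
    t = (M - mu) / s_M
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    d = (M - mu) / math.sqrt(s_squared)        # Cohen's d
    r2 = t ** 2 / (t ** 2 + df)                # r squared
    return abs(t) > t_crit, round(d, 2), round(r2, 3)

print(summarize(92, 90, 25, 25))    # part (a)
print(summarize(92, 90, 25, 100))   # part (b)
```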