LECTURE 11 Hypotheses about Correlations EPSY 640 Texas A&M University.


Hypotheses about Correlations
- One-sample tests for Pearson r
- Two-sample tests for Pearson r
- Multisample test for Pearson r
- Assumption: normality of the x and y variables being correlated

One Sample Test for Pearson r Null hypothesis:  = 0, Alternate   0 test statistic: t = r/ [(1- r 2 ) / (n-2)] 1/2 with degrees of freedom = n-2

One Sample Test for Pearson r
Example: SPSS output for kindergarteners on a reading test (descriptive statistics reporting the Mean, Std. Deviation, and N of Naming letters and Overall reading, followed by the correlation matrix).
Correlations (N = 76, Sig. 1-tailed = .000):
                    Naming letters   Overall reading
Naming letters           1.000            .784**
Overall reading           .784**          1.000
** Correlation is significant at the 0.01 level (1-tailed).

One Sample Test for Pearson r Null hypothesis:  = c, Alternate   c test statistic: z = (Zr - Zc )/ [1/(n-3)] 1/2 where z=normal statistic, Zr = Fisher Z transform

Fisher’s Z transform
Zr = tanh⁻¹(r) = ½ ln[(1 + r) / (1 − r)]
This creates a new variable with mean Zρ and standard deviation √[1/(n − 3)], which is approximately normally distributed.
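A minimal sketch of the transform in Python (NumPy's arctanh is tanh⁻¹; the numbers simply illustrate the equivalence of the two forms):

```python
import numpy as np

r, n = 0.784, 76
z_r = np.arctanh(r)                        # Fisher Z: tanh^-1(r)
z_log = 0.5 * np.log((1 + r) / (1 - r))    # same value via the log form
se = 1 / np.sqrt(n - 3)                    # SD of Zr is sqrt[1/(n - 3)]
print(z_r, z_log, se)                      # ~1.055, ~1.055, ~0.117
```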

Non-null r example
Null: ρ(girls) = .784; Alternate: ρ(girls) ≠ .784
Data: r = .845, n = 35
Zρ(.784) = 1.055, Zr(.845) = 1.238
z = (1.238 − 1.055) / [1/(35 − 3)]½ = .183 / (1/5.657) = 1.035, nonsignificant
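A sketch of this non-null test in Python (SciPy assumed; r_vs_constant_test is an illustrative name):

```python
import numpy as np
from scipy import stats

def r_vs_constant_test(r, c, n):
    """z-test of H0: rho = c using the Fisher Z transform."""
    z = (np.arctanh(r) - np.arctanh(c)) * np.sqrt(n - 3)   # (Zr - Zc) / sqrt[1/(n - 3)]
    p = 2 * stats.norm.sf(abs(z))                          # two-tailed p-value
    return z, p

z, p = r_vs_constant_test(r=0.845, c=0.784, n=35)
print(f"z = {z:.3f}, p = {p:.3f}")   # z ≈ 1.03, nonsignificant
```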

Two Sample Test for Difference in Pearson r's
Null hypothesis: ρ1 = ρ2; Alternate hypothesis: ρ1 ≠ ρ2
Test statistic: z = (Zr1 − Zr2) / [1/(n1 − 3) + 1/(n2 − 3)]½, where z is a standard normal statistic

Example
Null hypothesis: ρ(girls) = ρ(boys); Alternate hypothesis: ρ(girls) ≠ ρ(boys)
Data: r(girls) = .845, r(boys) = .717; n(girls) = 35, n(boys) = 41
z = [Z(.845) − Z(.717)] / [1/(35 − 3) + 1/(41 − 3)]½ = (1.238 − 0.901) / [1/32 + 1/38]½ = .337 / .240 = 1.405, nonsignificant
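The same comparison as a Python sketch (independent_r_test is an illustrative name):

```python
import numpy as np
from scipy import stats

def independent_r_test(r1, n1, r2, n2):
    """z-test of H0: rho1 = rho2 for correlations from two independent samples."""
    z = (np.arctanh(r1) - np.arctanh(r2)) / np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * stats.norm.sf(abs(z))   # two-tailed p-value
    return z, p

z, p = independent_r_test(r1=0.845, n1=35, r2=0.717, n2=41)
print(f"z = {z:.3f}, p = {p:.3f}")   # z ≈ 1.40, nonsignificant
```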

Multisample test for Pearson r
Three or more samples:
Null hypothesis: ρ1 = ρ2 = ρ3 = … ; Alternate hypothesis: some ρi ≠ ρj
Test statistic: χ² = Σ wi Zi² − w. Zw², which is chi-square distributed with (#groups − 1) degrees of freedom, where wi = ni − 3, w. = Σ wi, and Zw = Σ wi Zi / w.
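A sketch of the multisample statistic in Python; only the girls' and boys' figures come from the lecture, and the third group (r = .80, n = 30) is a hypothetical value added just to illustrate a three-group call:

```python
import numpy as np
from scipy import stats

def multisample_r_test(rs, ns):
    """Chi-square test of H0: all population correlations are equal."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    w = ns - 3                                    # weights w_i = n_i - 3
    z = np.arctanh(rs)                            # Fisher Z for each sample
    zw = np.sum(w * z) / np.sum(w)                # Zw = sum(w_i Z_i) / w.
    chi2 = np.sum(w * z**2) - np.sum(w) * zw**2   # sum(w_i Z_i^2) - w. Zw^2
    df = len(rs) - 1                              # #groups - 1
    p = stats.chi2.sf(chi2, df)
    return chi2, df, p

# Third group is hypothetical, for illustration only
chi2, df, p = multisample_r_test(rs=[0.845, 0.717, 0.80], ns=[35, 41, 30])
print(f"chi2({df}) = {chi2:.2f}, p = {p:.3f}")
```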

Example: Multisample test for Pearson r (result nonsignificant).

Multiple Group Models of Correlation
The SEM approach models several groups with either the SAME or different correlations.
[Path diagram: X correlated with Y, drawn separately for the boys and girls groups, with the correlation labeled ρxy = a]

Multigroup SEM
SEM analysis produces a chi-square test of goodness of fit (lack of fit) for the hypothesis about ALL groups at once.
Other indices: Comparative Fit Index (CFI), Normed Fit Index (NFI), Root Mean Square Error of Approximation (RMSEA).
CFI, NFI > .95 indicate good fit; RMSEA < .06 indicates good fit.

Multigroup SEM
SEM assumes a large sample size and multivariate normality of all variables.
It is robust as long as skewness and kurtosis are within ±3, sample size is greater than about 100 per group (200 is better), or few parameters are being estimated (sample sizes as low as 70 per group may be acceptable with good distributional characteristics).