Bivariate Analyses.


1 Bivariate Analyses

2 Bivariate Procedures I: Overview
Chi-square test
T-test
Correlation

3 Chi-Square Test
Relationships between nominal variables
Types:
2x2 chi-square: e.g., Gender by Political Party
2x3 chi-square: e.g., Gender by Dosage (High vs. Medium vs. Low)

4 Starting Point: The Crosstab Table
Example: Gender (IV) by Party (DV)

                          Gender (IV)
                       Males    Females
Party    Democrat
(DV)     Republican
         Total

5 Column Percentages

                          Gender (IV)
                       Males    Females
Party    Democrat        9%       91%
(DV)     Republican     91%        9%
         Total         100%      100%

6 Row Percentages

                          Gender (IV)
                       Males    Females    Total
Party    Democrat        5%       95%      100%
(DV)     Republican     83%       17%      100%

7 Full Crosstab Table

               Males         Females       Total
Democrat       1             20            21
               5% (row)      95% (row)
               9% (col)      91% (col)     64%
Republican     10            2             12
               83% (row)     17% (row)
               91% (col)     9% (col)      36%
Total          11 (33%)      22 (67%)      33 (100%)

8 Research Question and Hypothesis
Is gender related to party affiliation?
Hypothesis: Men are more likely than women to be Republicans
Null hypothesis: There is no relation between gender and party

9 Testing the Hypothesis
Eyeballing the table: there seems to be a relationship
Is it significant? Or could it be just a chance finding?
Logic: Is the finding different enough from the null?
Chi-square answers this question
What factors would it take into account?

10 Factors Taken into Consideration
1. Magnitude of the difference
2. Sample size
Biased coin example:
Magnitude of difference: 60% heads vs. 99% heads
Sample size: 10 flips vs. 100 flips vs. 1 million flips

11 Chi-square Chi-Square starts with the frequencies:
Compare observed frequencies with frequencies we expect under the null hypothesis

12 What would the Frequencies be if there was No Relationship?

               Males    Females    Total
Democrat         ?         ?        21
Republican       ?         ?        12
Total           11        22        33

13 Expected Frequencies (Null)

               Males    Females    Total
Democrat         7        14        21
Republican       4         8        12
Total           11        22        33

14 Comparing the Observed and Expected Cell Frequencies
Formula: χ² = Σ [(O − E)² / E], where O is the observed and E the expected frequency in each cell, summed over all cells

15 Calculating the Expected Frequency
Simple formula for expected cell frequencies: (row total × column total) / total N
21 × 11 / 33 = 7
21 × 22 / 33 = 14
12 × 11 / 33 = 4
12 × 22 / 33 = 8
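The expected-frequency rule above can be sketched in a few lines of pure Python; the observed counts are taken from the deck's crosstab (Democrats: 1 male, 20 female; Republicans: 10 male, 2 female):

```python
# Observed 2x2 crosstab from the deck
observed = [[1, 20],    # Democrat:   males, females
            [10, 2]]    # Republican: males, females

row_totals = [sum(row) for row in observed]        # [21, 12]
col_totals = [sum(col) for col in zip(*observed)]  # [11, 22]
n = sum(row_totals)                                # 33

# Expected cell frequency under the null = row total x column total / N
expected = [[r * c / n for c in col_totals] for r in row_totals]
print(expected)  # [[7.0, 14.0], [4.0, 8.0]]
```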

16 Observed and Expected Cell Frequencies

               Males     Females    Total
Democrat       1 (7)     20 (14)     21
Republican     10 (4)    2 (8)       12
Total          11        22          33

(Expected frequencies in parentheses)

17 Plugging into the Formula

         O − E           Square    Square/E
Cell A   1 − 7 = −6        36      36/7  = 5.1
Cell B   20 − 14 = 6       36      36/14 = 2.6
Cell C   10 − 4 = 6        36      36/4  = 9.0
Cell D   2 − 8 = −6        36      36/8  = 4.5
                                   Sum   = 21.2
Chi-square = 21.2
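The cell-by-cell computation is a one-line sum in Python, using the observed and expected frequencies worked out above:

```python
# Chi-square: sum of (O - E)^2 / E over the four cells A, B, C, D
observed = [1, 20, 10, 2]
expected = [7.0, 14.0, 4.0, 8.0]

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_square, 1))  # 21.2
```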

18 Is the chi-square significant?
Significance of the chi-square: Greater differences between observed and expected frequencies lead to a bigger chi-square
How big does it have to be for significance? That depends on the "degrees of freedom"
Formula for degrees of freedom: (Rows − 1) × (Columns − 1)

19 Chi-square Degrees of Freedom
2 x 2 chi-square: df = 1
3 x 3 = ?
4 x 3 = ?

20 Chi-square Critical Values

df    P = 0.05    P = 0.01    P = 0.001
 1      3.84        6.64        10.83
 2      5.99        9.21        13.82
 3      7.82       11.35        16.27
 4      9.49       13.28        18.47
 5     11.07       15.09        20.52
 6     12.59       16.81        22.46
 7     14.07       18.48        24.32
 8     15.51       20.09        26.13
 9     16.92       21.67        27.88
10     18.31       23.21        29.59

* If the chi-square is greater than the critical value, the relationship is significant
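Putting the pieces together in a short Python sketch: compute df as (rows − 1) × (cols − 1), then compare the chi-square of 21.2 to the P = 0.05 critical value from the table (only the first five df rows are reproduced here):

```python
# Critical values at the .05 level, keyed by degrees of freedom
critical_05 = {1: 3.84, 2: 5.99, 3: 7.82, 4: 9.49, 5: 11.07}

rows, cols = 2, 2
df = (rows - 1) * (cols - 1)   # 1 for a 2x2 table

chi_square = 21.2
significant = chi_square > critical_05[df]
print(df, significant)  # 1 True -> reject the null
```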

21 Chi-Square Computer Printout

22 Chi-Square Computer Printout

23 Multiple Chi-square
Exact same procedure as the two-variable chi-square
Used for more than 2 variables
E.g., a 2 x 2 x 2 chi-square: Gender x Hair color x Eye color

24 Multiple chi-square example

25 Multiple chi-square example

26 The T-test
Groups t-test: comparing the means of two nominal groups
E.g., Gender and IQ
E.g., Experimental vs. Control group
Pairs t-test: comparing the means of two variables, or the mean of a variable at two points in time

27 Logic of the T-test
A t-test considers three things:
1. The group means
2. The dispersion of individual scores around each group's mean (sd)
3. The size of the groups

28 Difference in the Means
The farther apart the means are, the more confident we are that the two group means are truly different
The distance between the means goes in the numerator of the t-test formula

29 Why Dispersion Matters
[Figure: two pairs of distributions, one with small variances, one with large variances]

30 Size of the Groups
Larger groups mean that we are more confident in the group means
IQ example: Women: mean = 103; Men: mean = 97
If our sample was 5 men and 5 women, we are not that confident
If our sample was 5 million men and 5 million women, we are much more confident

31 The four t-test formulae
1. Matched samples with unequal variances 2. Matched samples with equal variances 3. Independent samples with unequal variances 4. Independent samples with equal variances

32 All four formulae have the same numerator
X̄1 − X̄2 (group one mean − group two mean)
What differentiates the four formulae is their denominator
The denominator is the "standard error of the difference of the means"
Each formula has a different standard error

33 Independent samples with unequal variances formula
Standard error formula (denominator): SE = √(s1²/n1 + s2²/n2), where s² is each group's sample variance and n is each group's size

34 T-test Value
Look up the t-value in a t-table (use the absolute value)
First determine the degrees of freedom, e.g., df = (N1 − 1) + (N2 − 1) = 70
For 70 df at the .05 level, the critical value = 1.67
If |t| > 1.67: Reject the null (the means are different)
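A minimal pure-Python sketch of the independent-samples t with unequal variances described above. The mean − mean numerator sits over the standard error √(s1²/n1 + s2²/n2); the small IQ samples below are made-up numbers for illustration only:

```python
import math

def groups_t(x, y):
    """Independent-samples t with unequal variances."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    # Sample variances (n - 1 in the denominator)
    v1 = sum((xi - m1) ** 2 for xi in x) / (n1 - 1)
    v2 = sum((yi - m2) ** 2 for yi in y) / (n2 - 1)
    # Standard error of the difference of the means
    se = math.sqrt(v1 / n1 + v2 / n2)
    return (m1 - m2) / se

# Hypothetical IQ scores (illustrative data, not from the deck)
women = [103, 105, 101]
men = [97, 99, 95]
print(round(groups_t(women, men), 2))  # 3.67
```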

35 Groups t-test printout example

36 Pairs t-test example

37 Pearson Correlation Coefficient (r)
Characteristics of correlational relationships:
1. Strength
2. Significance
3. Directionality
4. Curvilinearity

38 Strength of Correlation
Strong, weak, and non-relationships
The nature of such relations can be observed in scatter diagrams
Scatter diagram: one variable on the x-axis and the other on the y-axis of a graph
Plot each case according to its x and y values

39 Scatterplot: Strong relationship
[Scatterplot: Book Reading (y-axis) by Years of Education (x-axis)]

40 Scatterplot: Weak relationship
[Scatterplot: y-axis variable by Years of Education (x-axis)]

41 Scatterplot: No relationship
[Scatterplot: Years of Education on the x-axis]

42 Strength increases…
As the points more closely conform to a straight line
Drawing the best-fitting line between the points: "the regression line"
Minimizes the distance of the points from the line: "least squares"
Minimizing the deviations from the line

43 Significance of the relationship
Whether we are confident that an observed relationship is “real” or due to chance What is the likelihood of getting results like this if the null hypothesis were true? Compare observed results to expected under the null If less than 5% chance, reject the null hypothesis

44 Directionality of the relationship
Correlational relationship can be positive or negative Positive relationship High scores on variable X are associated with high scores on variable Y Negative relationship High scores on variable X are associated with low scores on variable Y

45 Positive relationship example
[Scatterplot: Book Reading (y-axis) by Years of Education (x-axis)]

46 Negative relationship example
[Scatterplot: Prejudice (y-axis) by Years of Education (x-axis)]

47 Curvilinear relationships
Positive and negative relationships are "straight-line" or "linear" relationships
Relationships can also be strong and curvilinear
Points conform to a curved line

48 Curvilinear relationship example
[Scatterplot: Family Size (y-axis) by SES (x-axis)]

49 Curvilinear relationships
Linear statistics (e.g. correlation coefficient, regression) can mask a significant curvilinear relationship Correlation coefficient would indicate no relationship

50 Pearson Correlation Coefficient
Numerical expression of: Strength and Direction of straight-line relationship Varies between –1 and 1

51 Correlation coefficient outcomes
−1 is a perfect negative relationship
−.7 is a strong negative relationship
−.4 is a moderate negative relationship
−.1 is a weak negative relationship
0 is no relationship
.1 is a weak positive relationship
.4 is a moderate positive relationship
.7 is a strong positive relationship
1 is a perfect positive relationship

52 Pearson’s r (correlation coefficient)
Used for interval or ratio variables Reflects the extent to which cases have similar z-scores on variables X and Y Positive relationship—z-scores have the same sign Negative relationship—z-scores have the opposite sign

53 Positive relationship z-scores

Person    Xz    Yz
A
B
C
D
E

54 Negative relationship z-scores

Person    Xz    Yz
A
B
C
D
E

55 Conceptual formula for Pearson's r
Multiply each case's z-scores (Zx × Zy)
Sum the products
Divide by N
r = Σ(Zx × Zy) / N
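The conceptual formula above (the average product of paired z-scores) translates directly into pure Python:

```python
import math

def pearson_r(x, y):
    """Conceptual Pearson's r: average product of paired z-scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Population standard deviations (divide by N, matching the /N formula)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    zx = [(v - mx) / sx for v in x]
    zy = [(v - my) / sy for v in y]
    return sum(a * b for a, b in zip(zx, zy)) / n

# Perfectly linear illustrative data: r = 1, reversed: r = -1
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0
```

Note that when the z-scores share the same sign (a positive relationship), every product Zx × Zy is positive, pushing r toward +1; opposite signs push it toward −1.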

56 Significance of Pearson’s r
Pearson’s r tells us the strength and direction of the relationship
Significance is determined by converting the r to a t ratio and looking it up in a t-table
Null: r = .00
How different is what we observe from the null? Is the probability less than .05?
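The r-to-t conversion can be sketched with the standard formula t = r√(N − 2) / √(1 − r²); the r = .70, N = 30 values below are illustrative numbers, not from the deck:

```python
import math

def r_to_t(r, n):
    """Convert Pearson's r to a t ratio with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# e.g., a strong correlation of .70 in a sample of 30 cases
print(round(r_to_t(0.70, 30), 2))  # 5.19
```

The resulting t is then looked up in a t-table with N − 2 degrees of freedom, exactly as in the groups t-test.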

57 Computer Printout

