Class #5 Research Methods Psy 598 Carolyn R. Fallahi, Ph. D.
Inferential Statistics
Inferential statistics are used to draw conclusions about the reliability and generalizability of one’s findings.
Inferential Statistics
There are two types of inference techniques that researchers use: parametric techniques and nonparametric techniques.
Parametric tests
Parametric tests assume the data follow a particular distribution; examples include the normal z-test (which uses the z statistic and assumes a normal distribution) and Student's t-test. Nonparametric tests, in contrast, often use the ranked sample values to test a hypothesis about the population (e.g., the Mann-Whitney U test, the Wilcoxon test, the sign test). For example, if 2 classes took the Psy 598 exam and both score distributions were bimodal, a nonparametric test would be the better choice.
T-test for Means
The t-test first estimates how much the 2 means would be expected to differ on the basis of error variance alone. The observed difference between the means is then compared with this estimate. If the observed difference is so large, relative to the estimate, that it is unlikely to be the result of error variance alone, H0 is rejected.
Example of a t-Test
An example: a study of anorexia nervosa.
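The slide does not spell out the design of the anorexia example, so the following is only a minimal sketch that assumes hypothetical pre- and post-treatment weights; the numbers and variable names are invented for illustration.

```python
# Paired t-test sketch: hypothetical pre- and post-treatment weights (lbs)
# for patients treated for anorexia nervosa.
from scipy import stats

pre_weight  = [80.5, 84.2, 79.8, 83.1, 81.0, 78.9, 85.3, 82.4]
post_weight = [85.1, 86.0, 80.2, 88.4, 83.7, 79.5, 90.1, 84.8]

t_stat, p_value = stats.ttest_rel(pre_weight, post_weight)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < .05, reject H0 that the mean pre/post difference is zero.
```

For two independent groups (e.g., a treatment group versus a control group), stats.ttest_ind would be used instead of the paired version.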
Parametric Techniques for Categorical Data
T-test for differences in proportions. Categorical data consist of distinct categories, for example Republican and Democrat. We might compare the proportion of Democrats versus the proportion of Republicans who favor a certain issue, say abortion.
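As a sketch of how such a comparison might be run, the counts below are invented and the large-sample z-test for two proportions (the z version of this test) from statsmodels is used.

```python
# Two-proportion test sketch: hypothetical counts of respondents who
# favor the issue, out of the number surveyed from each party.
from statsmodels.stats.proportion import proportions_ztest

favor = [120, 90]    # Democrats, Republicans in favor
n     = [200, 200]   # respondents surveyed from each party

z_stat, p_value = proportions_ztest(count=favor, nobs=n)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```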
Nonparametrics
Nonparametric tests are used for testing the goodness of fit of an observed distribution to a theoretical one, or for comparing 2 observed distributions. This means that we can use nonparametric tests to see whether a particular distribution fits the data. For example, does the IQ of college students at Central follow a proposed distribution? To test this, we use the goodness-of-fit test.
Analysis of Variance
ANOVA is an extension of the t-test. It is used with more than 2 levels of 1 IV and/or more than 1 IV.
Main Effect
The main effect of an IV is the effect of that variable averaging over all levels of the other variables in the experiment. ANOVA provides a significance test for the main effect of each variable in the design.
One-Way ANOVA
The simplest form of ANOVA is the one-way ANOVA. It is a statistical test applied to data collected on the basis of a simple randomized subjects design. Assume, for example, that you hypothesized that rats will increase their rate of cocaine self-administration as access to cocaine decreases.
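A minimal sketch of that design, assuming hypothetical infusion rates for rats under three access conditions (all numbers invented):

```python
# One-way ANOVA sketch: cocaine infusions per hour under three levels of access.
from scipy import stats

restricted_access = [22, 24, 21, 25, 23]   # least access
moderate_access   = [17, 18, 16, 19, 17]
extended_access   = [12, 13, 11, 14, 12]   # most access

f_stat, p_value = stats.f_oneway(restricted_access, moderate_access, extended_access)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```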
Two-way ANOVA
The two-way ANOVA is a statistical test applied to data collected from a factorial design. A factorial design is one in which 2 or more IVs are studied simultaneously to determine their independent and interactive effects on the DV.
Two-way ANOVA
It is used to analyze data from studies that investigate the simultaneous and interactive effects of 2 IVs. For example, assume that you want to conduct an experiment investigating whether consuming caffeine at an early age sensitizes individuals to cocaine abuse.
Analysis of Covariance or ANCOVA
Used when groups are not initially equivalent (quasi-experimental). A pretest (covariate) is used to adjust for pre-existing differences. In a sense, we subtract out the differences up front.
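A sketch of that adjustment, assuming an invented pretest/posttest data set with two non-equivalent groups:

```python
# ANCOVA sketch: compare groups on a posttest while adjusting for a
# pretest covariate. Data and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "group":    ["treatment"] * 5 + ["control"] * 5,
    "pretest":  [52, 60, 48, 55, 63, 50, 58, 47, 54, 61],
    "posttest": [70, 78, 66, 74, 80, 58, 66, 55, 62, 69],
})

# posttest ~ group, with the pretest entered as the covariate
model = ols("posttest ~ C(group) + pretest", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```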
Multivariate Analysis of Variance (MANOVA)
As stated earlier, the two inferential statistics most often used to analyze differences among means on a single DV are the t-test and the ANOVA. Sometimes we want to test differences between conditions on several DVs simultaneously. Because t-tests and ANOVAs cannot do this, we use the MANOVA.
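A minimal sketch using statsmodels' MANOVA, with invented groups and two invented DVs:

```python
# MANOVA sketch: test whether groups differ on two DVs considered jointly.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.DataFrame({
    "group": ["a"] * 4 + ["b"] * 4 + ["c"] * 4,
    "dv1":   [5.1, 4.8, 5.5, 5.0, 6.2, 6.8, 6.5, 6.1, 7.9, 8.2, 7.5, 8.0],
    "dv2":   [2.0, 2.3, 1.9, 2.1, 3.1, 2.9, 3.3, 3.0, 4.2, 4.0, 4.4, 4.1],
})

maov = MANOVA.from_formula("dv1 + dv2 ~ group", data=data)
print(maov.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```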
Commonly used statistical tests
Normal theory test | Nonparametric test | Purpose
t test for independent samples | Mann-Whitney U test; Wilcoxon rank-sum test | Compares 2 independent samples
Paired t test | Wilcoxon matched-pairs signed-rank test | Examines a set of differences
Pearson correlation coefficient | Spearman rank correlation coefficient | Assesses the linear association between 2 variables
One-way ANOVA | Kruskal-Wallis ANOVA | Compares 3 or more groups
Two-way ANOVA | Friedman two-way ANOVA | Compares groups classified by 2 different factors
Nonparametric tests for categorical data
The chi-square test and the Kolmogorov-Smirnov test are both suitable for testing the goodness of fit of an observed distribution to a theoretical one, or for comparing 2 observed distributions. The Kolmogorov-Smirnov test is more sensitive to the largest deviation between the 2 distributions.
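A sketch of the two-sample Kolmogorov-Smirnov test for comparing two observed distributions; the exam scores below are invented.

```python
# Two-sample Kolmogorov-Smirnov sketch: compare two observed score
# distributions from two hypothetical classes.
from scipy import stats

class_a = [62, 71, 68, 75, 80, 66, 73, 78, 69, 74]
class_b = [55, 90, 58, 88, 60, 85, 57, 92, 61, 87]

d_stat, p_value = stats.ks_2samp(class_a, class_b)
print(f"D = {d_stat:.2f}, p = {p_value:.4f}")
# D is the largest deviation between the two empirical distributions.
```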
The Goodness of Fit Test
The goodness of fit test requires that you have categories into which the individual observations fall. That is, you make a single observation and record it as belonging to category a, b, or c. You repeat this for each observation. Each observation must be independent of the others.
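A sketch of a chi-square goodness-of-fit test, assuming 100 observations classified into categories a, b, and c and an invented set of expected proportions:

```python
# Chi-square goodness-of-fit sketch: do observed counts in categories
# a, b, c match hypothesized proportions of .5, .3, .2?
from scipy import stats

observed = [48, 35, 17]   # counts in a, b, c (n = 100)
expected = [50, 30, 20]   # counts expected under the proposed distribution

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```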
Chi Square
When individuals are classified into categories by 2 attributes, the resulting contingency table may be analyzed using a chi-square test for association between the attributes. For example, we might want to find out whether the proportion of students in favor of a certain political candidate varies according to their year in college (4 years) or according to sex (male/female). Cross-classifying students in this way produces a contingency table, and chi-square can be used to test for an association. If the sample sizes are small, Fisher's exact test is used instead for the small frequencies.
Chi Square – the contingency test
A second type of chi-square test provides information about independence between 2 variables. For example, is the major a student chooses (psychology versus biology) independent of whether the person is male or female? In other words, is sex predictable from the major? This is called a contingency test, and it involves 2 categorical variables: major and sex.
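A sketch of the major-by-sex example with invented counts; chi2_contingency tests independence, and Fisher's exact test is shown for the small-frequency case mentioned on the previous slide.

```python
# Chi-square test of independence on a hypothetical 2x2 table of
# major (psychology, biology) by sex (male, female).
from scipy import stats

table = [[30, 45],   # psychology: male, female
         [25, 20]]   # biology:    male, female

chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# With small expected frequencies, Fisher's exact test is preferred (2x2 tables).
odds_ratio, p_exact = stats.fisher_exact(table)
print(f"Fisher exact p = {p_exact:.4f}")
```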
Nonparametric Techniques: Quantitative Data
The Mann-Whitney U Test
The parametric test similar to this is the t-test. Here we compare the medians of 2 different groups whose data do not meet normality, equality of variances, or any of the other assumptions that lead you to use nonparametrics. The most important reason is non-normality.
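A minimal sketch with two invented samples:

```python
# Mann-Whitney U sketch on two small hypothetical samples.
from scipy import stats

group_a = [3, 5, 4, 6, 8, 7, 5]
group_b = [9, 12, 10, 11, 14, 9, 13]

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```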
Nonparametric Techniques: Quantitative Data
The Kruskal-Wallis One-Way ANOVA
This test is used when you have 1 factor with several levels. For example, ADHD with several levels – predominantly hyperactive, predominantly inattentive, and the combined type. The DV is how these levels affect learning.
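A sketch with invented learning scores for the three subtype groups:

```python
# Kruskal-Wallis sketch: hypothetical learning scores for three ADHD subtypes.
from scipy import stats

hyperactive = [14, 12, 15, 11, 13]
inattentive = [10, 9, 12, 8, 11]
combined    = [7, 9, 6, 8, 10]

h_stat, p_value = stats.kruskal(hyperactive, inattentive, combined)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```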
Nonparametric Techniques: Quantitative Data
The Sign Test
The sign test is used for 1 factor having 2 levels. For example, in bipolar disorder the 2 levels are mania and depression. The sign test looks only at the direction of each difference. Suppose you give the same treatment for these two problems and want to know which responds sooner. Our interest here is not the exact difference in response time, but simply which responds earlier.
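One way to run a sign test is to count the direction of each paired difference and compare the number of positive signs to a binomial with p = .5 (using scipy's binomtest); the response times below are invented.

```python
# Sign-test sketch: hypothetical days to respond to the same treatment
# in the manic versus the depressive condition, paired by patient.
from scipy import stats

mania      = [6, 9, 5, 7, 10, 8, 6, 9]
depression = [8, 12, 7, 6, 13, 11, 9, 12]

diffs = [m - d for m, d in zip(mania, depression)]
n_pos = sum(d > 0 for d in diffs)      # differences in the "mania slower" direction
n     = sum(d != 0 for d in diffs)     # ties are dropped

result = stats.binomtest(n_pos, n, p=0.5)
print(f"positive signs = {n_pos} of {n}, p = {result.pvalue:.4f}")
```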
Control of Extraneous Variables
Randomization
Hold certain variables constant
Build the variables into the design
Matching
Use subjects as their own controls
Analysis of covariance
Regression and correlational methods
Regression and correlational methods are applied to investigate the relationship between 2 or more variables. For regression, we look at one or more IVs and their effect on the DV, or response. One issue to keep in mind is causality: a regression relationship does not by itself establish it. For example, we may have age at several levels as the IV and CT mastery exam scores as the DV.
Correlation
Correlation refers more generally to the relationship or interdependence between 2 variables and therefore applies to situations where regression may be inappropriate. Correlation looks at 2 variables at a time, whereas regression looks at the effect of several IVs on 1 DV. Measures of correlation include the product-moment correlation coefficient, which measures the degree of linear correlation between 2 continuous variables, and, for ranked, continuous, or small-n variables, Kendall's tau and Spearman's rho.
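A sketch computing the three coefficients named above on the same pair of invented variables:

```python
# Correlation sketch: Pearson (linear), Spearman and Kendall (rank-based)
# on two hypothetical variables.
from scipy import stats

x = [2, 4, 5, 7, 9, 11, 12, 15]
y = [1, 3, 4, 6, 7, 10, 12, 13]

r, p_r     = stats.pearsonr(x, y)
rho, p_rho = stats.spearmanr(x, y)
tau, p_tau = stats.kendalltau(x, y)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```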
Multiple linear regression
Multiple linear regression is a method for fitting a relationship in which the DV is a linear function of several IVs. For example, age (measured at several levels), GPA, and number of friends might all serve as predictors of a single DV.
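A sketch using statsmodels with the predictors mentioned above (age, GPA, number of friends) and an invented outcome score; the R-squared value in the output ties into the multiple correlation discussed on the next slide.

```python
# Multiple linear regression sketch with hypothetical predictors
# (age, gpa, n_friends) and a hypothetical outcome score.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "age":       [18, 19, 20, 21, 22, 23, 24, 25, 26, 27],
    "gpa":       [2.8, 3.1, 3.0, 3.4, 3.2, 3.6, 3.5, 3.8, 3.7, 3.9],
    "n_friends": [4, 6, 5, 8, 7, 9, 8, 11, 10, 12],
    "score":     [55, 60, 58, 67, 64, 72, 70, 79, 76, 83],
})

model = smf.ols("score ~ age + gpa + n_friends", data=data).fit()
print(model.summary())
print("R-squared:", round(model.rsquared, 3))  # coefficient of determination
```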
Multiple Correlation
Multiple correlation refers to the degree of interdependency between the variables in a group. It is often calculated as a coefficient in the multiple regression context, where it represents the correlation between the observed values of the DV and the values predicted by the multiple regression equation. When you do a regression using several IVs in the model, the multiple correlation measures the collective relationship of the IVs with the DV. It is one number that shows how good the model is; its square is the coefficient of determination (R²).
Factor analysis
Factor analysis and associated techniques in multivariate analysis seek to explain the relations between the variables in a set, using the correlation matrix for the set. Principal component analysis establishes a set of uncorrelated combinations of the original variables which explain, in decreasing order of magnitude, the variation in the sample.
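A sketch of principal component analysis with scikit-learn on a small invented data matrix; standardizing first makes the analysis operate on the correlation structure.

```python
# PCA sketch: extract uncorrelated components from a hypothetical
# data matrix (rows = people, columns = variables).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.array([
    [5, 7, 2, 4],
    [6, 8, 3, 5],
    [2, 3, 8, 7],
    [3, 2, 9, 6],
    [7, 9, 1, 3],
    [4, 5, 5, 5],
], dtype=float)

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(scores.round(2))                       # component scores per person
print("variance explained:", pca.explained_variance_ratio_)
```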
Discriminant function analysis
Discriminant analysis deals with the problem of a single set of variables which have different mean values but identical correlations in 2 or more populations. A discriminant function is estimated using individuals from known populations and then used to classify unknown individuals.
Discriminant function analysis
You use this statistic to classify people or subjects on a qualitative (categorical) criterion variable. You cannot use multiple regression when the criterion variable is categorical. For example, the variables might be sex (male/female), SES (3 levels), and race (4 levels) – all categorical.
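A sketch with scikit-learn's linear discriminant analysis, using invented continuous predictor scores, known group labels, and two new cases to classify:

```python
# Discriminant-analysis sketch: estimate a discriminant function from
# cases with known group membership, then classify unknown cases.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_known = np.array([[2.0, 5.1], [2.3, 4.8], [1.8, 5.5],   # group 0
                    [6.1, 1.2], [5.8, 1.9], [6.5, 1.5]])  # group 1
y_known = np.array([0, 0, 0, 1, 1, 1])

lda = LinearDiscriminantAnalysis()
lda.fit(X_known, y_known)

X_new = np.array([[2.1, 5.0], [6.0, 1.6]])
print(lda.predict(X_new))  # predicted group for each unknown case
```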
Path analysis
Path analysis addresses the likelihood of a causal relationship among 3 or more variables. The essential idea is to formulate a theory about the possible causes of a particular problem – for example, school violence – to identify causal variables that could explain why it occurs, and then to determine whether the correlations among all the variables are consistent with that theory.
Other techniques
Other techniques, such as multidimensional scaling and cluster analysis, are employed to explore the structural relationships between individuals for whom multiple observations are made. In this case, we might like to place the individuals in the study into 5 clusters based on the observations made. For example, you have 50 cities in the US and you want to put them into 5 clusters. This method sorts the cities into 5 groups based on characteristics of the cities, such as population, economic growth, pollution, etc.
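A smaller sketch of the city example: k-means clustering of a few invented cities into 3 clusters, based on standardized characteristics (population, economic growth, pollution).

```python
# Cluster-analysis sketch: group hypothetical cities with k-means on
# standardized city characteristics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: population (millions), economic growth (%), pollution index
cities = np.array([
    [8.4, 2.1, 60], [3.9, 2.5, 55], [2.7, 1.8, 62],
    [0.6, 4.0, 30], [0.7, 3.8, 28], [0.5, 4.2, 25],
    [1.5, 0.5, 80], [1.3, 0.7, 78], [1.6, 0.4, 83],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(StandardScaler().fit_transform(cities))
print(labels)  # cluster assignment for each city
```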