Research Methods Psy 598 Carolyn R. Fallahi, Ph. D.


Research Methods Psy 598, Class #5. Carolyn R. Fallahi, Ph.D.

Inferential Statistics Inferential statistics are used to draw conclusions about the reliability and generalizability of one’s findings.

Inferential Statistics There are two types of inference techniques that researchers use: parametric techniques and nonparametric techniques.

Parametric tests Parametric tests assume the population follows a known distribution: for example, the z-test (which assumes a normal distribution) and Student's t-test. Nonparametric tests, by contrast, often use the ranked sample values to test a hypothesis about the population (e.g., the Mann-Whitney U test, the Wilcoxon test, the sign test). For example, if I had 2 classes who took the Psy 598 exam and both score distributions were bimodal, a nonparametric test would be appropriate.

T-test for Means The t-test first estimates how much difference between means would be expected from error variance alone. The observed difference between the means is then compared with this estimate. If the observed difference is so large, relative to the estimate, that it is unlikely to be the result of error variance alone, H0 is rejected.
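The arithmetic behind this slide can be sketched in pure Python. The function below implements the pooled-variance independent-samples t statistic; the two groups of scores are made up purely for illustration.

```python
import math

def two_sample_t(a, b):
    """Pooled-variance t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Pooled estimate of the error variance
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    # Observed mean difference relative to the error estimate
    return (ma - mb) / se

group1 = [82, 85, 88, 90, 86]  # hypothetical scores
group2 = [80, 81, 79, 83, 82]
t = two_sample_t(group1, group2)
```

If |t| exceeds the critical value for na + nb - 2 degrees of freedom, H0 is rejected.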

Example of a t-Test An example drawn from research on anorexia nervosa.

Parametric Techniques for categorical data T-test for differences in proportions. Categorical data fall into distinct categories, for example Republican and Democrat. We might compare the proportion of Democrats versus the proportion of Republicans who favor a certain issue, say abortion.

Nonparametrics Nonparametric tests are used for testing the goodness-of-fit of an observed distribution to a theoretical one, or to compare 2 observed distributions. This means that we can use nonparametric tests to see whether a particular distribution fits the data. For example, does the IQ of college students at Central follow a proposed distribution? To test this, we use the goodness-of-fit test.

Analysis of Variance ANOVA – an extension of the t-test. Used when there are more than 2 levels of 1 IV and/or more than 1 IV.

Main Effect The main effect of an IV is the effect of the variable averaging over all levels of other variables in the experiment. ANOVA provides a significance test for the main effect of each variable in the design.

One-Way ANOVA The simplest form of ANOVA is one-way ANOVA. It is a statistical test applied to data collected on the basis of a simple randomized subjects design. For example, assume that you hypothesized that rats will increase their rate of self-administration of cocaine as access to cocaine decreases.
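As a sketch of the computation, here is a pure-Python one-way ANOVA F statistic. The infusion counts for the three access conditions are invented numbers, not real data.

```python
def one_way_f(groups):
    """F = mean square between groups / mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical self-administration counts under three access schedules
short_access = [10, 12, 11]
medium_access = [14, 15, 13]
long_access = [20, 19, 21]
f = one_way_f([short_access, medium_access, long_access])
```

The resulting F is compared to the F distribution with k - 1 and n - k degrees of freedom.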

Two-way ANOVA The two-way ANOVA is a statistical test that is applied to data collected from a factorial design. A factorial design is one in which 2 or more IVs are studied simultaneously to determine their independent and interactive effects on the DV.

Two-way ANOVA It is used to analyze data from studies that investigate the simultaneous and interactive effects of 2 IVs. For example, assume that you want to conduct an experiment that investigates whether consuming caffeine at an early age sensitizes individuals to cocaine abuse.

Analysis of Covariance or ANCOVA Used when groups are not initially equivalent (quasi-experimental). A pretest (covariate) is used to adjust for pre-existing differences. In a sense, we subtract out the differences up front.

Multivariate analysis of variance (MANOVA) As stated earlier, the two inferential statistics most often used to analyze differences among means of a single DV are the t-test and the ANOVA. Sometimes we want to test differences between conditions on several DVs simultaneously. Because t-tests and ANOVAs cannot do this, we use the MANOVA.

Commonly used statistical tests

Normal theory test               Nonparametric test                  Purpose
t test for independent samples   Mann-Whitney U test;                Compares 2 independent
                                 Wilcoxon rank-sum test              samples
Paired t test                    Wilcoxon matched-pairs              Examines a set of
                                 signed-rank test                    differences
Pearson correlation              Spearman rank correlation           Assesses the linear
coefficient                      coefficient                         association between
                                                                     2 variables
One-way ANOVA                    Kruskal-Wallis ANOVA                Compares 3 or more
                                                                     groups
Two-way ANOVA                    Friedman two-way ANOVA              Compares groups
                                                                     classified by 2
                                                                     different factors

Nonparametric tests for categorical data The chi-square test and the Kolmogorov-Smirnov test are both suitable for testing the goodness-of-fit of an observed distribution to a theoretical one, or for comparing 2 observed distributions. The Kolmogorov-Smirnov test is more sensitive to the largest deviation between the 2 distributions.

The Goodness of Fit Test The goodness of fit test requires that you have categories into which the individual observations fall. That is, you make a single observation and record it as belonging to category a, b, or c. You repeat this for each observation. Each observation must be independent of the others.
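The goodness-of-fit statistic itself is a short computation: the sum of (O - E)^2 / E over the categories. A minimal pure-Python sketch with made-up counts for categories a, b, and c:

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 20]        # counts in categories a, b, c (hypothetical)
expected = [20.0, 20.0, 20.0]  # expected under the proposed distribution
chi2 = chi_square_gof(observed, expected)  # df = number of categories - 1
```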

Chi Square When individuals are classified into categories by 2 attributes, the resulting contingency table may be analyzed using a chi-square test for association between the attributes. For example, suppose we want to find out whether the proportion of students in favor of a certain political candidate varies according to their year in college. Year in college (4 levels) crossed with candidate preference produces a contingency table, and we can use chi-square to test for association. If the sample sizes are small, Fisher's exact test is used instead for small frequencies.

Chi-Square – the contingency test A second type of chi-square test provides information about the independence of 2 variables. For example, is choice of major (psychology versus biology) independent of whether the person is male or female? In other words, is sex predictable from the major? This is called a contingency test, and it involves 2 categorical variables: major and sex.
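A sketch of the contingency-test computation, using a hypothetical 2 x 2 table of major by sex. Expected counts come from the row and column totals under the independence hypothesis.

```python
def chi_square_independence(table):
    """Chi-square statistic for a two-way contingency table (rows x columns)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # independence model
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = psychology / biology majors, cols = male / female
table = [[30, 20],
         [10, 40]]
chi2 = chi_square_independence(table)  # df = (rows - 1) * (cols - 1)
```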

Nonparametric Techniques: Quantitative data The Mann-Whitney U Test The parametric analog is the independent-samples t-test. Here we compare the medians of 2 different groups whose data do not follow a normal distribution, do not have equal variances, or fail some other parametric assumption. The most important reason for choosing it: the data are not normal.
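The U statistic counts, over all cross-group pairs, how often a value in one group beats a value in the other, with ties counting one half. A minimal sketch with hypothetical scores:

```python
def mann_whitney_u(a, b):
    """U for sample a: pairs (x, y) with x > y count 1, ties count 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

u = mann_whitney_u([3, 5, 8], [1, 2, 6])  # hypothetical scores
```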

Nonparametric Techniques: Quantitative data The Kruskal-Wallis one-way ANOVA This means you have 1 factor with several levels. For example, ADHD with several levels: hyperactive type, inattentive type, and the combined type. The DV = how these levels affect learning.

Nonparametric Techniques: Quantitative data The sign test is used for 1 factor having 2 levels. For example, in bipolar disorder the 2 levels are mania and depression. The sign test looks only at the direction of each difference. So if you give the same treatment for these two problems and want to know which responds sooner, our interest is not the exact difference between the response times but which responds earlier.
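Since only the direction of each paired difference matters, the core of the sign test is just two counts. A sketch with hypothetical days-to-response, paired by patient:

```python
def sign_test_counts(a, b):
    """Signs of the paired differences a_i - b_i; ties are dropped."""
    pos = sum(1 for x, y in zip(a, b) if x > y)
    neg = sum(1 for x, y in zip(a, b) if x < y)
    return pos, neg

# Hypothetical days until response to the same treatment, per patient
depression_days = [6, 9, 5, 10, 7]
mania_days = [5, 7, 6, 8, 4]
pos, neg = sign_test_counts(depression_days, mania_days)
# pos = patients whose depressive phase responded later than their manic phase
```

The counts are then compared to a binomial distribution with p = 0.5 under H0.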

Control of Extraneous Variables
Randomization
Hold certain variables constant
Build the variables into the design
Matching
Use subjects as their own controls
Analysis of covariance

Regression and correlational methods Regression and correlational methods are applied to investigate the relationship between 2 or more variables. In regression, we look at one or more IVs and their effect on the DV, or response. *Issue: causality. For example, we may have age at several levels as the IV and CT mastery exam scores as the DV.

Correlation Correlation refers more generally to the relationship or interdependence between 2 variables, and therefore applies to situations where regression may be inappropriate. Correlation looks at 2 variables at a time, whereas regression looks at the effect of several IVs on 1 DV. Measures of correlation include the product-moment correlation coefficient, which measures the degree of linear correlation between 2 continuous variables, and, for ranked, continuous, or small-n variables, Kendall's tau and Spearman's rho.
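The two coefficients can be sketched in a few lines: Pearson's r from the raw values, and Spearman's rho as Pearson's r applied to the ranks. This simplified version assumes there are no ties in the data.

```python
import math

def pearson_r(x, y):
    """Product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def spearman_rho(x, y):
    """Pearson's r computed on ranks (no tie correction in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson_r(ranks(x), ranks(y))

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])      # perfectly linear toy data
rho = spearman_rho([1, 2, 3, 4], [2, 4, 6, 8])
```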

Multiple linear regression Multiple linear regression is a method for fitting a relationship in which the DV is a linear function of several IVs. For example, age measured at different levels, GPA, and number of friends as predictors, and their joint effect on a single DV.
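A sketch of the fit via the normal equations (X'X)b = X'y, solved with a small Gaussian elimination. The predictor values and scores below are invented so that the true coefficients are known in advance.

```python
def fit_linear(X, y):
    """Least-squares coefficients [b0, b1, ...] via the normal equations."""
    rows = [[1.0] + list(r) for r in X]  # prepend an intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for i in range(p):
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            for j in range(i, p):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# Invented data: y = 1 + 2*x1 + 3*x2 exactly, so the fit should recover [1, 2, 3]
X = [[1, 1], [2, 1], [1, 2], [2, 2], [3, 3]]
y = [6, 8, 9, 11, 16]
beta = fit_linear(X, y)
```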

Multiple Correlation Multiple correlation refers to the degree of interdependency between variables in a group. It is often calculated as a coefficient in the multiple regression context, where it represents the correlation between the observed values of the DV and the values predicted by the multiple regression equation. When you do a regression using several IVs in the model, the multiple correlation measures the collective relationship of the IVs to the DV. It is one number that shows how good the model is; its square is called the coefficient of determination.

Factor analysis Factor analysis and associated techniques in multivariate analysis seek to explain the relations between the variables in a set, using the correlation matrix for the set. Principal component analysis establishes a set of uncorrelated combinations of the original variables which explain, in decreasing order of magnitude, the variation in the sample.

Discriminant function analysis Discriminant analysis deals with the problem of a single set of variables which have different mean values but identical correlations in 2 or more populations. A discriminant function is estimated using individuals from known populations and then used to classify unknown individuals.

Discriminant function analysis You are using this statistic to classify people or subjects into groups defined by a categorical (qualitative) variable. You cannot use multiple regression when the criterion variable is categorical; discriminant analysis handles that case. The predictors may themselves be categorical, for example sex (male/female), SES (3 levels), and race (4 levels).

Path analysis Path analysis assesses the likelihood of a causal relationship between 3 or more variables. The essential idea is to formulate a theory about the possible causes of a particular problem, for example school violence: identify causal variables that could explain why school violence occurs, and then determine whether the correlations among all the variables are consistent with this theory.

Other techniques Other techniques, such as multidimensional scaling and cluster analysis, are employed to explore the structural relationships between individuals for whom multiple observations are made. In this case, we might like to place the individuals in the study into 5 clusters based on the observations made. For example, you have 50 cities in the US and you want to put them into 5 clusters. This method sorts the cities into 5 groups based on characteristics of the cities, such as population, economic growth, pollution, etc.