American Psychological Association's Task Force on Statistical Inference (TFSI), 1999
Guidelines and Explanations: applications of significance testing in psychology
Two types of guidelines: 1) those that deal with statistics directly (which test to use, effect size, examining your data for abnormalities); 2) those that are not statistically testable but have huge implications for your study (randomization, confounds, missing data)
Methods (Design): Define the population clearly; interpretation of results depends on the characteristics of the population. Define the control or comparison group clearly.
Methods (Random Assignment): The best way to limit bias and confounds. Humans are awful randomizers; use a computer.
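A minimal sketch of letting the computer do the randomizing, using NumPy; the number of participants, the seed, and the two-group design are illustrative assumptions, not part of the slides.

```python
import numpy as np

# Illustrative only: 40 participants assigned to two equal-sized groups.
rng = np.random.default_rng(seed=42)       # seeded generator so the assignment is reproducible
participants = np.arange(40)               # participant IDs 0..39
shuffled = rng.permutation(participants)   # computer-generated random order
treatment, control = shuffled[:20], shuffled[20:]

print("Treatment group:", sorted(treatment))
print("Control group:  ", sorted(control))
```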
Methods (Non-Random Assignment): The comparison is probably not a true control group; call it a "contrast group." Describe which variables have been controlled, which have not, and how the uncontrolled ones might confound the results.
Methods (Power and Sample Size): Show the process that led to the sample-size decision. Show how effect-size estimates were derived from previous research and theory.
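One way to document that process is an a priori power calculation. A minimal sketch using the normal-approximation formula for a two-sample t test; the planning values (d = 0.5, alpha = .05, power = .80) are assumed for illustration, not taken from the slides.

```python
from scipy.stats import norm

# Assumed planning values: a medium effect from prior research,
# two-sided alpha = .05, desired power = .80.
d, alpha, power = 0.5, 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # critical z for a two-sided test
z_beta = norm.ppf(power)            # z corresponding to the desired power

# Normal-approximation sample size per group for a two-sample t test.
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(f"About {n_per_group:.0f} participants per group")   # roughly 63 per group
```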
Methods (Variables/Constructs): Explicitly define the variables in the study and show how they relate to its goals. Be clear and consistent with measurement units. Use the actual measurement as the variable's name.
Results (Complications): Missing data? Attrition? Nonresponse? What techniques show that the results are not an artifact of anomalies such as outliers? Before running statistics, LOOK AT YOUR DATA.
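A minimal sketch of a pre-analysis screen for anomalies, assuming the data sit in a plain NumPy array; the sample values and the 1.5 × IQR fences are just one common, illustrative heuristic for flagging points to inspect.

```python
import numpy as np

# Illustrative data with one obvious anomaly.
scores = np.array([12, 15, 14, 13, 16, 15, 14, 13, 90, 15], dtype=float)

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # common 1.5 * IQR fences

outliers = scores[(scores < low) | (scores > high)]
print("Suspect values to inspect before modeling:", outliers)   # flags the 90
```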
Analysis (Minimally Sufficient Analysis): Occam's Razor, simpler is better. Understand how the analysis works and what it is doing.
Analysis (Assumptions): Examine residuals graphically, not necessarily statistically. Don't rely on kurtosis or skewness statistics.
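A minimal sketch of a graphical residual check for a simple linear fit, using NumPy and Matplotlib; the simulated data are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)   # illustrative data

slope, intercept = np.polyfit(x, y, 1)        # simple linear fit
fitted = slope * x + intercept
residuals = y - fitted

# Residuals vs. fitted values: look for curvature, funnels, or stray points
# rather than relying on a formal skewness or kurtosis test.
plt.scatter(fitted, residuals)
plt.axhline(0, color="gray")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()
```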
Analysis (Hypothesis Tests): Report the p value, confidence interval, and effect size. Never say "accept the null hypothesis."
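A minimal sketch of reporting all three quantities for a two-sample comparison with SciPy; the simulated samples and the pooled-SD form of Cohen's d are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, 50)   # illustrative group A
b = rng.normal(11.0, 2.0, 50)   # illustrative group B

t, p = stats.ttest_ind(a, b)

# Effect size: Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

# 95% confidence interval for the mean difference, using the t distribution.
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
ci = stats.t.interval(0.95, df=a.size + b.size - 2, loc=diff, scale=se)

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```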
Analysis (Effect Size): An unstandardized measure can be used if the units of measurement are meaningful (e.g., number of cigarettes per day). Add brief comments that put the effect size in practical and theoretical context.
Analysis (Interval Estimates): Put 'em in, for pretty much the same reason as effect sizes: stability across studies.
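Tying the last two points together, a minimal sketch of reporting an unstandardized effect (cigarettes per day) along with its interval estimate; the paired before/after counts are made-up illustrative data.

```python
import numpy as np
from scipy import stats

# Illustrative daily cigarette counts for the same participants before and after.
before = np.array([20, 18, 22, 25, 19, 21, 23, 20], dtype=float)
after = np.array([14, 15, 17, 20, 13, 16, 18, 15], dtype=float)

# Unstandardized effect: mean reduction in cigarettes per day.
diff = before - after
mean_reduction = diff.mean()
ci = stats.t.interval(0.95, df=diff.size - 1,
                      loc=mean_reduction, scale=stats.sem(diff))

print(f"Mean reduction: {mean_reduction:.1f} cigarettes/day, "
      f"95% CI [{ci[0]:.1f}, {ci[1]:.1f}]")
```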
Analysis (Multiple Comparisons): ANOVA with Tukey's HSD can be too conservative. Find a middle ground: don't test everything, but run enough comparisons to find interesting results rather than so few that the hypotheses and results are boring.
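A minimal sketch of one such middle ground: correcting only a small set of planned comparisons with the Holm procedure via statsmodels, instead of running every possible pairwise test. The raw p values are illustrative.

```python
from statsmodels.stats.multitest import multipletests

# Illustrative raw p values from a handful of planned comparisons.
raw_p = [0.004, 0.020, 0.049, 0.180]

reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for p, p_adj, r in zip(raw_p, adj_p, reject):
    print(f"raw p = {p:.3f}  Holm-adjusted p = {p_adj:.3f}  significant: {r}")
```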
Analysis (Tables and Figures): Use both tables and figures. Tell your story, but simpler is better. Include confidence intervals if possible.
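A minimal sketch of a figure that carries its confidence intervals, using Matplotlib error bars; the group labels, means, and interval half-widths are made-up illustrative numbers.

```python
import matplotlib.pyplot as plt

groups = ["Control", "Treatment A", "Treatment B"]   # illustrative labels
means = [10.2, 12.8, 11.5]                           # illustrative group means
ci_half_widths = [0.9, 1.1, 1.0]                     # illustrative 95% CI half-widths

plt.errorbar(groups, means, yerr=ci_half_widths, fmt="o", capsize=4)
plt.ylabel("Outcome (measurement units)")
plt.title("Group means with 95% confidence intervals")
plt.show()
```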
Thank you. Any Questions?