Statistical vs. Practical Significance


Statistical Significance
A significant difference (i.e., rejecting the null hypothesis) means that the observed difference in group means is unlikely to be due to sampling error. The problem is that statistically significant differences can be found even when the actual differences are very small, provided the sample size is large enough.

Statistical Significance
In fact, a difference between any two sample means will be statistically significant if the sample is large enough. For example, men and women have different average IQ scores.

Practical Significance
Practical (or clinical) significance asks the larger question about differences: "Are the differences between samples big enough to have real meaning?" Although men and women may have different average IQs, is that difference large enough to have any practical implication?

Practical Significance
The fifth edition of the APA (2001) Publication Manual states that "it is almost always necessary to include some index of effect size or strength of relationship in your Results section.… The general principle to be followed … is to provide the reader not only with information about statistical significance but also with enough information to assess the magnitude of the observed effect or relationship" (pp. 25–26).

Practical Significance
Practical significance is generally assessed with some measure of effect size. Effect size measures can be grouped into two categories: difference measures and variance-accounted-for measures.

Difference effect sizes
Simple mean difference. Suppose you design a control-group experiment to evaluate the effects of CBT on depression.
Experimental group post-test score = 18
Control group post-test score = 16
Difference = 18 - 16 = 2

Difference effect sizes
Problems with the simple mean difference: it depends on the scale of measurement, and it ignores the normal variation in scores. For example, if the scale in the example above has an SD of 15 points, a 2-point difference is small: treatment shifted depression by only about .13 SDs. If the scale has an SD of 1 point, the same 2-point difference is very large: treatment had a 2 SD effect.

Difference effect sizes
We can overcome this problem by standardizing the mean difference. One such measure was proposed by Gene Glass:
Δ = (Mean_treatment − Mean_control) / SD_control
Other SDs may be used instead, such as a pooled (combined) SD from the treatment and control groups.

If variances are equal, the pooled SD computed from both groups is used as the denominator (Cohen's d).

If variances are unequal, the control-group SD alone is typically used as the standardizer (Glass's Δ).
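
A minimal Python sketch of these standardized differences. The function names are made up for illustration, and the SDs and group sizes beyond the slides' 18-vs-16 example are assumed values:

```python
import math

def glass_delta(mean_tx, mean_control, sd_control):
    """Glass's delta: standardize the mean difference by the control-group SD."""
    return (mean_tx - mean_control) / sd_control

def cohens_d_pooled(mean_tx, mean_control, sd_tx, sd_control, n_tx, n_control):
    """Cohen's d: standardize by a pooled SD (reasonable when group variances are similar)."""
    pooled_var = ((n_tx - 1) * sd_tx ** 2 + (n_control - 1) * sd_control ** 2) / (n_tx + n_control - 2)
    return (mean_tx - mean_control) / math.sqrt(pooled_var)

# The slides' hypothetical CBT example: post-test means of 18 (treatment) and 16 (control).
print(glass_delta(18, 16, 15))   # scale SD = 15 -> ~0.13, a small effect
print(glass_delta(18, 16, 1))    # scale SD = 1  -> 2.0, a very large effect
print(cohens_d_pooled(18, 16, sd_tx=14, sd_control=15, n_tx=30, n_control=30))  # made-up SDs and ns
```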

Difference effect sizes: Interpreting
Cohen proposed general guidelines for interpreting these types of effect sizes:
d = .2 small effect
d = .5 medium effect
d = .8 large effect
These are only guidelines; you need to interpret effect sizes in the context of the research.

Variance accounted for measures
When comparing variables, variance-accounted-for measures tell us how well one variable predicts another, i.e., the magnitude of the relation. R² is one such measure from correlation or regression analysis. Eta squared (η²) is often used in ANOVA as a measure of shared variance, and omega squared (ω²) is also used with ANOVA.
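
As a rough sketch of how these ANOVA-based measures are computed from a one-way design (the helper names are not from the slides, and the sums of squares and degrees of freedom below are invented for illustration):

```python
def eta_squared(ss_between, ss_total):
    """Proportion of total variance accounted for by the grouping factor."""
    return ss_between / ss_total

def omega_squared(ss_between, ss_total, df_between, ms_within):
    """A less biased estimate of the population variance accounted for."""
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

# Hypothetical one-way ANOVA summary values (3 groups, 10 subjects per group):
ss_between, ss_within = 40.0, 160.0
ss_total = ss_between + ss_within
df_between, df_within = 2, 27
ms_within = ss_within / df_within

print(eta_squared(ss_between, ss_total))                           # 0.20
print(omega_squared(ss_between, ss_total, df_between, ms_within))  # ~0.14, slightly smaller
```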

Variance accounted for measures: Interpreting
Correlations can be judged as:
r = .1 small
r = .3 moderate
r = .5 large
For measures based on a squared value (R², η², ω²), take the square root to put them on the correlation scale.
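
A tiny illustration of the square-root conversion (the η² value here is hypothetical):

```python
import math

eta_sq = 0.09                     # hypothetical shared-variance estimate from an ANOVA
r_equivalent = math.sqrt(eta_sq)  # 0.3
print(r_equivalent)               # a moderate association by the guideline above
```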

Confidence Intervals
Statistics are used to estimate the true population value. When reporting a statistic (an estimate of a population value), it is useful to also report a range of values that is likely to include the true population value. Confidence intervals are calculated from the standard error of the statistic.

Confidence Intervals for means
Confidence interval = mean ± z(SEM), where z = 1.96 for a 95% confidence interval (z = 2 is a convenient approximation).
If the mean of a sample = 100 and the SEM = 2, then the 95% confidence interval is:
100 ± 1.96(2) = 100 ± 3.92
or, approximately, 100 ± 2(2) = 100 ± 4, which is close enough for government work.
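
A short sketch of the same calculation in Python (the helper name is hypothetical; the numbers are the slide's example):

```python
def confidence_interval(mean, sem, z=1.96):
    """z = 1.96 gives a 95% interval; z = 2 is a quick hand approximation."""
    margin = z * sem
    return mean - margin, mean + margin

# The slide's example: sample mean = 100, SEM = 2
print(confidence_interval(100, 2))       # (96.08, 103.92)
print(confidence_interval(100, 2, z=2))  # (96.0, 104.0) -- "close enough for government work"
```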

Confidence Intervals
Use confidence intervals when you want to show where the true value is likely to lie and when reporting test results.