Interpretation of Common Statistical Tests Mary Burke, PhD, RN, CNE

Learning Objectives
Upon completion of this presentation, the learner will be able to:
- Differentiate between descriptive and inferential statistical analyses
- Describe selected statistical tests
- Interpret data analysis results and articulate their meaning and significance or nonsignificance

Descriptive Statistics
A group of procedures used to classify or summarize (i.e., describe) numerical data.
The level of data used is often nominal (categories with no particular order).
For example, a sample can be described by:
- Gender
- Marital status
- Employment status
- Age (depending on how it is measured)

Inferential Statistics
Tests used to make generalizations about a population by studying a sample drawn from that population.
We test a hypothesis to see whether results from a sample can be generalized to a specific population.
The levels of data include ordinal, interval, and ratio.

Review of Common Descriptive Statistics
Frequencies and Percentages
An arrangement of values that shows the number of times a given score or group of scores occurs.

Variable   Frequency   Percent   Valid Percent   Cumulative Percent
Male       28          56
Female
Total

Frequencies and Percentages
Results of frequencies and percentages can be used to create graphics:
- Histograms
- Bar graphs
- Scatter plots (can show outliers in the data)
These are additional ways to "describe" your data.

Measures of Central Tendency
Another way to "describe" a sample. Examples include:
- Mean
- Median
- Mode
- Range (a measure of variability often reported alongside them)
It is important to determine these values to see whether the data are normally distributed (this affects which test is run).

Inferential Statistics
Used to analyze a sample and, from that analysis, make a generalization about the population from which the sample came.
Two types of inferential statistics:
- Confidence intervals
- Hypothesis testing

Confidence Intervals
Give a range of values for an unknown population parameter, estimated from a statistical sample.
A range of values that we are confident contains the population parameter.
Expressed as an interval together with the degree of confidence that the parameter lies within that interval.
Example: we can say with 95 percent confidence that the population mean falls between the interval's lower limit and 43.98.
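
As an illustration (not from the original slides), a 95% confidence interval for a mean can be computed in Python with SciPy; the sample values below are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of scores (illustrative data only)
scores = np.array([41, 39, 44, 42, 40, 43, 45, 38, 42, 41])

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

# 95% confidence interval based on the t distribution
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1,
                                   loc=mean, scale=sem)

# We are 95% confident the population mean lies within this interval
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```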

Level of Significance
The criterion used to reject or fail to reject the null hypothesis.
Defined as the probability of making a Type I error (incorrectly rejecting the null hypothesis in favor of the alternative hypothesis).
Researchers usually use either 0.01 or 0.05, meaning that the decision to reject the null hypothesis may be incorrect 1% (0.01) or 5% (0.05) of the time.

Steps in Hypothesis Testing
1. State the hypotheses (null and alternative)
2. Formulate an analysis plan (how the sample data will be used to test the hypotheses)
3. Analyze the sample data
4. Interpret the results (reject or fail to reject the null hypothesis)
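
A minimal sketch of these four steps, shown here with a one-sample t-test in Python; the hypotheses, data, and alpha level are invented for illustration (SPSS would report the same quantities).

```python
from scipy import stats

# Step 1: State the hypotheses.
#   H0: the population mean score equals 40
#   H1: the population mean score differs from 40

# Step 2: Formulate the analysis plan (one-sample t-test, alpha = 0.05).
alpha = 0.05
sample = [41, 39, 44, 42, 40, 43, 45, 38, 42, 41]   # hypothetical data

# Step 3: Analyze the sample data.
t_stat, p_value = stats.ttest_1samp(sample, popmean=40)

# Step 4: Interpret the results.
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```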

Type I and Type II Errors

Common Inferential Statistics

Independent Samples T-Test
A statistical procedure that establishes whether the observed difference between two mean scores is significant or due to chance.
To conduct the standard (pooled) t-test, we assume that the variances (the degree to which the scores are spread) of the two groups are equal.
Example: determining whether post-test scores differ between two groups, such as participants who did and did not attend an educational session (comparing pre- and post-test scores within the same group would call for a paired samples t-test).

Assumptions of an Independent Samples t-test
- The data must be continuous
- The data must be normally distributed
- A simple random sample is used

Interpretation of Independent Samples t-tests
First look at Levene's test: we want its p value to be above 0.05, which tells us that the group variances are homogeneous (equal).
Then look at the columns for the t value and the p value.
- If p is less than 0.05, there is a significant difference between the means; we reject the null hypothesis.
- If the p value is greater than 0.05, there is no significant difference between the means; we fail to reject the null hypothesis.
NOTE: Interpret a paired samples t-test in the same manner.
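
The slides interpret SPSS output; the same two-step check (Levene's test for equal variances, then the t-test) can be sketched in Python with SciPy. The group scores below are hypothetical.

```python
from scipy import stats

# Hypothetical post-test scores for two independent groups
group_a = [78, 85, 90, 72, 88, 81, 79, 84]
group_b = [70, 75, 80, 68, 74, 77, 72, 71]

# Levene's test: p > 0.05 suggests the group variances are homogeneous
lev_stat, lev_p = stats.levene(group_a, group_b)

# Independent-samples t-test (pool the variances only if Levene's p > 0.05)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=(lev_p > 0.05))

print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 -> significant difference between the group means (reject H0)
```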

Analysis of T-tests

Paired Samples T-test
Used when the sample contains two related sets of scores (e.g., a matched treatment and control group, or the same participants measured twice).
There is a "matched" pair for each data occurrence in each group.

Assumptions of a Paired Samples T-test
- Data are continuous
- The differences for the matched pairs follow a normal distribution
- The matched pairs come from a simple random sample
- Participants are measured twice (pre/post design)
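
A minimal pre/post sketch with SciPy, assuming each participant is measured twice; the scores are invented for illustration.

```python
from scipy import stats

# Hypothetical pre- and post-treatment scores for the same 8 participants
pre  = [6, 7, 5, 8, 6, 7, 9, 5]
post = [4, 5, 5, 6, 4, 6, 7, 4]

# Paired-samples t-test on the matched differences (pre - post)
t_stat, p_value = stats.ttest_rel(pre, post)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 -> the mean pre/post difference is statistically significant
```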

Analysis of ANOVAs
An ANOVA is used to determine whether there is a significant difference among the means of three or more groups.
To determine significance, look at the p value: if p is less than 0.05, there is a significant difference.
However, an ANOVA does not show where the difference is; you would have to use post-hoc testing such as Tukey HSD, LSD, Scheffé, etc.

Analysis of Variance (ANOVA)
Used to determine whether there are significant differences among the means of three or more independent groups.
Cannot tell which groups are significantly different; post-hoc tests must be run to determine which group differs.
Why use it over multiple t-tests? Every time you run a t-test at alpha = 0.05, there is a 5% chance of making a Type I error. Running three separate t-tests inflates the overall chance of at least one Type I error to roughly 14% (1 − 0.95³ ≈ 0.143).

Assumptions of ANOVAs
- The dependent variable is normally distributed for each group being compared
- There is homogeneity of variance (the population variances of the groups are equal)
- The groups are mutually exclusive
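
A one-way ANOVA across three hypothetical groups, sketched with SciPy; a post-hoc test (e.g., Tukey HSD, available in statsmodels) would still be needed to locate where any difference lies.

```python
from scipy import stats

# Hypothetical scores for three independent, mutually exclusive groups
drug_a = [23, 25, 28, 22, 26]
drug_b = [30, 32, 29, 31, 34]
drug_c = [24, 27, 26, 25, 23]

# One-way ANOVA: are the three group means equal?
f_stat, p_value = stats.f_oneway(drug_a, drug_b, drug_c)

print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 -> at least one group mean differs; run post-hoc tests
# (e.g., Tukey HSD) to find which groups differ.
```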

Analysis of ANOVA

Correlations
Used to determine the strength and direction of the relationship between two variables.
Correlations do not indicate that one variable caused a change in the other variable.

Pearson’s R Correlation Coefficient Measures how well a straight line fits through a scatter of points plotted on an x and y axis. Variables should be measured as continuous (ratio) The correlation coefficient shows the strength and direction of the relationship (ranges from -1 to + 1). The higher the number, the stronger the relationship. If the correlation coefficient (r) is positive, this means that when one variable increases so does the other If the correlation coefficient (r) is negative, this is a inverse relationship meaning that as one variable increases, the other decreases.

Spearman’s Rho The non-parametric version of the Pearson’s R Measures the strength of the relationship between two ranked variables Expressed as P or r s Assumptions: Variables are either ordinal, interval or ratio A monontonic (non-linear) relationship between the variables exisit The assumptions for the Pearson’s r are violated

Analysis of Pearson r Correlations
The Pearson r is .777 and the p value is reported as .000 (i.e., p < .001).
This indicates a strong, statistically significant positive correlation between height and distance jumped.

Analysis of Spearman’s Rho Analysis is the same as the Pearson’s r. Look at the p value to determine the significance of the correlation Look at the r value to determine the strength and direction of the correlation

Regression Analysis
An important application of the concept of correlation.
Used to "predict" the scores on one variable based on knowledge of scores on another variable.
Assumes the two variables are linearly related and that the correlation between them is reasonably strong (e.g., greater than 0.5).
Example of the use of regression analysis: which factors are strong predictors of a nurse educator's technostress?

Analysis of Regression Output
R is the correlation between the variables.
R² indicates the proportion of variance in the dependent variable (DV) explained by the independent variable (IV).
The p value is less than .05, so the overall model is significant.

The Coefficients table provides the information needed to predict price from income, as well as to determine whether income contributes statistically significantly to the model (by looking at the "Sig." column).
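
A simple linear regression sketch with SciPy, using invented income/price data to mirror the slide's example; with a single predictor, R² is just the squared correlation.

```python
from scipy import stats

# Hypothetical predictor (income, in $1000s) and outcome (price, in $1000s)
income = [30, 45, 50, 60, 75, 80, 95, 110]
price  = [12, 18, 20, 25, 28, 33, 38, 45]

result = stats.linregress(income, price)

r_squared = result.rvalue ** 2   # proportion of variance in price explained by income
print(f"price = {result.intercept:.2f} + {result.slope:.2f} * income")
print(f"R^2 = {r_squared:.3f}, p = {result.pvalue:.4f}")
# A small p value for the slope -> income contributes significantly to the model
```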

Nonparametric Testing "People can come up with statistics to prove anything....14% of people know that.” Homer Simpson

Chi Square
Tests for the association between two categorical variables: are the variables "independent" or "related"?
Assumptions:
- Simple random sampling
- Each population is at least 10 times as large as its respective sample
- The variables are categorical
- The expected count in each cell of the contingency table is greater than 5
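
A chi-square test of independence on a 2×2 contingency table, sketched with SciPy; the counts and category labels are hypothetical.

```python
from scipy import stats

# Hypothetical contingency table: rows = gender, columns = smoker / non-smoker
table = [[20, 30],    # male
         [25, 25]]    # female

chi2, p_value, dof, expected = stats.chi2_contingency(table)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
print("expected counts:", expected)   # assumption check: expected counts > 5
# p > 0.05 -> no statistically significant association between the variables
```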

Interpreting a Chi Square

We look at the Pearson’s Chi Square row to interpret the results. The significance is.485 so there is no relationship between the variables.

Mann-Whitney U
The non-parametric version of the independent samples t-test.
Compares two independent groups by ranking the scores rather than comparing means.
Used when a normal distribution of the data cannot be assumed.
Assumptions:
- Random samples from the population
- Independent samples
- At least ordinal data
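
A Mann-Whitney U sketch with SciPy, assuming two independent groups measured on an ordinal scale; the ratings and group labels are invented to echo the diet/exercise example on the following slides.

```python
from scipy import stats

# Hypothetical ordinal ratings (1-10) for two independent groups
diet     = [7, 8, 6, 9, 7, 8, 9, 6]
exercise = [5, 6, 4, 7, 5, 6, 5, 4]

u_stat, p_value = stats.mannwhitneyu(diet, exercise, alternative="two-sided")

print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
# p < 0.05 -> the rank distributions of the two groups differ significantly
```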

Interpreting a Mann Whitney U

The first table shows that the diet group had the higher mean rank. The second table indicates a statistically significant p value (below .05). This means that the scores of the diet group were statistically significantly higher than those of the exercise group.

Wilcoxon Signed-Rank Test
The non-parametric version of the paired samples t-test.
Based on the order in which the observations fall; each observation has a "rank" in the sample.
The Wilcoxon signed-rank test looks at the sums of the ranks to see whether there are differences between the two related samples.
Can be used for normally distributed and non-normally distributed samples.
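
A Wilcoxon signed-rank sketch with SciPy for pre/post pain scores on the same participants; the values are invented for illustration.

```python
from scipy import stats

# Hypothetical pain scores before and after treatment (same participants)
pre  = [7, 6, 8, 5, 9, 6, 7, 8, 5, 6]
post = [5, 6, 6, 4, 7, 6, 5, 7, 5, 4]

# Zero differences are discarded by the default "wilcox" zero method
w_stat, p_value = stats.wilcoxon(pre, post)

print(f"W = {w_stat:.1f}, p = {p_value:.3f}")
# p < 0.05 -> the pre/post ranks differ significantly
```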

Interpreting the Wilcoxon Signed-Rank Test
Eleven participants had a higher pain score before treatment, 4 had a higher score after treatment, and 10 had no change.

Interpreting the Wilcoxon Signed-Rank Test
The significance value is greater than 0.05, so the result is not statistically significant.

Kruskal-Wallis
The non-parametric version of the one-way ANOVA.
Used with two or more independent samples; analyzes the distributions of ranks across the groups.
Assumptions:
- Random samples
- Independence within and among the samples
- Variables measured at least at the ordinal level
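
A Kruskal-Wallis sketch with SciPy across three hypothetical drug groups, mirroring the example interpreted on the next slide; the scores are invented.

```python
from scipy import stats

# Hypothetical ordinal pain-relief scores for three independent drug groups
drug_a = [3, 4, 2, 5, 3, 4]
drug_b = [6, 7, 5, 8, 7, 6]
drug_c = [4, 5, 3, 4, 5, 4]

h_stat, p_value = stats.kruskal(drug_a, drug_b, drug_c)

print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 -> at least one drug group's rank distribution differs;
# post-hoc pairwise comparisons would identify which.
```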

Interpreting Kruskal-Wallis
The mean rank column gives the rank scores for the different drugs.
Look at the chi-square significance: p is less than 0.05, so there is a significant difference between the drugs.
As with an ANOVA, post-hoc testing would be needed to locate the difference.

All the statistics in the world cannot measure the warmth of a smile. Chris Hart

References
Cronk, B. C. (2012). How to use SPSS: A step-by-step guide to analysis and interpretation (7th ed.). Glendale, CA: Pyrczak Publishing.
Munro, B. (2005). Statistical methods for health care research (5th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.