Chapter 15: Analyzing Quantitative Data
Levels of Measurement
– Nominal measurement: involves assigning numbers to classify characteristics into categories
– Ordinal measurement: involves ranking objects based on their relative standing on an attribute
Levels of Measurement (cont’d)
– Interval measurement: occurs when objects are ordered on a scale that has equal distances between points on the scale
– Ratio measurement: occurs when there are equal distances between score units and there is a rational, meaningful zero
Statistical Analysis
– Descriptive statistics: used to describe and synthesize data
– Inferential statistics: used to make inferences about the population based on sample data
Descriptive Statistics: Frequency Distributions
– A systematic arrangement of numeric values on a variable from lowest to highest, with a count of the number of times each value was obtained
Frequency Distributions (cont’d)
– Can be described in terms of:
  – Shape
  – Central tendency
  – Variability
– Can be presented in tabular form (counts and percentages)
– Can be presented graphically (e.g., frequency polygons)
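As a sketch, a tabular frequency distribution (counts and percentages, ordered lowest to highest) can be produced with Python’s standard library; the scores below are hypothetical:

```python
from collections import Counter

scores = [3, 5, 4, 3, 2, 5, 3, 4, 4, 3]

freq = Counter(scores)      # maps each value to its count
n = len(scores)
for value in sorted(freq):  # lowest to highest
    count = freq[value]
    print(f"{value}: n={count}, {100 * count / n:.0f}%")
```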
Shapes of Distributions
1. Symmetry
– Symmetric
– Skewed (asymmetric)
  – Positive skew (long tail points to the right)
  – Negative skew (long tail points to the left)
Examples of Symmetric Distributions (Figure 15-2)
Examples of Skewed Distributions (Figure 15-3)
Shapes of Distributions (cont’d)
2. Peakedness (how sharp the peak is)
3. Modality (number of peaks)
– Unimodal (1 peak)
– Bimodal (2 peaks)
– Multimodal (more than 2 peaks)
Normal Distribution (Bell-Shaped Curve)
– Symmetric
– Unimodal
– Not too peaked, not too flat
Central Tendency
– An index of the “typicalness” of a set of scores, derived from the center of the distribution
– Mode: the most frequently occurring score in a distribution
  2 3 3 3 4 5 6 7 8 9 → Mode = 3
Central Tendency (cont’d)
– Median: the point in a distribution above which and below which 50% of cases fall
  2 3 3 3 4 | 5 6 7 8 9 → Median = 4.5
– Mean: the sum of all scores divided by the total number of scores
  2 3 3 3 4 5 6 7 8 9 → Mean = 5.0
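The three indexes above can be checked with Python’s standard `statistics` module, using the slide’s own ten scores:

```python
import statistics

scores = [2, 3, 3, 3, 4, 5, 6, 7, 8, 9]

print(statistics.mode(scores))    # 3   (most frequent score)
print(statistics.median(scores))  # 4.5 (midpoint of 4 and 5)
print(statistics.mean(scores))    # 5.0 (sum of 50 over 10 scores)
```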
Comparison of Measures of Central Tendency
– Mode: useful mainly as a gross descriptor, especially of nominal measures
– Median: useful mainly as a descriptor of the typical value when the distribution is skewed
– Mean: the most stable and widely used indicator of central tendency
Variability
– The degree to which scores in a distribution are spread out or dispersed
– Homogeneity: little variability
– Heterogeneity: great variability
Two Distributions of Different Variability (Figure 15-4)
Key Indexes of Variability
– Range: highest value minus lowest value
– Standard deviation (SD): average deviation of scores in a distribution
– Variance: the standard deviation, squared
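A minimal sketch of the three indexes, again on the earlier ten scores. The slide does not specify sample versus population formulas; `statistics.stdev` and `statistics.variance` use the sample (n − 1) versions:

```python
import statistics

scores = [2, 3, 3, 3, 4, 5, 6, 7, 8, 9]

value_range = max(scores) - min(scores)  # 9 - 2 = 7
sd = statistics.stdev(scores)            # sample standard deviation
variance = statistics.variance(scores)   # the standard deviation, squared
```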
Standard Deviations in a Normal Distribution (Figure 15-5)
Bivariate Descriptive Statistics
– Contingency table: a two-dimensional frequency distribution in which the frequencies of two variables are cross-tabulated
– “Cells” at the intersections of rows and columns display counts and percentages
– Variables should be nominal or ordinal
Bivariate Descriptive Statistics (cont’d)
– Correlation: indicates the direction and magnitude of the relationship between two variables
– Used with ordinal, interval, or ratio measures
– Correlation coefficients (usually Pearson’s r) summarize the information
– With multiple variables, a correlation matrix can be displayed
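Pearson’s r can be computed directly from its defining formula. The sketch below uses hypothetical score lists; a perfect positive linear relationship yields r ≈ +1.0:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores with a perfect positive relationship:
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # ≈ 1.0
```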
Inferential Statistics
– Used to make objective decisions about population parameters using sample data
– Based on the laws of probability
– Use the concept of theoretical distributions
Sampling Distribution of the Mean
A theoretical distribution of the means of an infinite number of samples drawn from the same population. It:
– is approximately normally distributed (by the central limit theorem)
– has a mean that equals the population mean
– has a standard deviation (SD) called the standard error of the mean (SEM)
– has an SEM that is estimated from the sample SD and the sample size
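The last point reduces to one formula, SEM = SD / √n. A minimal sketch with a hypothetical sample:

```python
import math
import statistics

sample = [4, 5, 6, 5, 4, 6, 5, 5, 7, 3]  # hypothetical scores

sd = statistics.stdev(sample)
sem = sd / math.sqrt(len(sample))  # standard error of the mean
```

Because n appears in the denominator, larger samples yield smaller SEMs: the sample mean becomes a more precise estimate of the population mean.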
Statistical Inference: Two Forms
– Estimation of parameters
– Hypothesis testing (more common)
Hypothesis Testing
– Based on rules of negative inference: research hypotheses are supported if null hypotheses can be rejected
– Involves statistical decision making to either:
  – accept the null hypothesis, or
  – reject the null hypothesis
Hypothesis Testing (cont’d)
– Researchers compute a test statistic from their data, then determine whether the statistic falls within the critical region of the relevant theoretical distribution
Hypothesis Testing (cont’d)
– If the value of the test statistic indicates that the null hypothesis is “improbable,” the result is statistically significant
– A nonsignificant result means that any observed difference or relationship could have resulted from chance fluctuations
– Statistical decisions are either correct or incorrect
Errors in Statistical Decisions
– Type I error: rejection of a null hypothesis when it should not be rejected (a false positive)
  – Risk is controlled by the level of significance (alpha), e.g., α = .05 or .01
– Type II error: failure to reject a null hypothesis when it should be rejected (a false negative)
Outcomes of Statistical Decision Making (Figure 15-7)
Parametric Statistics
– Involve the estimation of a parameter
– Require measurements on at least an interval scale
– Involve several assumptions (e.g., that variables are normally distributed in the population)
Nonparametric Statistics
– Do not estimate parameters
– Involve variables measured on a nominal or ordinal scale
– Have less restrictive assumptions about the shape of the variables’ distributions than parametric tests
Overview of Hypothesis-Testing Procedures
1. Select an appropriate test statistic.
2. Establish the level of significance (e.g., α = .05).
3. Compute the test statistic with the actual data.
4. Determine the degrees of freedom (df) for the test statistic.
Overview of Hypothesis-Testing Procedures (cont’d)
5. Obtain a tabled value for the statistical test.
6. Compare the test statistic to the tabled value.
7. Make a decision to accept or reject the null hypothesis.
Commonly Used Bivariate Statistical Tests
1. t-test
2. Analysis of variance (ANOVA)
3. Pearson’s r
4. Chi-square test
t-Test
– Tests the difference between two means
– t-test for independent groups (between subjects)
– t-test for dependent (paired) groups (within subjects)
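The independent-groups t statistic can be computed from the standard pooled-variance formula. This is a textbook sketch, not a full test (it omits the p-value lookup in step 5–6 of the procedure above):

```python
import math
import statistics

def independent_t(group1, group2):
    """Pooled-variance t statistic and df for two independent groups."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    t = (statistics.mean(group1) - statistics.mean(group2)) / se
    df = n1 + n2 - 2
    return t, df
```

Identical group means yield t = 0; the larger the mean difference relative to the pooled variability, the larger |t|.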
Analysis of Variance (ANOVA)
– Tests differences among more than two means
– One-way ANOVA (e.g., 3 groups)
– Multifactor (e.g., two-way) ANOVA
– Repeated-measures ANOVA (within subjects)
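A one-way ANOVA boils down to the F ratio: between-group mean square over within-group mean square. A minimal sketch with hypothetical groups:

```python
import statistics

def one_way_f(groups):
    """F ratio for a one-way ANOVA over a list of score lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When all group means are equal, F = 0; larger between-group differences (relative to within-group spread) drive F upward.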
Correlation
– Pearson’s r, a parametric test
– Tests whether the relationship between two variables is nonzero
– Used when measures are on an interval or ratio scale
Chi-Square Test
– Tests differences in proportions across categories within a contingency table
– A nonparametric test
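The chi-square statistic compares each observed cell count with the count expected if the row and column variables were independent. A sketch for any contingency table given as a list of rows:

```python
def chi_square(table):
    """Chi-square statistic for a contingency table (rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat
```

When the observed proportions match the expected ones exactly, the statistic is 0; larger departures from independence yield larger values.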
Multivariate Statistics
– Statistical procedures for analyzing relationships among three or more variables
– Two commonly used procedures in nursing research:
  – Multiple regression
  – Analysis of covariance
Multiple Linear Regression
– Used to predict a dependent variable from two or more independent (predictor) variables
– The dependent variable is continuous (interval- or ratio-level data)
– Predictor variables are continuous (interval or ratio) or dichotomous
Multiple Correlation Coefficient (R)
– The correlation index for a dependent variable and two or more independent (predictor) variables
– Does not take negative values: it shows the strength of the relationship, but not its direction
– Can be squared (R²) to estimate the proportion of variability in the dependent variable accounted for by the independent variables
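R² is computed as 1 minus the ratio of residual to total variability. The sketch below takes a set of outcomes and a hypothetical set of regression predictions (standing in for the fitted values a regression program would produce):

```python
def r_squared(y, y_hat):
    """Proportion of variability in y accounted for by predictions y_hat."""
    mean_y = sum(y) / len(y)
    ss_total = sum((v - mean_y) ** 2 for v in y)
    ss_residual = sum((v - p) ** 2 for v, p in zip(y, y_hat))
    return 1 - ss_residual / ss_total
```

Perfect predictions give R² = 1.0; predictions no better than the mean of y give R² = 0.0.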
Analysis of Covariance (ANCOVA)
– Extends ANOVA by removing the effect of extraneous variables (covariates) before testing whether mean group differences are statistically significant
Analysis of Covariance (cont’d)
Levels of measurement of variables:
– The dependent variable is continuous (interval or ratio level)
– The independent variable is nominal (group status)
– Covariates are continuous or dichotomous
Discriminant Function Analysis
– Used to predict a categorical dependent variable (e.g., compliant/noncompliant) from two or more predictor variables
– Accommodates predictors that are continuous or dichotomous
Logistic Regression (Logit Analysis)
– Analyzes the relationships between a nominal-level dependent variable and two or more independent variables
– Uses a different estimation procedure than discriminant function analysis
– Yields an odds ratio: an index of relative risk, i.e., the odds of an outcome occurring under one condition versus the odds of it occurring under a different condition
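For a 2×2 table, the odds ratio itself is simple to compute; the cell counts below are hypothetical:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = events, b = non-events under condition 1;
    c = events, d = non-events under condition 2."""
    return (a / b) / (c / d)

# Odds of 40:10 under condition 1 vs 20:40 under condition 2:
print(odds_ratio(40, 10, 20, 40))  # 8.0
```

An odds ratio of 1.0 means the outcome is equally likely under both conditions; values above 1.0 mean higher odds under the first condition.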
Factor Analysis
– Used to reduce a large set of variables to a smaller set of underlying dimensions (factors)
– Used primarily in developing scales and complex instruments
Multivariate Analysis of Variance (MANOVA)
– The extension of ANOVA to more than one dependent variable
– Can be used with covariates: multivariate analysis of covariance (MANCOVA)
Causal Modeling
– Tests a hypothesized multivariable causal explanation of a phenomenon
– Includes:
  – Path analysis
  – Linear structural relations analysis (LISREL)