Published by Pierce Lucas. Modified over 9 years ago.
Chapter Eight: Using Statistics to Answer Questions

Statistics
Statistics is a branch of mathematics that involves the collection, analysis, and interpretation of data. Two main branches of statistics assist your decisions in different ways.
Descriptive statistics are used to summarize any set of numbers so you can understand and talk about them more intelligibly.
Inferential statistics are used to analyze data after you have conducted an experiment to determine whether your independent variable had a significant effect.
Descriptive Statistics
We use descriptive statistics when we want to summarize a set or distribution of numbers in order to communicate their essential characteristics.
One of these essential characteristics is a measure of the typical or representative score, called a measure of central tendency.
A second essential characteristic is how much variability or spread exists in the scores.
Scales of Measurement
Measurement: the assignment of symbols to events according to a set of rules.
Scale of measurement: a set of measurement rules.
Nominal scale: a scale of measurement in which events are assigned to categories.
Ordinal scale: a scale of measurement that permits events to be rank ordered.
Interval scale: a scale of measurement that permits rank ordering of events with the assumption of equal intervals between adjacent events.
Ratio scale: a scale of measurement that permits rank ordering of events with the assumption of equal intervals between adjacent events and a true zero point.
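The four scales can be illustrated with a short sketch. The variables and data below are hypothetical, chosen only to show which comparisons each scale supports:

```python
# Illustrative examples of the four scales of measurement.
# All variable names and values here are hypothetical.

team_names = ["Hawks", "Owls", "Crows"]   # nominal: categories only, no order
race_finishes = [1, 2, 3]                 # ordinal: rank order; gaps may be unequal
temps_celsius = [10, 20, 30]              # interval: equal intervals, no true zero
heights_cm = [90, 180]                    # ratio: equal intervals AND a true zero

# On a ratio scale, ratios are meaningful: 180 cm is twice 90 cm.
print(heights_cm[1] / heights_cm[0])      # 2.0

# On an interval scale, differences are meaningful (20 - 10 == 30 - 20),
# but 20 degrees C is not "twice as hot" as 10 degrees C,
# because 0 degrees C is not a true zero point.
print(temps_celsius[1] - temps_celsius[0] == temps_celsius[2] - temps_celsius[1])  # True
```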
Measures of Central Tendency
Mode: the score in a distribution that occurs most often.
Median: the number that divides a distribution in half.
Mean: the arithmetic average of a set of numbers, found by adding all the scores in a set and then dividing by the number of scores.
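Python's standard library computes all three measures directly; the scores below are made up for illustration:

```python
# Mode, median, and mean via Python's standard library.
import statistics

scores = [2, 3, 3, 5, 7, 10]              # hypothetical data

mode = statistics.mode(scores)            # most frequent score -> 3
median = statistics.median(scores)        # middle of the distribution -> 4.0
mean = statistics.mean(scores)            # sum of scores / number of scores -> 5

print(mode, median, mean)
```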
Graphing Your Results
Pie chart: a graphical representation of the percentage allocated to each alternative as a slice of a circular pie.
Histogram: a graph in which the frequency for each category of a quantitative variable is represented as a vertical column that touches the adjacent column.
Bar graph: a graph in which the frequency for each category of a qualitative variable is represented as a vertical column. The columns of a bar graph do not touch.
Frequency polygon: a graph constructed by placing a dot in the center of each bar of a histogram and then connecting the dots.
Line graph: a graph frequently used to depict the results of an experiment. The vertical or y axis is known as the ordinate; the horizontal or x axis is known as the abscissa.
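The numbers behind a histogram and its frequency polygon can be sketched without any plotting library: count frequencies per bin, then place the polygon's dots at the bin midpoints. The data and bin edges below are hypothetical:

```python
# Frequencies per bin (the heights of a histogram's columns) and the
# midpoint dots a frequency polygon would connect. Data are made up.

data = [1, 2, 2, 3, 3, 3, 4, 4, 5]
lo, width, nbins = 0, 2, 3                # bins [0,2), [2,4), [4,6]

counts = [0] * nbins
for v in data:
    idx = min(int((v - lo) // width), nbins - 1)
    counts[idx] += 1

# A histogram draws one touching column per bin; a frequency polygon
# places a dot at each bin midpoint and connects the dots.
midpoints = [lo + width * (i + 0.5) for i in range(nbins)]
polygon_points = list(zip(midpoints, counts))

print(counts)          # [1, 5, 3]
print(polygon_points)  # [(1.0, 1), (3.0, 5), (5.0, 3)]
```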
Calculating and Computing Statistics
You can find statistical formulas in Appendix B of your text.
All statistical formulas merely require addition, subtraction, multiplication, division, and finding square roots.
Your department may have access to standard statistical packages such as SPSS, SAS, BMD, or Minitab.
Measures of Variability
Variability: the amount of spread among the scores in a distribution.
Range: a measure of variability computed by subtracting the smallest score from the largest score.
Variance: a single number that represents the total amount of variation in a distribution.
Standard deviation: the square root of the variance. It has important relations to the normal curve.
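All three measures are available in the standard library; the scores below are invented. Note that `pvariance`/`pstdev` treat the scores as the whole population, while `statistics.variance`/`stdev` give the sample (n − 1) versions:

```python
# Range, variance, and standard deviation for a small made-up distribution.
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

rng = max(scores) - min(scores)        # largest minus smallest -> 7
var = statistics.pvariance(scores)     # total variation -> 4
sd = statistics.pstdev(scores)         # square root of the variance -> 2.0

print(rng, var, sd)
```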
Normal distribution: a symmetrical, bell-shaped distribution having half the scores above the mean and half the scores below the mean.
Correlation
Correlation coefficient: a single number representing the degree of relation between two variables. The value of a correlation coefficient can range from –1 to +1.
The Pearson Product-Moment Correlation Coefficient
The Pearson product-moment correlation coefficient (r) is calculated when both the X variable and the Y variable are interval or ratio scale measurements and the data appear to be linear.
Other correlation coefficients can be calculated when one or both of the variables are not interval or ratio scale measurements or when the data do not fall on a straight line.
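Pearson r can be computed from scratch with the deviation-score form, r = Σ(x − m_x)(y − m_y) / √(Σ(x − m_x)² · Σ(y − m_y)²). The X and Y values below are hypothetical interval-scale measurements:

```python
# Pearson product-moment correlation coefficient, deviation-score form.
import math

x = [1, 2, 3, 4, 5]    # hypothetical interval-scale data
y = [1, 2, 3, 4, 6]

mx = sum(x) / len(x)
my = sum(y) / len(y)

sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)
print(round(r, 4))   # about 0.99: a strong positive, nearly linear relation
```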
Inferential Statistics
What is significant? An inferential statistical test can tell us whether the results of an experiment can occur frequently or rarely by chance.
Inferential statistics with small values occur frequently by chance; inferential statistics with large values occur rarely by chance.
Null hypothesis: a hypothesis that says that all differences between groups are due to chance (i.e., not the operation of the IV).
If a result occurs often by chance, we say that it is not significant and conclude that our IV did not affect the DV.
If the result of our inferential statistical test occurs rarely by chance (i.e., it is significant), then we conclude that some factor other than chance is operative.
The t Test
The t test is an inferential statistical test used to evaluate the difference between the means of two groups.
Degrees of freedom: the ability of a number in a specified set to assume any value.
Interpretation of a t value:
Determine the degrees of freedom (df) involved.
Use the degrees of freedom to enter a t table. This table contains t values that occur by chance.
Compare your t value to these chance values. To be significant, the calculated t must be equal to or larger than the one in the table.
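The two-group comparison can be sketched with the standard pooled-variance independent-samples t formula; the two groups below are invented:

```python
# Pooled-variance independent-samples t test (standard textbook formula;
# the two groups of scores are hypothetical).
import math

group1 = [1, 2, 3, 4, 5]
group2 = [3, 4, 5, 6, 7]

n1, n2 = len(group1), len(group2)
m1 = sum(group1) / n1
m2 = sum(group2) / n2

ss1 = sum((x - m1) ** 2 for x in group1)   # sum of squared deviations
ss2 = sum((x - m2) ** 2 for x in group2)

df = n1 + n2 - 2                           # degrees of freedom
pooled_var = (ss1 + ss2) / df
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (m1 - m2) / se

print(t, df)   # -2.0 8
# Enter a t table with df = 8; |t| = 2.0 must equal or exceed the
# tabled chance value at your significance level to be significant.
```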
One-Tail Versus Two-Tail Tests of Significance
A directional hypothesis specifies exactly how (i.e., the direction) the results will turn out; a nondirectional hypothesis does not specify exactly how the results will turn out.
A one-tail t test evaluates the probability of only one type of outcome (based on a directional hypothesis).
A two-tail t test evaluates the probability of both possible outcomes (based on a nondirectional hypothesis).
The Logic of Significance Testing
Typically our ultimate interest is not in the samples we have tested in an experiment but in what these samples tell us about the population from which they were drawn. In short, we want to generalize, or infer, from our samples to the larger population.
When Statistics Go Astray: Type I and Type II Errors
Type I error (alpha): accepting the experimental hypothesis when the null hypothesis is true.
The experimenter directly controls the probability of making a Type I error by setting the significance level. You are less likely to make a Type I error with a significance level of .01 than with a significance level of .05.
However, the more extreme or critical you make the significance level to avoid a Type I error, the more likely you are to make a Type II (beta) error.
Type II (beta) error: rejecting a true experimental hypothesis.
Type II errors are not under the direct control of the experimenter. We can indirectly cut down on Type II errors by implementing techniques that will cause our groups to differ as much as possible, for example, the use of a strong IV and larger groups of participants.
Effect Size
Effect size (Cohen's d): the magnitude or size of the experimental treatment.
A significant statistical test tells us only that the IV had an effect; it does not tell us about the size of the significant effect. Cohen (1977) indicated that d values greater than .80 reflect large effect sizes.
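One common formulation of Cohen's d divides the mean difference by the pooled standard deviation; the two groups below are made up for illustration:

```python
# Cohen's d: mean difference over pooled standard deviation
# (one common formulation; the two groups are hypothetical).
import math

group1 = [1, 2, 3, 4, 5]
group2 = [3, 4, 5, 6, 7]

n1, n2 = len(group1), len(group2)
m1 = sum(group1) / n1
m2 = sum(group2) / n2

ss1 = sum((x - m1) ** 2 for x in group1)
ss2 = sum((x - m2) ** 2 for x in group2)
pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))

d = abs(m1 - m2) / pooled_sd
print(round(d, 2))   # 1.26 -- above .80, a large effect by Cohen's criterion
```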