Presentation on theme: "ORC Staff: Jayme Palka Peter Boedeker Marcus Fagan Trey Dejong"— Presentation transcript:

1 ORC Staff: Jayme Palka Peter Boedeker Marcus Fagan Trey Dejong
Statistics Primer
General notes: This is a big-picture presentation. Given the time constraints, try not to get lost in details beyond the primer level that could confuse the attendees. Stick to one practical example throughout so people can follow (reading achievement/ability?). Coordinate with Daniela to have the slides sent out to all attendees (both onsite and remote) prior to the event. Work through computations on the whiteboard even if they appear on the slides, so bring a calculator. Incorporate visual aids into content delivery as much as possible, and use the pointer to connect what you are saying with the content on the slides; this helps attendees follow you both verbally and visually.

2 Quick Overview of Statistics

3 Descriptive vs. Inferential Statistics
Descriptive statistics: summarize and describe data (central tendency, variability, skewness).
Inferential statistics: procedures for making inferences about population parameters using sample statistics.
(Diagram: a sample drawn from a population.)

4 Measures of Central Tendency
Mode: the most frequently occurring value in a distribution. Select the value(s) with the highest frequency.
Median: the value representing the middle point of a distribution. Order the data, determine the median position = (n + 1) / 2, then locate the median at that position.
Mean: the arithmetic average of a distribution. Sum all the data values and divide by the number of values: x̄ = Σx / n.
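
The three measures above can be sketched with Python's standard `statistics` module (the reading scores here are made-up example values):

```python
# Minimal sketch of mode, median, and mean; scores are hypothetical.
import statistics

scores = [70, 85, 85, 90, 95]

mode = statistics.mode(scores)      # most frequent value -> 85
median = statistics.median(scores)  # middle of the ordered data, position (n + 1) / 2 -> 85
mean = statistics.mean(scores)      # sum of values divided by n -> 85
```

With an even number of values, `statistics.median` averages the two middle values, matching the (n + 1) / 2 position rule.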

5 Measures of Variability
Range: the difference between the largest and smallest values in the data: X_H − X_L.
Mean deviation: the average of the absolute deviations from the mean, Σ|x − x̄| / n (uncommonly used).
These measures are not very descriptive of a distribution's variability; we need better measures…

6 Measures of Variability Cont.
Sum of squares: the sum of the squared deviation scores, used to compute variance and standard deviation: SS = Σ(x − x̄)².
Variance: the average squared deviation from the mean: s² = Σ(x − x̄)² / (n − 1).
Standard deviation: the square root of the variance (commonly used): s = √[Σ(x − x̄)² / (n − 1)].
Example on the following slides.
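
A minimal sketch of these three formulas, computed by hand and checked against the standard library (the scores are made-up example values):

```python
# Sum of squares, sample variance (n - 1 denominator), and standard deviation.
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(scores)
x_bar = sum(scores) / n                      # mean = 5.0

ss = sum((x - x_bar) ** 2 for x in scores)   # SS = sum of squared deviations = 32.0
variance = ss / (n - 1)                      # s^2, using n - 1
sd = variance ** 0.5                         # s = square root of the variance
```

`statistics.variance` and `statistics.stdev` use the same n − 1 (sample) formulas shown on the slide.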

7 Variance and Sum of Squares
Student    x     (x − x̄)   (x − x̄)²
Girl #1    90
Girl #2    23
Girl #3    26
Boy #1     83
Boy #2     48
Boy #3     24
Average = ____             Sum = ____
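
The blank columns above are meant to be worked by hand; as a check, a sketch that fills them in for the six scores shown:

```python
# Deviation scores, squared deviations, their sum (SS), and the sample variance.
scores = {"Girl #1": 90, "Girl #2": 23, "Girl #3": 26,
          "Boy #1": 83, "Boy #2": 48, "Boy #3": 24}

values = list(scores.values())
mean = sum(values) / len(values)                 # 294 / 6 = 49.0

deviations = {name: x - mean for name, x in scores.items()}
squared = {name: d ** 2 for name, d in deviations.items()}

ss = sum(squared.values())                       # sum of squares = 4668.0
variance = ss / (len(values) - 1)                # 4668 / 5 = 933.6
```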

8 Empirical Rule The empirical rule states that a symmetric, normal distribution with population mean μ and standard deviation σ has the following properties: about 68% of values fall within μ ± 1σ, about 95% within μ ± 2σ, and about 99.7% within μ ± 3σ. Remember z-scores?
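
The rule can be checked by simulation; a sketch drawing from a standard normal distribution (the 68/95/99.7 thresholds are the standard empirical-rule values):

```python
# Simulate a standard normal and count the share of values within 1, 2, 3 SDs.
import random

random.seed(42)
data = [random.gauss(0, 1) for _ in range(100_000)]

within = {k: sum(abs(x) < k for x in data) / len(data) for k in (1, 2, 3)}
# within[1] comes out near 0.68, within[2] near 0.95, within[3] near 0.997
```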

9 Sampling Distribution
The theoretical distribution of a sample statistic (e.g., the mean, standard deviation, Pearson's r), as opposed to individual scores.
NOT the same thing as a sample distribution or a population distribution.
Used to help generalize the findings of our sample statistics back to our populations.
Tough to understand. Provide an example (students' reading achievement scores) to explain how one would be constructed theoretically; concrete, practical example on the next slide.

10 Sampling Distribution
We have a hat with 3 pool balls in it – 1, 2, and 3 – and we want to generate a sampling distribution of the mean for sample size = 2. What are all possible combinations of balls that could be drawn with replacement? Draw 2 balls, compute their mean, plot it on a line, and create a distribution of means. All possible outcomes are shown below in Table 1.

Table 1. All possible outcomes when two balls are sampled with replacement.
Outcome   Ball 1   Ball 2   Mean
1         1        1        1.0
2         1        2        1.5
3         1        3        2.0
4         2        1        1.5
5         2        2        2.0
6         2        3        2.5
7         3        1        2.0
8         3        2        2.5
9         3        3        3.0

This is a concrete example. Imagine doing this with IQ scores, math scores, or any other type of continuous variable – it would be impossible to do, which is why it is a theoretical distribution, and we don't actually create it.
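
The enumeration in Table 1 can be reproduced directly; a sketch:

```python
# Sampling distribution of the mean: two balls drawn with replacement
# from {1, 2, 3}, sample size n = 2.
from itertools import product
from collections import Counter

balls = [1, 2, 3]
outcomes = list(product(balls, repeat=2))        # all 9 ordered draws
sample_means = [(a + b) / 2 for a, b in outcomes]

# Frequency of each possible sample mean -- the sampling distribution itself:
distribution = Counter(sample_means)
# {1.0: 1, 1.5: 2, 2.0: 3, 2.5: 2, 3.0: 1}
```

Note the mean of all nine sample means equals the population mean (2.0), which is the sense in which the sample mean is an unbiased estimator.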

11 Sampling Error As has been stated before, inferential statistics involve using a representative sample to make judgments about a population. Let's say that we wanted to determine the nature of the relationship between county and achievement scores among Texas students. We could select a representative sample of, say, 10,000 students to conduct our study. If we find a statistically significant relationship in the sample, we could then generalize it to the entire population. However, even the most representative sample is not going to be exactly the same as its population. Given this, there is always a chance that the things we find in a sample are anomalies that do not occur in the population the sample represents. This error is referred to as sampling error. Samples are not always exactly representative of the population; sampling error is a result of using samples to approximate our populations of interest.

12 Sampling Error A formal definition of sampling error is as follows:
Sampling error occurs when random chance produces a sample statistic that is not equal to the population parameter it represents. Due to sampling error, there is always a chance that we are making a mistake when rejecting or failing to reject our null hypothesis. Remember that inferential procedures are used to determine which of the statistical hypotheses is true; this is done by rejecting or failing to reject the null hypothesis at the end of a procedure.

13 Sampling Distribution and Standard Error (SE)
Watch video up to about 2:45.

14 Hypothesis Testing Null Hypothesis Significance Testing (NHST)
Testing p-values using statistical significance tests (image from cnx.org). Effect size: a measure of the magnitude of an effect (e.g., Cohen's d). NHST: z-test, t-test, ANOVA, etc. There are many effect size measures: eta squared (η²), R², Cohen's d, etc. Talk about confidence intervals here?

15 Null Hypothesis Significance Testing
Statistical significance testing answers the following question: Assuming the sample data came from a population in which the null hypothesis is exactly true, what is the probability of obtaining the sample statistic one got for one’s sample data with the given sample size? (Thompson, 1994) Alternatively: Statistical significance testing is used to examine a statement about a relationship between two variables. Under Alternatively: discuss causal versus correlational relationships?

16 Hypothetical Example Is there a difference between the reading abilities of boys and girls? Null hypothesis (H0): there is not a difference between the reading abilities of boys and girls. Alternative hypothesis (H1): there is a difference between the reading abilities of boys and girls. Alternative hypotheses may be non-directional (above) or directional (e.g., boys have higher reading ability than girls). Formulate 2 hypotheses regarding this question: the null and the alternative. The null hypothesis always assumes no relationship between the variables. An alternative hypothesis's directionality depends on theory – you must have a good theoretical reason to hypothesize directionality.

17 Testing the Hypothesis
Use a sampling distribution to calculate the probability of a statistical outcome.
pcalc = likelihood of the sample's result.
If pcalc < pcrit: reject H0.
If pcalc ≥ pcrit: fail to reject H0.
This slide assumes the sampling distribution is discussed elsewhere… pcalc – the likelihood of an outcome occurring.

18 Level of Significance (pcrit)
Alpha level (α) determines: the probability at which you reject the null hypothesis, and the probability of making a Type I error (typically .05 or .01).

                       True Outcome in Population
Observed Outcome       H0 is true           H0 is false
Reject H0              Type I error (α)     Correct decision
Fail to reject H0      Correct decision     Type II error (β)

19 Example: Independent t-test
Research Question: Is there a difference between the reading abilities of boys and girls? Hypotheses: H0: There is not a difference between the reading abilities of boys and girls. H1: There is a difference between the reading abilities of boys and girls.

20 Dataset Reading test scores (out of 100):

Boys   Girls
88     82
90     70
95     92
81     80
93     71
86     73
79     85
89     87

21 Significance Level α = .05, two-tailed test
df = n1 + n2 – 2 = 10 + 10 – 2 = 18
Use a t-table to determine tcrit: tcrit = ±2.101
Explain what df is?

22 Decision Rules If |tcalc| > |tcrit|, then pcalc < pcrit: reject H0. Otherwise, fail to reject H0. The rejection regions lie beyond tcrit = ±2.101, with p = .025 in each tail.

23 Computations
                        Boys     Girls
Frequency (N)           10       10
Sum (Σ)                 807      881
Mean (x̄)               80.70    88.10
Variance (s²)           55.34    26.54
Standard deviation (s)  7.44     5.15

Skip most computational stuff. The algebra they can read on their own; the computer does most of the work for them anyway.

24 Computations cont. Pooled variance: s²p = [(n1 − 1)s²1 + (n2 − 1)s²2] / (n1 + n2 − 2) = 40.94. Standard error of the mean difference: SE = √[s²p(1/n1 + 1/n2)] = 2.862.
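
A sketch of the standard pooled-variance and standard-error formulas, using the summary statistics from the previous slide (the pooled variance works out to 40.94; the SE agrees with the slide's 2.862 to rounding):

```python
# Pooled variance and standard error of the mean difference.
n1 = n2 = 10
var_boys, var_girls = 55.34, 26.54   # sample variances from the slide

# Pooled variance: df-weighted average of the two sample variances
sp2 = ((n1 - 1) * var_boys + (n2 - 1) * var_girls) / (n1 + n2 - 2)  # 40.94

# Standard error of the difference between the two means
se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5   # roughly 2.86
```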

25 Computations cont. Compute tcalc:
t = (x̄1 − x̄2) / SE(x̄1 − x̄2) = (80.70 − 88.10) / 2.862 = −2.586
Decision: Reject H0. Girls scored statistically significantly higher on the reading test than boys did. Refer back to the means and the bell curve if necessary to explain the decision.
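
The t statistic and the decision rule can be sketched directly from the slide's values (SE = 2.862, tcrit = ±2.101 for df = 18, α = .05 two-tailed):

```python
# t statistic for the independent t-test and the reject/fail-to-reject decision.
mean_boys, mean_girls = 80.70, 88.10
se = 2.862                              # standard error of the mean difference
t_crit = 2.101                          # from a t-table: df = 18, alpha = .05, two-tailed

t_calc = (mean_boys - mean_girls) / se  # -7.4 / 2.862, roughly -2.586
reject_null = abs(t_calc) > t_crit      # True -> reject H0
```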

26 Confidence Intervals CI95 = (x̄1 − x̄2) ± tcrit(SE)
Sample means provide a point estimate of our population means. Due to sampling error, our sample estimates may not perfectly represent our populations of interest. It would be useful to have an interval estimate of our population means so we know a plausible range of values that our population means may fall within; 95% confidence intervals do this. They can help reinforce the results of the significance test.
CI95 = (x̄1 − x̄2) ± tcrit(SE) = −7.4 ± 2.101(2.862) = [−13.41, −1.39]
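
A sketch of the interval computation using the slide's values; note that zero lies outside the interval, which is consistent with rejecting H0:

```python
# 95% confidence interval for the difference between the two means.
diff = 80.70 - 88.10            # mean difference = -7.4
t_crit, se = 2.101, 2.862

margin = t_crit * se            # roughly 6.01
ci = (diff - margin, diff + margin)   # roughly (-13.41, -1.39)
```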

27 Statistical Significance vs. Importance of Effect
Does finding that p < .05 mean the finding is relevant to the real world? Not necessarily… Effect size provides a measure of the magnitude of an effect – its practical significance. Cohen's d, η², and R² are all types of effect sizes. Watch about 7 minutes of the video. You can have a statistically significant effect that has no practical implications for the real world, or a non-significant effect that has a large effect size but was non-significant due to sample size or other reasons. It is important to look at effect sizes and NHST results together; each is only a piece of the puzzle, and you need both to understand the whole picture.

28 Cohen's d Equation: d = (x̄1 − x̄2) / spooled = −1.16
Guidelines:
d = .2 = small
d = .5 = moderate
d = .8 = large
Not only is our effect statistically significant, but the effect size is large. Standardized mean difference.
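
A sketch of the standardized mean difference using the pooled variance (40.94) from the computation slides:

```python
# Cohen's d: mean difference divided by the pooled standard deviation.
mean_boys, mean_girls = 80.70, 88.10
pooled_sd = 40.94 ** 0.5                  # roughly 6.40

d = (mean_boys - mean_girls) / pooled_sd  # roughly -1.16, a large effect
```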

