STAT 101 Exam 2 Review (Dr. Kari Lock Morgan)
Exam Details
Wednesday, 4/2
Closed to everything except two double-sided pages of notes and a non-cell phone calculator
Page of notes should be prepared by you – no sharing
Okay to use materials from class for your page of notes
Best ways to prepare: #1: WORK LOTS OF PROBLEMS! Make a good page of notes, read sections you are still confused about, come to office hours and clarify confusion
Cumulative, but emphasis is on material since Exam 1 (Chapters 5-9; we skipped 8.2 and 9.2)
Practice Problems
Practice exam online (under resources)
Solutions to odd essential synthesis and review problems online (under resources)
Solutions to all odd problems in the book on reserve at Perkins
Office Hours and Help
Monday 3 – 4pm: Prof Morgan, Old Chem 216
Monday 4 – 6pm: Stephanie Sun, Old Chem 211A
Tuesday 3 – 5pm (extra): Prof Morgan, Old Chem 216
Tuesday 5 – 7pm: Wenjing Shi, Old Chem 211A
Tuesday 7 – 9pm: Mao Hu, Old Chem 211A
REVIEW SESSION: 5 – 6pm Tuesday, Social Sciences 126
Stat Education Center
Reminder: the Stat Education Center in Old Chem 211A is open Sunday – Thursday, 4 – 9pm, with stat majors and stat PhD students available to answer questions.
Two Options for p-values
We have learned two ways of calculating p-values. The only difference is how to create the distribution of the statistic, assuming the null is true:
Simulation (randomization test): directly simulate what would happen, just by random chance, if the null were true
Formulas and theoretical distributions: use a formula to create a test statistic for which we know the theoretical distribution when the null is true, if sample sizes are large enough
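For concreteness, here is a minimal Python sketch of the simulation option: a randomization test for a difference in means. The data values below are made up for illustration.

```python
# Randomization test for a difference in means: simulate the null by
# randomly reassigning the observed values to the two groups.
import numpy as np

rng = np.random.default_rng(0)
group_a = np.array([12.1, 9.8, 11.4, 10.7, 13.2, 10.1])   # hypothetical data
group_b = np.array([9.2, 8.7, 10.3, 9.9, 8.1, 9.5])

observed = group_a.mean() - group_b.mean()                 # observed statistic
combined = np.concatenate([group_a, group_b])
n_a = len(group_a)

randomization_stats = []
for _ in range(10_000):                                    # simulate under H0
    shuffled = rng.permutation(combined)                   # random reassignment
    randomization_stats.append(shuffled[:n_a].mean() - shuffled[n_a:].mean())
randomization_stats = np.array(randomization_stats)

# two-sided p-value: proportion of simulated statistics as extreme as observed
p_value = np.mean(np.abs(randomization_stats) >= abs(observed))
print(f"observed difference = {observed:.2f}, p-value = {p_value:.3f}")
```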
Two Options for Intervals
We have learned two ways of calculating intervals:
Simulation (bootstrap): assess the variability in the statistic by creating many bootstrap statistics
Formulas and theoretical distributions: use a formula to calculate the standard error of the statistic, and use the normal or t-distribution to find z* or t*, if sample sizes are large enough
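A matching sketch of the bootstrap option, again with made-up sample values: a 95% percentile interval for a mean.

```python
# Bootstrap percentile interval for a mean: resample the original sample
# with replacement many times and take the middle 95% of the bootstrap means.
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([23.1, 19.4, 25.0, 21.7, 22.8, 20.3, 24.6, 18.9])  # hypothetical

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lower:.2f}, {upper:.2f})")
```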
Pros and Cons: Simulation Methods
PROS:
Methods tied directly to concepts, emphasizing conceptual understanding
Same procedure for every statistic
No formulas or theoretical distributions to learn and distinguish between
Minimal math needed
CONS:
Need the entire dataset (if quantitative variables)
Need a computer
Newer approach
Pros and Cons: Formulas and Theoretical Distributions
PROS:
Only need summary statistics
Only need a calculator
More commonly used
CONS:
Plugging numbers into formulas does little for conceptual understanding
Many different formulas and distributions to learn and distinguish between
Harder to see the big picture when the details are different for each statistic
Doesn't work for small sample sizes
Requires more math and background knowledge
Accuracy
The accuracy of simulation methods depends on the number of simulations (more simulations = more accurate).
The accuracy of formulas and theoretical distributions depends on the sample size (larger sample size = more accurate).
If the sample size is large and you have generated many simulations, the two methods should give essentially the same answer.
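One way to see this agreement, using simulated data rather than anything from the slides: for a reasonably large sample, the bootstrap standard error of the mean comes out essentially equal to the formula value s/sqrt(n).

```python
# Compare the bootstrap SE of the mean with the formula SE = s / sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=50, scale=10, size=100)     # hypothetical sample, n = 100

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

bootstrap_se = boot_means.std(ddof=1)
formula_se = sample.std(ddof=1) / np.sqrt(sample.size)
print(f"bootstrap SE = {bootstrap_se:.3f}, formula SE = {formula_se:.3f}")
```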
Data Collection
Was the explanatory variable randomly assigned?
Yes: possible to make conclusions about causality
No: cannot make conclusions about causality
Was the sample randomly selected?
Yes: possible to generalize to the population
No: should not generalize to the population
| Variable(s) | Visualization | Summary Statistics |
| --- | --- | --- |
| Categorical | bar chart, pie chart | frequency table, relative frequency table, proportion |
| Quantitative | dotplot, histogram, boxplot | mean, median, max, min, standard deviation, z-score, range, IQR, five number summary |
| Categorical vs Categorical | side-by-side bar chart, segmented bar chart | two-way table, difference in proportions |
| Quantitative vs Categorical | side-by-side boxplots | statistics by group, difference in means |
| Quantitative vs Quantitative | scatterplot | correlation, simple linear regression |
Confidence Interval
A confidence interval for a parameter is an interval computed from sample data by a method that will capture the parameter for a specified proportion of all samples.
A 95% confidence interval will contain the true parameter for 95% of all samples.
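As an illustration of the "95% of all samples" statement (the population and its mean here are invented): draw many samples from a population with a known mean, build a 95% t-interval from each, and count how often the true mean is captured.

```python
# Coverage check for 95% t-intervals for a mean, using simulated samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mean, n, n_samples = 100, 30, 10_000
t_star = stats.t.ppf(0.975, df=n - 1)

captured = 0
for _ in range(n_samples):
    sample = rng.normal(loc=true_mean, scale=15, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    margin = t_star * se
    captured += (sample.mean() - margin <= true_mean <= sample.mean() + margin)

print(f"coverage = {captured / n_samples:.3f}")   # should be close to 0.95
```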
Hypothesis Testing
How unusual would it be to get results as extreme as (or more extreme than) those observed, if the null hypothesis is true?
If it would be very unusual, then the null hypothesis is probably not true!
If it would not be very unusual, then there is no evidence against the null hypothesis.
p-value
The p-value is the probability of getting a statistic as extreme as (or more extreme than) the one observed, just by random chance, if the null hypothesis is true.
The p-value measures evidence against the null hypothesis: the smaller the p-value, the stronger the evidence.
Hypothesis Testing
1. State the hypotheses.
2. Calculate a test statistic, based on your sample data.
3. Create a distribution of this test statistic, as it would be observed if the null hypothesis were true.
4. Use this distribution to measure how extreme your test statistic is.
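A minimal sketch of these four steps using a theoretical distribution: a one-sample t-test of H0: μ = 10 vs Ha: μ ≠ 10 on made-up data.

```python
# One-sample t-test, following the four steps above.
import numpy as np
from scipy import stats

sample = np.array([10.8, 11.2, 9.9, 12.1, 10.5, 11.7, 10.2, 11.0])  # hypothetical
mu_0 = 10                                   # step 1: H0: mu = 10, Ha: mu != 10

se = sample.std(ddof=1) / np.sqrt(sample.size)
t_stat = (sample.mean() - mu_0) / se        # step 2: test statistic
# step 3: under H0, the statistic follows a t distribution with n - 1 df
# step 4: measure how extreme the statistic is (two-sided p-value)
p_value = 2 * stats.t.sf(abs(t_stat), df=sample.size - 1)
print(f"t = {t_stat:.2f}, p-value = {p_value:.3f}")
```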
Distribution of the Sample Statistic
Sampling distribution: distribution of the statistic based on many samples from the population
Bootstrap distribution: distribution of the statistic based on many samples taken with replacement from the original sample
Randomization distribution: distribution of the statistic assuming the null hypothesis is true
Normal, t, χ², F: theoretical distributions used to approximate the distribution of the statistic
Sample Size Conditions
For large sample sizes, either simulation methods or theoretical methods work.
If sample sizes are too small, only simulation methods can be used.
Using Distributions
For confidence intervals, you find the desired percentage in the middle of the distribution, then find the corresponding value on the x-axis.
For p-values, you find the value of the observed statistic on the x-axis, then find the area in the tail(s) of the distribution.
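A short SciPy sketch of both directions, with arbitrary numbers: a critical value from a middle area, and a p-value from a tail area.

```python
# Critical values (confidence interval direction) and tail areas (p-value direction).
from scipy import stats

# middle 95% of the standard normal and of a t distribution with 24 df
z_star = stats.norm.ppf(0.975)            # about 1.96
t_star = stats.t.ppf(0.975, df=24)

# area in the tails beyond an observed test statistic of 2.1 (two-sided)
z_observed = 2.1
p_two_sided = 2 * stats.norm.sf(abs(z_observed))

print(f"z* = {z_star:.3f}, t* = {t_star:.3f}, p-value = {p_two_sided:.4f}")
```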
Confidence Intervals
Find the desired confidence level in the middle of the normal or t distribution to get z* or t*, then return to the original scale with: statistic ± z* × SE (or t* × SE)
Hypothesis Testing
Standardize the observed statistic into a test statistic, then find the p-value as the area beyond it in the tail(s) of the theoretical distribution.
General Formulas
When performing inference for a single parameter (or a difference in two parameters), the following formulas are used:
Confidence interval: statistic ± z* × SE (or t* × SE)
Test statistic: z (or t) = (statistic - null value) / SE
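Applying the two general formulas to a single proportion, using only summary statistics; the counts and null value below are made up.

```python
# Confidence interval and test for one proportion from summary statistics.
from math import sqrt
from scipy import stats

count, n = 58, 100
p_hat = count / n

# confidence interval: statistic +/- z* x SE, with SE based on p_hat
se_ci = sqrt(p_hat * (1 - p_hat) / n)
z_star = stats.norm.ppf(0.975)
ci = (p_hat - z_star * se_ci, p_hat + z_star * se_ci)

# test of H0: p = 0.5 -> z = (statistic - null value) / SE, with SE based on p0
p_0 = 0.5
se_test = sqrt(p_0 * (1 - p_0) / n)
z = (p_hat - p_0) / se_test
p_value = 2 * stats.norm.sf(abs(z))

print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f}), z = {z:.2f}, p-value = {p_value:.3f}")
```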
General Formulas
For proportions (categorical variables with only two categories), the normal distribution is used.
For inference involving any quantitative variable (means, correlation, slope), the t distribution is used, as long as any categorical variable involved has only two categories.
Standard Error
The standard error is the standard deviation of the sample statistic.
The formula for the standard error depends on the type of statistic (which depends on the type of variable(s) being analyzed).
Standard Error Formulas

| Parameter | Distribution | Standard Error |
| --- | --- | --- |
| Proportion | Normal | sqrt( p(1 - p) / n ) |
| Difference in Proportions | Normal | sqrt( p1(1 - p1)/n1 + p2(1 - p2)/n2 ) |
| Mean | t, df = n - 1 | s / sqrt(n) |
| Difference in Means | t, df = min(n1, n2) - 1 | sqrt( s1²/n1 + s2²/n2 ) |
| Correlation | t, df = n - 2 | sqrt( (1 - r²) / (n - 2) ) |
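A minimal sketch using the difference-in-means row with summary statistics only; the sample sizes, means, and standard deviations are made up.

```python
# t-interval and t-test for a difference in means from summary statistics.
from math import sqrt
from scipy import stats

n1, xbar1, s1 = 25, 72.0, 8.0
n2, xbar2, s2 = 30, 67.5, 9.5

se = sqrt(s1**2 / n1 + s2**2 / n2)          # SE for a difference in means
df = min(n1, n2) - 1                        # conservative df from the table
t_star = stats.t.ppf(0.975, df=df)

diff = xbar1 - xbar2
ci = (diff - t_star * se, diff + t_star * se)
t_stat = diff / se                          # test of H0: mu1 = mu2
p_value = 2 * stats.t.sf(abs(t_stat), df=df)

print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f}), t = {t_stat:.2f}, p-value = {p_value:.3f}")
```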
Multiple Categories
These formulas do not work for categorical variables with more than two categories, because there are multiple parameters.
For one or two categorical variables with multiple categories, use χ² tests (goodness of fit for one categorical variable, test for association for two).
For testing for a difference in means across multiple groups, use ANOVA.
Chi-Square Test for Goodness of Fit
1. State the null hypothesized proportions for each category, p_i. The alternative is that at least one of the proportions is different than specified in the null.
2. Calculate the expected count for each category as n × p_i. Make sure they are all greater than 5 to proceed.
3. Calculate the χ² statistic: χ² = Σ (observed - expected)² / expected
4. Compute the p-value as the area in the tail above the χ² statistic, for a χ² distribution with df = (number of categories - 1).
5. Interpret the p-value in context.
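A worked sketch of the goodness-of-fit test in Python; the observed counts and null proportions are made up.

```python
# Chi-square goodness-of-fit test, by hand and via scipy.stats.chisquare.
import numpy as np
from scipy import stats

observed = np.array([30, 45, 25])            # counts in each category
p_null = np.array([0.25, 0.50, 0.25])        # H0 proportions p_i
expected = observed.sum() * p_null           # expected counts n * p_i (all > 5)

chi2 = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1                       # (number of categories) - 1
p_value = stats.chi2.sf(chi2, df=df)         # area in the upper tail
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")

print(stats.chisquare(observed, f_exp=expected))   # same result from SciPy
```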
Chi-Square Test for Association
H0: The two variables are not associated
Ha: The two variables are associated
1. Calculate the expected count for each cell: (row total × column total) / sample size. Make sure they are all greater than 5 to proceed.
2. Calculate the χ² statistic: χ² = Σ (observed - expected)² / expected
3. Compute the p-value as the area in the tail above the χ² statistic, for a χ² distribution with df = (r - 1)(c - 1).
4. Interpret the p-value in context.
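The same steps on a made-up two-way table of counts, checked against SciPy's chi2_contingency.

```python
# Chi-square test for association between two categorical variables.
import numpy as np
from scipy import stats

table = np.array([[30, 20],                  # hypothetical two-way table of counts
                  [25, 45]])

row_totals = table.sum(axis=1, keepdims=True)
col_totals = table.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / table.sum()   # (row total)(column total) / n

chi2 = np.sum((table - expected) ** 2 / expected)
df = (table.shape[0] - 1) * (table.shape[1] - 1)   # (r - 1)(c - 1)
p_value = stats.chi2.sf(chi2, df=df)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")

chi2_sp, p_sp, _, _ = stats.chi2_contingency(table, correction=False)
print(chi2_sp, p_sp)                               # should match the values above
```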
Analysis of Variance
Analysis of variance (ANOVA) compares the variability between groups to the variability within groups:
Total variability = variability between groups + variability within groups (SST = SSG + SSE)
ANOVA Table

| Source | df | Sum of Squares | Mean Square | F Statistic | p-value |
| --- | --- | --- | --- | --- | --- |
| Groups | k - 1 | SSG | MSG = SSG/(k - 1) | F = MSG/MSE | use F(k - 1, n - k) |
| Error | n - k | SSE | MSE = SSE/(n - k) | | |
| Total | n - 1 | SST | | | |
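A sketch that fills in the ANOVA table by hand for three made-up groups and checks the F statistic against scipy.stats.f_oneway.

```python
# One-way ANOVA: compute SSG, SSE, MSG, MSE, F, and the p-value by hand.
import numpy as np
from scipy import stats

groups = [np.array([4.1, 5.2, 6.0, 5.5]),    # hypothetical data, k = 3 groups
          np.array([6.8, 7.1, 5.9, 7.4]),
          np.array([5.0, 4.4, 4.9, 5.6])]

all_values = np.concatenate(groups)
n, k = all_values.size, len(groups)
grand_mean = all_values.mean()

ssg = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)   # between groups
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within groups
msg, mse = ssg / (k - 1), sse / (n - k)
f_stat = msg / mse
p_value = stats.f.sf(f_stat, k - 1, n - k)   # upper tail of F(k - 1, n - k)
print(f"F = {f_stat:.2f}, p-value = {p_value:.4f}")

print(stats.f_oneway(*groups))               # should give the same F and p-value
```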
Simple Linear Regression
Simple linear regression estimates the population model
y = β0 + β1 x + ε
with the sample model
ŷ = β̂0 + β̂1 x
Inference for the Slope
Confidence intervals and hypothesis tests for the slope can be done using the familiar formulas:
Confidence interval: β̂1 ± t* × SE
Test statistic: t = (β̂1 - 0) / SE
Population parameter: β1; sample statistic: β̂1
Use the t distribution with n - 2 degrees of freedom.
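A minimal sketch of slope inference with scipy.stats.linregress; the x and y values are made up.

```python
# Test and confidence interval for the slope of a simple linear regression.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)   # hypothetical data
y = np.array([2.3, 2.9, 3.6, 4.8, 5.1, 6.2, 6.8, 8.1])

res = stats.linregress(x, y)          # slope, intercept, r, p-value, SE of slope
t_star = stats.t.ppf(0.975, df=len(x) - 2)
ci = (res.slope - t_star * res.stderr, res.slope + t_star * res.stderr)

print(f"slope = {res.slope:.3f}, SE = {res.stderr:.3f}")
print(f"95% CI for the slope: ({ci[0]:.3f}, {ci[1]:.3f}), p-value = {res.pvalue:.4f}")
```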
Intervals
A confidence interval has a given chance of capturing the mean y value at a specified x value (the point on the line).
A prediction interval has a given chance of capturing the y value for a particular case at a specified x value (the actual point).
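A sketch computing both intervals by hand at x = 5 for the same made-up data, using the standard simple linear regression interval formulas.

```python
# Confidence interval for the mean response and prediction interval for an
# individual response at x_star, from the usual SLR formulas.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.3, 2.9, 3.6, 4.8, 5.1, 6.2, 6.8, 8.1])
n = len(x)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s_e = np.sqrt(np.sum(resid ** 2) / (n - 2))        # residual standard error

x_star = 5.0
y_hat = b0 + b1 * x_star
t_star = stats.t.ppf(0.975, df=n - 2)
leverage = 1 / n + (x_star - x.mean()) ** 2 / sxx

ci = y_hat + np.array([-1, 1]) * t_star * s_e * np.sqrt(leverage)       # mean y
pi = y_hat + np.array([-1, 1]) * t_star * s_e * np.sqrt(1 + leverage)   # one case
print("confidence interval:", ci)
print("prediction interval:", pi)
```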
Conditions for SLR
Inference based on the simple linear model is only valid if the following conditions hold:
Linearity
Constant variability of the residuals
Normality of the residuals
Inference Methods