AP Stat Review - Know Your Book! (Mr. Lynch, AP Statistics)
Chapter 1
- Describe a distribution: CSSO* (one variable)
- Box & whisker, histogram, stem-and-leaf plot, bar graph, dot plot, pie chart
- Categorical vs. Quantitative
- Symmetric vs. Skewed
- Relative freq. vs. Cumulative freq. (percentiles)
- 5-# Summary: (Min, Q1, Med, Q3, Max)
- *Standard deviation vs. Variance
- *Linear transformations
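A minimal sketch of the Chapter 1 numerical summaries, using Python's standard library and a hypothetical data set (the 41 is planted as a likely outlier):

```python
import statistics

data = [4, 7, 7, 8, 10, 12, 13, 15, 41]   # hypothetical data; 41 looks like an outlier

# Five-number summary: (Min, Q1, Med, Q3, Max)
q1, med, q3 = statistics.quantiles(data, n=4)
five_num = (min(data), q1, med, q3, max(data))

# Standard deviation is the square root of the variance
var = statistics.pvariance(data)
sd = statistics.pstdev(data)

# 1.5*IQR rule for flagging outliers
iqr = q3 - q1
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
```

Note that `sd * sd` reproduces `var`, which is why you report st. dev. (same units as the data) rather than variance when describing spread.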
Chapter 2
- Density curves
- Normal distributions ("z"). Don't say a distribution is "normal" just b/c it looks symmetric or unimodal.
- Mean vs. Median; skewness. Note: mean > median is not sufficient evidence that a distribution is skewed right!
- Empirical Rule (68-95-99.7 Rule)
- Population parameters vs. Sample statistics
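A quick sketch of standardizing with "z" and checking the Empirical Rule, assuming a hypothetical normal population with mean 100 and st. dev. 15:

```python
import math

def z_score(x, mu, sigma):
    """Standardize: how many st. devs. x lies from the mean."""
    return (x - mu) / sigma

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical scores: mean 100, st. dev. 15
z = z_score(130, 100, 15)                     # 130 is 2 st. devs. above the mean
within_2sd = normal_cdf(2) - normal_cdf(-2)   # Empirical Rule says about 0.95
```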
Chapter 3
- Bivariate data (2 variables) - scatter plots
- Explanatory vs. Response variables
- Describe the relationship: FSDD*
- Correlation coefficient (r) - strength
- LSRL: y-hat = a + bx (don't confuse r with b*)
- Coefficient of determination (r^2) - % of variation explained
- Residual = Actual - Predicted
- Residual plots - randomly scattered? good fit?
- Know b, r, r^2 in context!
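The LSRL quantities above can be sketched from their defining formulas; the data here are hypothetical:

```python
def lsrl(xs, ys):
    """Least-squares regression line y-hat = a + b*x, plus the correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx                  # slope
    a = my - b * mx                # intercept
    r = sxy / (sxx * syy) ** 0.5   # correlation; r*r = coeff. of determination
    return a, b, r

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
a, b, r = lsrl(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]  # Actual - Predicted
```

Least-squares residuals always sum to (essentially) zero, which is why you judge fit from the *pattern* of a residual plot, not the total.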
Chapter 4
- Linear transformations
- Exponential models: (x, log y) makes linear
- Power models: (log x, log y) makes linear
- Correlation & regression only for linear data
- Interpolation vs. Extrapolation
- Lurking variables
- Common response / Confounding / Causation
- Two-way tables (conditional distributions)
Chapter 5
- SRS*
- Experiments vs. Samples (surveys)
- SRS ≠ Random assignment - in an experiment, randomly assign subjects to treatment groups
- Stratified random sample vs. Blocking experimental groups
- Blocking: grouping subjects in an experiment based on a characteristic of interest; reduces unwanted variability
- Well-designed experiments can use cause-and-effect language
- Control / Randomization / Replication*
Chapter 5 (cont'd)
- Bias - "favoring"
- Voluntary response
- Undercoverage vs. Non-response
- Larger samples give more accurate results
- Double blind*
- Confounding variable - could contribute to the observed "causal link" b/w variables
- Matched pairs
- Simulation (PAARC or DAARC)*
- Cluster sampling
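A minimal simulation in the spirit of the PAARC/DAARC steps (state the chance model, run many repetitions, use the long-run relative frequency). The question is hypothetical: what is the probability of at least one 6 in four rolls of a fair die?

```python
import random

random.seed(1)  # fixed seed so the repetitions are reproducible

def trial():
    """One repetition: roll a fair die 4 times; success = at least one 6."""
    return any(random.randint(1, 6) == 6 for _ in range(4))

reps = 100_000
est = sum(trial() for _ in range(reps)) / reps

# The exact answer, for comparison: 1 - P(no sixes in 4 rolls)
true_p = 1 - (5 / 6) ** 4   # about 0.5177
```

With 100,000 repetitions the simulated relative frequency lands very close to the exact probability, which is the Law of Large Numbers in action.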
Chapter 6 Probability
- Disjoint (Mutually Exclusive) - no common outcomes
- Independent - one outcome doesn't affect the other outcome
- General Addition Rule (#6)
- Conditional Probability Rule (#8)
- Tree diagrams - Bayes's Rule
- Law of Large Numbers
- Two-way tables again!
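A sketch of a tree-diagram / Bayes's Rule calculation; the screening-test numbers are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical screening-test probabilities (not from the review sheet):
p_d = 0.01            # P(disease)
p_pos_d = 0.95        # P(positive | disease)
p_pos_no_d = 0.05     # P(positive | no disease)

# Total probability: add the two "positive" branches of the tree diagram
p_pos = p_d * p_pos_d + (1 - p_d) * p_pos_no_d

# Bayes's Rule: P(disease | positive) = P(disease and positive) / P(positive)
p_d_pos = p_d * p_pos_d / p_pos
```

Even with a fairly accurate test, P(disease | positive) comes out around 0.16 here because the disease is rare, a classic conditional-probability surprise.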
Chapter 7 Random Variables
- Probability distributions
- Discrete vs. Continuous*
- Finite vs. Infinite*
- Expected value = mean
- Transformation formulas: μ(a+bX) = a + b·μX,  σ(a+bX) = |b|·σX;
  for sums: μ(X+Y) = μX + μY,  σ²(X±Y) = σ²X + σ²Y (X, Y independent)
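A sketch of the expected-value and linear-transformation rules on a small hypothetical discrete distribution:

```python
# Discrete random variable: values and their probabilities (hypothetical)
xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]

mu = sum(x * p for x, p in zip(xs, ps))               # E(X) = Σ x·p
var = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))  # Var(X) = Σ (x-μ)²·p
sigma = var ** 0.5

# Linear transformation Y = a + bX: the mean shifts and scales,
# but the spread only scales (and never by a negative amount).
a, b = 5, 2
mu_y = a + b * mu           # μ(a+bX) = a + b·μX
sigma_y = abs(b) * sigma    # σ(a+bX) = |b|·σX
```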
Chapter 8 Binomial and Geometric Distributions
- FIST* - verify these conditions
- Binomial, B(n, p, k): fixed # of trials
  mean = μ = np,  st. dev. = σ = √(npq)
- Normal approx. for large sample sizes; rule of thumb: np ≥ 10, nq ≥ 10
- Geometric, G(p, k): first success - infinite # of trials
  mean = μ = 1/p
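A sketch verifying the binomial formulas numerically: building the pmf from C(n,k)·p^k·q^(n-k) and checking that its mean and variance match np and npq (values here are hypothetical):

```python
from math import comb

n, p = 10, 0.3
q = 1 - p

# Binomial pmf: P(X = k) = C(n,k) * p^k * q^(n-k)
pmf = [comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1)]

mean = sum(k * pk for k, pk in enumerate(pmf))               # should equal n*p
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))  # should equal n*p*q

# Geometric: expected number of trials until the first success
geom_mean = 1 / p   # p = 0.3 -> about 3.33 trials on average
```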
Chapter 9 Sampling Distributions
- Parameters vs. Statistics
- Sample, Population, Sampling distribution
- Bias / Variability (high-bias / low-variability graphs); center / spread
- Central Limit Theorem (means)*
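A simulation sketch of the CLT for means: even from a strongly right-skewed population, the sampling distribution of x-bar is centered at μ with spread σ/√n. The population (exponential, mean 1, st. dev. 1) and sample size are hypothetical:

```python
import random
import statistics

random.seed(2)  # reproducible

# Right-skewed population: exponential with mean 1 (its st. dev. is also 1)
def sample_mean(n):
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

n, reps = 40, 5000
means = [sample_mean(n) for _ in range(reps)]

# Sampling distribution of x-bar: center ≈ μ = 1, spread ≈ σ/√n ≈ 0.158
center = statistics.fmean(means)
spread = statistics.stdev(means)
```

Plotting `means` as a histogram would show an approximately normal shape despite the skewed parent population, which is exactly the CLT's claim for large-enough n.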
Inference (Ch. 10-14): Confidence Intervals and Tests of Significance
- Ch 10: means, "z" (b/c known pop. st. dev.)
- Ch 11: means, "t"; differences
- Ch 12: proportions, "z"; differences p1 vs. p2
- Ch 13: Chi-Square (GOF, Association, Homogeneity)
- Ch 14: Regression for slope (b)
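A sketch of the Chapter 10 case, a confidence interval for a mean with known population st. dev.; the sample numbers are hypothetical:

```python
import math

def z_interval(xbar, sigma, n, z_star=1.96):
    """CI for a mean with KNOWN population st. dev. (the Ch 10 'z' case)."""
    moe = z_star * sigma / math.sqrt(n)   # margin of error
    return xbar - moe, xbar + moe

# Hypothetical sample: x-bar = 52, known sigma = 8, n = 64; 95% CI
lo, hi = z_interval(52, 8, 64)   # 52 ± 1.96 * (8/√64) = 52 ± 1.96
```

Interpretation in context matters on the exam: "we are 95% confident the interval from 50.04 to 53.96 captures the true population mean," not "there is a 95% probability μ is in this interval."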
Type I Error, Type II Error, Power of a Test
- Type I Error: rejecting H0 when H0 is true; P(Type I) = α; α = significance level of the test
- Type II Error: failing to reject H0 when H0 is false; P(Type II) = β
- Power of a test: probability of correctly rejecting a false null hypothesis (H0)
  Power = 1 - P(Type II) = 1 - β
- How to make your test more powerful? Increase n, increase α (or decrease σ, or move Ha further from H0)
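A sketch computing power for a one-sided z-test; every number (H0 mean, σ, n, α, and the particular alternative) is hypothetical:

```python
import math

def normal_cdf(z):
    """Standard normal P(Z <= z) via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# One-sided z-test, H0: mu = 100 vs. Ha: mu > 100; known sigma = 15, n = 36
mu0, sigma, n, alpha = 100, 15, 36, 0.05
z_star = 1.645                        # critical value for alpha = 0.05, one-sided
se = sigma / math.sqrt(n)             # 2.5
cutoff = mu0 + z_star * se            # reject H0 when x-bar exceeds this

# Power against the particular alternative mu_a = 105:
mu_a = 105
power = 1 - normal_cdf((cutoff - mu_a) / se)
beta = 1 - power                      # P(Type II) against this alternative
```

Re-running with a larger n or a larger α shrinks `cutoff`'s distance from μa and raises `power`, which is the point of the "how to make your test more powerful" list above.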
Too Many P's!
- Ch 6: P(A) = "the probability of event A", e.g. P(A) = 1/2
- Ch 8: p = "the specific probability of a success", e.g. p = .75
- p = the population proportion
- p-hat = the sample proportion
- P = "the p-value": area under the curve in the tail, often compared to α and rejection regions