Unit 9: Dealing with Messy Data I: Case Analysis
Anscombe's Quartet

lm(y1 ~ x, data = Quartet)
lm(y2 ~ x, data = Quartet)

[Model summaries: every dataset in the quartet yields essentially the same fit (intercept ≈ 3.0, slope ≈ 0.5, SSE = 13.8 on 9 error df, and the same R-squared) despite strikingly different scatterplots.]

Anscombe, F. J. (1973). Graphs in statistical analysis. American Statistician, 27, 17–21. (See the Quartet data frame in the car package.)
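A minimal sketch of the same demonstration, using base R's built-in anscombe data frame (its variable names x1–x4, y1–y4 differ from the car package's Quartet):

# Fit the same simple regression to each of Anscombe's four datasets;
# every fit returns intercept ~ 3.0, slope ~ 0.5, and R-squared ~ 0.67.
fits = lapply(1:4, function(i) {
  lm(as.formula(paste0("y", i, " ~ x", i)), data = anscombe)
})
sapply(fits, function(m) c(b = unname(coef(m)), R2 = summary(m)$r.squared))

# Only the scatterplots reveal how different the four datasets are
par(mfrow = c(2, 2))
for (i in 1:4) {
  plot(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]],
       xlab = paste0("x", i), ylab = paste0("y", i))
  abline(fits[[i]])
}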
Case Analysis
The goal is to identify any unusual or excessively influential data points. These points may bias results and/or reduce power to detect effects (inflate standard errors and/or decrease R²). We attend to three aspects of individual observations:
Leverage
Regression outliers
Influence
Case analysis also provides an important first step as you get to "know" your data.
Case Analysis: Unusual and Influential Data
setwd('P:\\CourseWebsites\\PSY710\\Data\\Diagnostics')
d1 = dfReadDat('DOSE2.dat')
d1$Sex = as.numeric(d1$Sex) - 1.5
m1 = lm(SP ~ BAC + TA + Sex, data=d1)
modelSummary(m1)

[Coefficient table: the intercept (**), BAC (*), TA (***), and Sex (**) are all significant; Error df: 92]
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
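dfReadDat() and modelSummary() are helpers from the course's lmSupport package; a hedged base-R equivalent of the same workflow (assuming DOSE2.dat is a whitespace-delimited file with a header row):

d1 = read.table('DOSE2.dat', header = TRUE)
d1$Sex = as.numeric(factor(d1$Sex)) - 1.5   # recode the two-level factor to -0.5 / +0.5
m1 = lm(SP ~ BAC + TA + Sex, data = d1)
summary(m1)   # coefficients, SEs, t-statistics, p-values, and R-squared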
Univariate Statistics and Graphs
1: Univariate statistics (n, mean, sd, min/max, shape)
varDescribe(d1)
[Output: one row per variable (BAC, TA, Sex, FPS) with n, mean, sd, median, min, max, skew, and kurtosis]
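varDescribe() is another lmSupport helper; psych::describe() produces a comparable table (an assumption based on the matching column names in the output above):

library(psych)
describe(d1)   # n, mean, sd, median, min, max, skew, kurtosis for each variable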
Univariate Statistics and Graphs
2: Univariate plots (histogram, rug, and density plots)
varPlot(d1$FPS, 'FPS')
See also: hist(), rug(), density()
[Output: "Descriptive statistics: FPS" with n, mean, sd, median, min, max, skew, and kurtosis]
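A base-R sketch of the same univariate plot, built from the three functions the slide mentions:

hist(d1$FPS, freq = FALSE, main = 'FPS', xlab = 'FPS')   # histogram on the density scale
lines(density(d1$FPS, na.rm = TRUE))                     # smoothed density overlay
rug(d1$FPS)                                              # one tick per observation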
Bivariate Correlations
> corr.test(d1)
[Output: the 4 x 4 correlation matrix among BAC, TA, Sex, and FPS, with sample sizes and p-values]
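corr.test() comes from the psych package; plain cor() gives the correlation matrix alone if you do not need the tests:

library(psych)
corr.test(d1)                                      # correlations plus n's and p-values
round(cor(d1, use = 'pairwise.complete.obs'), 2)   # base-R correlation matrix only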
Bivariate Statistics and Graphs
3: Bivariate plots (scatterplot, rug, and density)
spm(~FPS + BAC + TA + Sex, data=d1)
Leverage (Cartoon data)
4. Check for high-leverage points
Leverage is a property of the predictors (the DV is not considered in leverage analysis). An observation has increasing "leverage" on the results as its distance from the mean of all predictors increases. Which points have the most leverage in the one-predictor example below?
Leverage
Hat values (hᵢ) provide an index of leverage. In the one-predictor case:
hᵢ = 1/N + (Xᵢ − X̄)² / Σⱼ(Xⱼ − X̄)²
With multiple predictors, hᵢ measures the distance from the centroid (point of means) of the Xs.
Hat values are bounded between 1/N and 1. The mean hat value is h̄ = P/N, where P is the number of parameters (including the intercept).
Rules of thumb:
hᵢ > 3h̄ for small samples (N < 100)
hᵢ > 2h̄ for large samples
Do NOT blindly apply these rules of thumb. A problematic hat value should be clearly separated from the rest of the distribution of hᵢ: view a histogram of hᵢ (see the sketch below).
NOTE: Mahalanobis (Maha) distance = (N − 1)(hᵢ − 1/N). SPSS reports centered leverage (hᵢ − 1/N).
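A minimal base-R sketch of this step, assuming m1 is the model fit earlier:

h = hatvalues(m1)       # leverage for each observation
P = length(coef(m1))    # number of parameters (including intercept)
N = length(h)
mean(h)                 # equals P/N
hist(h)                 # judge flagged cases against the full distribution
which(h > 3 * P/N)      # small-sample rule of thumb (N < 100)
which(h > 2 * P/N)      # large-sample rule of thumb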
Leverage (Cartoon data)
High leverage values are not always bad. In fact, in some cases they are good. You must also consider whether they are regression outliers. WHY?

R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)

SE(bᵢ) = (s_Y / sᵢ) · √[(1 − R²_Y) / (N − k − 1)] · √[1 / (1 − R²ᵢ)]

High leverage points that are fit well by the model increase the difference between SSE(Mean-only) and SSE(A), which increases R².
High leverage points that are fit well also increase the variance of their predictor. This reduces the SE for that predictor and yields more power.
Well-fit, high-leverage points do NOT alter the b's.
Leverage (Real Data)
modelCaseAnalysis(m1, Type='hatvalues')
Regression Outlier (Cartoon data)
5. Check for regression outliers
A regression outlier is an observation that is not adequately fit by the regression model (i.e., it falls very far from the prediction line). In essence, a regression outlier is a discrepant score with a large residual (eᵢ). Which point(s) are regression outliers?
Regression Outliers
There are multiple quantitative indicators for identifying regression outliers, including raw residuals (eᵢ), standardized residuals (e′ᵢ), and studentized residuals (t′ᵢ). The preferred index is the studentized residual:
t′ᵢ = eᵢ / (S_E(−i) · √(1 − hᵢ))
t′ᵢ follows a t-distribution with N − P − 1 degrees of freedom.
You can apply a Bonferroni correction when testing the studentized residuals. But again, not blindly: view a histogram of t′ᵢ (see the sketch below).
NOTE: SPSS calls these Studentized Deleted Residuals; Cohen calls them Externally Studentized Residuals.
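A base-R sketch of the Bonferroni-corrected test described above, again assuming m1 from earlier:

t.i = rstudent(m1)                      # studentized (deleted) residuals
hist(t.i)                               # inspect the whole distribution first
P = length(coef(m1)); N = length(t.i)
p = 2 * pt(abs(t.i), df = N - P - 1, lower.tail = FALSE)   # two-tailed p-values
which(p.adjust(p, method = 'bonferroni') < .05)            # flagged outliers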
Regression Outliers (Cartoon data)
Regression outliers are always bad, but they can have two different types of bad effects. WHY?

R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)

SE(bᵢ) = (s_Y / sᵢ) · √[(1 − R²_Y) / (N − k − 1)] · √[1 / (1 − R²ᵢ)]

Regression outliers increase SSE(A), which decreases R². Decreased R² leads to increased SEs for the b's. If an outlier also has leverage, it can alter (increase or decrease) the b's.
Regression Outlier (Real Data)
modelCaseAnalysis(m1, Type='residuals')
Regression Outlier (Real Data)
outlierTest(m1, cutoff=.05)
[Output: the largest studentized residual (rstudent) with its unadjusted p-value and Bonferroni-corrected p-value]
Influence (Cartoon data)
An observation is "influential" if it substantially alters the fitted regression model (i.e., the coefficients and/or intercept). Two commonly used assessment methods:
Cook's distance
dfBetas
Which point(s) have the most influence?
Cook's Distance
Cook's distance (Dᵢ) provides a single summary statistic to index how much influence each score has on the overall model. It is based on both the "outlierness" (standardized residual) and the leverage of the observation:
Dᵢ = (e′ᵢ² / P) · (hᵢ / (1 − hᵢ))
Dᵢ > 4 / (N − P) has been proposed as a very liberal cutoff (it identifies a lot of influential points). Dᵢ > qf(.5, P, N − P) has also been employed as a very conservative cutoff. Identification of problematic scores should be considered in the context of the overall distribution of Dᵢ (see the sketch below).
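A base-R sketch applying both cutoffs to m1:

D = cooks.distance(m1)
P = length(coef(m1)); N = length(D)
hist(D)                       # always inspect the distribution
which(D > 4 / (N - P))        # liberal cutoff: flags many cases
which(D > qf(.5, P, N - P))   # conservative cutoff: flags few, if any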
Cook's Distance (Real Data)
modelCaseAnalysis(m1, Type='cooksd')
Influence Bubble Plot (Real Data)
modelCaseAnalysis(m1, Type='influenceplot')
What are the expected effects of each of these points on the model?
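The car package offers a comparable plot (studentized residuals against hat values, with circle area proportional to Cook's distance):

library(car)
influencePlot(m1)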
dfBetas
dfBetaᵢⱼ is an index of how much each regression coefficient (j = 0, …, k) would change if the ith score were deleted:
dfBetaᵢⱼ = bⱼ − bⱼ(−i)
dfBetas (preferred) is the standardized form of the index:
dfBetasᵢⱼ = dfBetaᵢⱼ / SE(bⱼ(−i))
|dfBetas| > 2 may be problematic; |dfBetas| > 2/√N in larger samples (Belsley et al., 1980). Consider the distribution with a histogram (see the sketch below)! You can also visualize influence with an added-variable plot.
One problem is that there can be many dfBetas (a set for each predictor and the intercept). They are most helpful when there is one critical/focal effect.
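A base-R sketch for a single focal predictor (BAC is assumed to be the focal effect here):

db = dfbetas(m1)                               # one column per coefficient, one row per case
hist(db[, 'BAC'])                              # distribution for the focal predictor
which(abs(db[, 'BAC']) > 2 / sqrt(nrow(db)))   # Belsley et al. (1980) cutoff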
dfBetas (Real Data)
modelCaseAnalysis(m1, Type='dfbetas')
Added Variable Plot (Real Data)
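A sketch using car's added-variable plots, which show each coefficient's partial relationship and make influential cases visible:

library(car)
avPlots(m1)   # one panel per coefficient: residualized Y vs. residualized X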
Impact on SEs
In addition to altering regression coefficients (and reducing R²), problematic scores can increase the SEs (i.e., decrease the precision of estimation) of the regression coefficients. COVRATIO is an index of how individual scores affect the overall precision of estimation (the joint confidence region for the set of coefficients). Observations that decrease the precision of estimation have COVRATIOs < 1.0. Belsley et al. (1980) proposed flagging cases with |COVRATIOᵢ − 1| ≥ 3·P/N.
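A base-R sketch; covratio() is one of the standard influence measures in the stats package:

cr = covratio(m1)
P = length(coef(m1)); N = length(cr)
which(abs(cr - 1) >= 3 * P / N)   # Belsley et al. (1980) cutoff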
Impact on SEs (Real Data)
modelCaseAnalysis(m1,Type='covratio')
Enter the Real World
So what do you do?
Overall Impact of Problem Scores: Real Data

Model with all cases (m1):
[Coefficient table: intercept (**), BAC (*), TA (***), Sex (**); Error df: 92]

d2 = lm.removeCases(d1, c('0125'))
m2 = lm(SP ~ BAC + TA + Sex, data=d2)
summary(m2)

Model with the flagged case removed (m2):
[Coefficient table: intercept (***), BAC (**), TA (***), Sex (**); Error df: 91]
Four Examples with Fake Data