Unit 6: Inferences with 2 Predictors
Multiple Regression: 2+ Predictors
Consider how the concepts we have discussed so far generalize to the 2 predictor (3 parameter) model.
We will start with an example with 2 quantitative predictors. Markus will continue with an example with 1 quantitative and 1 dichotomous predictor.
Learn how to quantify, test, and interpret 'partial' effects: bj, ΔR², ηp²
Multicollinearity
Text, table, and figure descriptions of results
Generalization to > 2 predictors is straightforward (Markus; Unit 7)
Benefits of Multiple Predictors
Statistical power: The goal is to increase power to test a focal predictor's effect on the DV by adding it to a model that contains additional known predictors of the DV.
Additional explanatory power: The goal is to demonstrate that the focal predictor adds explanatory power above and beyond the other predictor(s).
Mediation: We have identified a known cause of a DV. We add a new focal predictor to test whether the effect of our known causal IV on the DV is mediated by our focal predictor (i.e., to identify the "mechanism" of the IV effect).
Alcohol and Stress Response Dampening (SRD)
Test for alcohol "stress response dampening"
Manipulate BAC (0.00%–0.15%)
Stressor task (threat of unpredictable shock)
Measure stress response (fear-potentiated startle)
Two Parameter (1 Predictor) Model
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    42.457      6.548   6.484 4.11e-09 ***
BAC          -184.092     95.894  -1.920   0.0579 .

FPS-hat = 42.5 − 184.1 * BACi

Describe the interpretation of b1 (coefficient for BAC) and its significance test.
b1 describes the relationship between BAC and FPS in the units of each measure: FPS decreases by 184 µV for every 1% increase in BAC (i.e., by 1.84 µV for every .01% increase in BAC).
The significance test for β1 tests the null hypothesis that the population relationship between BAC and FPS is 0 (i.e., β1 = 0, no relationship). We fail to reject this H0.
Testing Inferences about β1
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    42.457      6.548   6.484 4.11e-09 ***
BAC          -184.092     95.894  -1.920   0.0579 .

H0: β1 = 0; Ha: β1 ≠ 0
What could we change about the sampling distribution that would make this b1 less probable given H0, so that we reject the null?
If the standard deviation of the sampling distribution (its standard error) were smaller, so that the distribution was narrower, this b1 would be less probable given H0.
Standard Errors of GLM Coefficients
The standard error for a regression coefficient bj in multiple regression is:

SE(bj) = (sY / sj) · √[(1 − R²Y) / (N − P)] · √[1 / (1 − R²j)]

If we increase R²Y, we decrease the SE for our regression coefficient.
NOTE: The formula for the standard error for b0 is different (the one-predictor version is not reproduced here).
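As a quick arithmetic cross-check of this formula (in Python rather than the slides' R workflow), we can plug in the one-predictor values reported earlier. Note that the descriptives round s_BAC to 0.04, so this only approximates the reported SE of 95.894:

```python
import math

# Sketch of SE(bj) = (sY/sj) * sqrt((1 - R2_Y)/(N - P)) * sqrt(1/(1 - R2_j))
def se_bj(s_y, s_j, r2_y, r2_j, n, p):
    return (s_y / s_j) * math.sqrt((1 - r2_y) / (n - p)) * math.sqrt(1 / (1 - r2_j))

# One-predictor model: sY = 37.54, s_BAC = 0.04 (rounded), R2_Y = .03773,
# R2_j = 0 (a lone predictor has no redundancy), N = 96, P = 2
se = se_bj(37.54, 0.04, 0.03773, 0.0, 96, 2)  # ~95, near the reported 95.894
```

The discrepancy from 95.894 comes entirely from the rounded SD of BAC.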
Model Comparison: Testing Inferences about β1
H0: β1 = 0; Ha: β1 ≠ 0
What two models are you comparing when you test hypotheses about β1 for BAC? Describe the logic.
Compact Model: FPS-hat = β0 + 0 * BACi; PC = 1; SSE(C) = 133,888.3
Augmented Model: FPS-hat = β0 + β1 * BACi; PA = 2; SSE(A) = 128,837.1

F(PA − PC, N − PA) = [(SSE(C) − SSE(A)) / (PA − PC)] / [SSE(A) / (N − PA)]

F(1, 94) = 3.685, p = .0579
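The F ratio above can be reproduced directly from the two SSE values on the slide; a short Python cross-check of the arithmetic:

```python
# Model-comparison F for the BAC test in the two-parameter model,
# using the SSE values reported on the slide
sse_c, sse_a = 133888.3, 128837.1
p_c, p_a, n = 1, 2, 96
f = ((sse_c - sse_a) / (p_a - p_c)) / (sse_a / (n - p_a))  # ~3.685 on (1, 94) df
```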
Model Comparison: Testing Inferences about β1
F(PA − PC, N − PA) = [(SSE(C) − SSE(A)) / (PA − PC)] / [SSE(A) / (N − PA)]
What could you change, from this model comparison perspective, to increase F and the probability of rejecting the H0 about β1?
Make SSE(A) smaller by explaining more variance in Yi. Of course,
R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)
so if you decrease SSE(A) (i.e., increase model R²), you will have more power to reject H0 regarding parameter estimates.
Two Parameter (1 Predictor) Model
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    42.457      6.548   6.484 4.11e-09 ***
BAC          -184.092     95.894  -1.920   0.0579 .

Residual standard error: 37.02 on 94 degrees of freedom
Multiple R-squared: 0.03773, Adjusted R-squared: 0.02749
F-statistic: 3.685 on 1 and 94 DF, p-value: 0.05792

What can we do analytically to decrease SSE (increase model R²)?
Include another predictor ('covariate') in the model that accounts for additional variance in Y (reduces SSE). Ideally, this covariate should be orthogonal to (uncorrelated with) the other predictors (BAC).
In this experiment, I measured other predictors of stress response (trait anxiety, sex). Each is a robust predictor of FPS. Neither should be correlated with BAC because I manipulated BAC.
Open, View, & Preliminary Checks
> d = lm.readDat('6_TwoPredictors_FPS')
> some(d)
        BAC  TA    Sex      FPS
0121 0.0000 254   male 33.75778
0123 0.0000 249   male 41.55706
1126 0.0395  25   male 14.18134
1116 0.0455 142   male 12.98316
1022 0.0535 329 female 99.94457
1123 0.0540  96   male 13.76578
2016 0.0750 445 female 30.45311
2023 0.0925 119 female 19.59872
3014 0.1000 105 female 15.02083
3012 0.1240  76 female 10.73328
Open, View, Preliminary Checks
> str(d)
'data.frame': 96 obs. of 4 variables:
 $ BAC: num 0 0 0 0 0 0 0 0 0 0 ...
 $ TA : int 110 120 35 119 26 103 52 34 208 34 ...
 $ Sex: Factor w/ 2 levels "female","male": 2 2 2 2 2 1 2 ...
 $ FPS: num -98.098 -22.529 0.463 1.194 2.728 ...
> d$Sex = NULL
> str(d)
'data.frame': 96 obs. of 3 variables:
Open, View, Preliminary Checks
> lm.describeData(d, detail=2)
    var  n   mean     sd median   min    max  skew kurtosis
BAC   1 96   0.06   0.04   0.06   0.0   0.14 -0.09    -1.09
TA    2 96 147.61 105.73 119.00  10.0 445.00  0.89    -0.06
FPS   4 96  32.19  37.54  19.46 -98.1 162.74  0.62     1.93
Open, View, Preliminary Checks
options(device = windows)
par('cex' = 1.5, 'lwd' = 2)
lm.figSum(d$FPS, 'FPS')
lm.figSum(d$BAC, 'BAC')
lm.figSum(d$TA, 'TA')
Open, View, Preliminary Checks
> corr.test(d)
Correlation matrix
      BAC    TA   FPS
BAC  1.00 -0.02 -0.19
TA  -0.02  1.00  0.44
FPS -0.19  0.44  1.00
Sample Size
    BAC TA FPS
BAC  96 96  96
TA   96 96  96
FPS  96 96  96
Probability values (Entries above the diagonal are adjusted for multiple tests.)
     BAC   TA  FPS
BAC 0.00 0.87 0.12
TA  0.87 0.00 0.00
FPS 0.06 0.00 0.00
Open, View, Preliminary Checks > plot(d$BAC, d$FPS)
Open, View, Preliminary Checks > plot(d$TA, d$FPS)
Open, View, Preliminary Checks > plot(d$BAC, d$TA)
Open, View, Preliminary Checks > spm(d)
The 2 Predictor and General Linear Models
DATA = MODEL + ERROR
2 Predictor Model for Sample Data:
Yi = b0 + b1X1i + b2X2i + ei
Y-hat_i = b0 + b1X1i + b2X2i
k Predictor Model for Sample Data:
Yi = b0 + b1X1i + … + bkXki + ei
Y-hat_i = b0 + b1X1i + … + bkXki
Testing BAC in a 3 Parameter Model (2 Predictors)
m3 = lm(FPS ~ BAC + TA, data = d)
summary(m3)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43122    7.65933   2.537   0.0128 *
BAC         -177.04935   86.58046  -2.045   0.0437 *
TA             0.15332    0.03243   4.727 8.07e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

FPS-hat = 19.4 − 177.0 * BAC + 0.2 * TA

What parameter estimate is used to test our research question about the effect of BAC? What are our H0 and Ha for the associated population parameter?
We use b1 (−177.0) to test our hypothesis about the population effect of BAC (β1). H0: β1 = 0; Ha: β1 ≠ 0
Testing BAC in a 3 Parameter Model (2 Predictors)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43122    7.65933   2.537   0.0128 *
BAC         -177.04935   86.58046  -2.045   0.0437 *
TA             0.15332    0.03243   4.727 8.07e-06 ***

Describe the conclusion and logic of the test of H0: β1 = 0 from the sampling distribution perspective.
If H0 is true, we expect the sampling distribution for b1 to have a mean of 0 and an SE of 86.6. A sample b1 = −177.0 is unlikely (about 2 standard errors below the mean; p = .0437). Therefore we reject our H0 and conclude that β1 ≠ 0. The conclusion is that BAC affects FPS.
Testing BAC in a 3 Parameter Model (2 Predictors)
Describe the conclusion and logic of the test of H0: β1 = 0 from the model comparison perspective.
H0: β1 = 0; Ha: β1 ≠ 0
Compact Model: FPS-hat = β0 + 0 * BAC + β2 * TA; PC = 2; SSE(C) = 108,550.6
Augmented Model: FPS-hat = β0 + β1 * BAC + β2 * TA; PA = 3
F(PA − PC, N − PA) = [(SSE(C) − SSE(A)) / (PA − PC)] / [SSE(A) / (N − PA)]
F(1, 93) = 4.19, p = .0436
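The same arithmetic as the two-parameter comparison, now with the three-parameter SSEs (Python cross-check; SSE(A) = 103,875.8 appears on a later slide):

```python
# Model-comparison F for the BAC test in the three-parameter model
sse_c, sse_a = 108550.6, 103875.8
p_c, p_a, n = 2, 3, 96
f = ((sse_c - sse_a) / (p_a - p_c)) / (sse_a / (n - p_a))  # ~4.19 on (1, 93) df
```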
Testing BAC in a 3 Parameter Model (2 Predictors)
Two parameter model test of BAC:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    42.457      6.548   6.484 4.11e-09 ***
BAC          -184.092     95.894  -1.920   0.0579 .
Residual standard error: 37.02 on 94 degrees of freedom
Multiple R-squared: 0.03773, Adjusted R-squared: 0.02749
F-statistic: 3.685 on 1 and 94 DF, p-value: 0.05792

Three parameter model test of BAC:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43122    7.65933   2.537   0.0128 *
BAC         -177.04935   86.58046  -2.045   0.0437 *
TA             0.15332    0.03243   4.727 8.07e-06 ***
Residual standard error: 33.42 on 93 degrees of freedom
Multiple R-squared: 0.2242, Adjusted R-squared: 0.2075
F-statistic: 13.44 on 2 and 93 DF, p-value: 7.489e-06

What changed about the test of β1 (BAC effect) and why?
Standard Error of Partial Regression Coefficient (bj)
t(N − P) = (bj − 0) / SE(bj)
SE(bj) = (sY / sj) · √[(1 − R²Y) / (N − P)] · √[1 / (1 − R²j)]
R²j = variance in Xj accounted for by all other predictors in the model (i.e., how redundant is Xj in the model?)
What happens to SE(bj) as model R² increases (holding other factors constant)?
SE(bj) decreases as model R² increases. In other words, the sampling distribution gets narrower.
What happens to the significance test of bj as SE(bj) decreases (holding other factors constant)?
t increases and the associated p-value decreases (more power!)
Sampling Distributions and Power
t(N − P) = (bj − 0) / SE(bj)
Two-parameter model: t(96 − 2) = (−184.1 − 0) / 95.9; t(94) = −1.92, p = .0579
Three-parameter model: t(96 − 3) = (−177.1 − 0) / 86.6; t(93) = −2.05, p = .0436
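Both t statistics follow directly from the reported coefficients and standard errors; a Python cross-check of the division:

```python
# t = (b - 0) / SE(b), using the full-precision values from the R output
t2 = -184.092 / 95.894      # two-parameter model:  ~ -1.92
t3 = -177.04935 / 86.58046  # three-parameter model: ~ -2.045
```

The smaller SE in the three-parameter model pushes the same-sized coefficient further into the tail.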
Sampling Distributions and Precision
CI(b) = b ± t(α; N − P) · SE(b)
> confint(m2)
                  2.5 %   97.5 %
(Intercept)    29.45597 55.45772
BAC          -374.49261  6.30872    (width: 380.80)
> confint(m3)
                  2.5 %   97.5 %
(Intercept)     4.22130 34.6411
BAC          -348.98099 -5.1177     (width: 343.86)
TA              0.08892  0.2177
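The narrowing of the BAC confidence interval can be confirmed by computing the widths from the reported bounds (Python cross-check):

```python
# CI widths for BAC, upper bound minus lower bound
w2 = 6.30872 - (-374.49261)   # two-parameter model:  ~380.80
w3 = -5.1177 - (-348.98099)   # three-parameter model: ~343.86
```

The three-parameter interval is about 37 units narrower: adding TA increased precision as well as power.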
Standard Error of Partial Regression Coefficient (bj)
t(N − P) = (bj − 0) / SE(bj)
SE(bj) = (sY / sj) · √[(1 − R²Y) / (N − P)] · √[1 / (1 − R²j)]
R²j = variance in Xj accounted for by all other predictors in the model (i.e., how redundant is Xj in the model?)
What other factors affect the SE for regression coefficients, and how?
Increasing N decreases SE (increases power)
Increasing P increases SE (decreases power)
Increasing sY increases SE (decreases power)
Increasing sj decreases SE (increases power)
Increasing R²j increases SE (decreases power)
R²j and Multicollinearity
t(N − P) = (b − 0) / SE(b)
CI(b) = b ± t(α; N − P) · SE(b)
This decrease in power and precision for model parameters (regression coefficients) associated with redundancy among the predictors is called the problem of multicollinearity.
R²j and Multicollinearity
It is NOT sufficient to examine only the bivariate correlations among predictors. To determine if a problem exists, calculate the Variance Inflation Factor (VIF) for each predictor:
VIFj = 1 / (1 − R²j)
The VIF tells you how much the sampling variance of bj is increased because of redundancy; the SE is increased by a factor of √VIF. VIFs > 5 are considered problematic (SE increased by a factor of about 2.2).
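A minimal sketch of the VIF arithmetic (Python, for illustration): an R²j of .80, i.e., 80% of Xj's variance is redundant with the other predictors, yields VIF = 5 and an SE inflated by √5 ≈ 2.24:

```python
import math

def vif(r2_j):
    # Variance Inflation Factor for predictor j
    return 1.0 / (1.0 - r2_j)

v = vif(0.80)               # R2_j = .80 -> VIF = 5
inflation = math.sqrt(v)    # SE is inflated by sqrt(VIF), ~2.24
```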
R²j and Multicollinearity
Use vif() to calculate VIFs:
> vif(m3)
     BAC       TA
1.000296 1.000296
SPSS users will be more familiar with Tolerance: Tolerance of Xj = 1 − R²j. As tolerance decreases toward 0, multicollinearity increases.
R²j and Multicollinearity
Solutions for problems with multicollinearity include:
Drop the redundant variable.
Use data reduction (e.g., PCA or factor analysis) to produce factors that reflect the major sources of variance among the redundant predictors.
Multicollinearity is only a problem for the variables in the model with high VIFs. If you don't care about testing those variables, it is not a problem.
Power and SSE in 2 and 3 Parameter Models
Two Parameter Model Test of BAC:
Compact Model: FPS-hat = 32.2 + 0 * BAC; PC = 1; SSE(C) = 133,888.3
Augmented Model: FPS-hat = 42.5 − 184.1 * BAC; PA = 2; SSE(A) = 128,837.1
Three Parameter Model Test of BAC:
Compact Model: FPS-hat = 9.4 + 0 * BAC + 0.2 * TA; PC = 2; SSE(C) = 108,550.6
Augmented Model: FPS-hat = 19.4 − 177.1 * BAC + 0.2 * TA; PA = 3; SSE(A) = 103,875.8
F(PA − PC, N − PA) = [(SSE(C) − SSE(A)) / (PA − PC)] / [SSE(A) / (N − PA)]
How can you see the increase in power from the model comparison perspective?
Power and SSE in 2 and 3 Parameter Models
Two Parameter Model Test of BAC:
F(2 − 1, 96 − 2) = [(133,888.3 − 128,837.1) / (2 − 1)] / [128,837.1 / (96 − 2)]
F(1, 94) = 5051.2 / 1370.6
F(1, 94) = 3.69, p = .0579
Three Parameter Model Test of BAC:
F(3 − 2, 96 − 3) = [(108,550.6 − 103,875.8) / (3 − 2)] / [103,875.8 / (96 − 3)]
F(1, 93) = 4674.8 / 1116.9
F(1, 93) = 4.19, p = .0436
SSE(A) decreased in the three parameter model. This is the flip side of the increased model R².
Power and SSE in 2 and 3 Parameter Models
F(PA − PC, N − PA) = [(SSE(C) − SSE(A)) / (PA − PC)] / [SSE(A) / (N − PA)]
The impact of N and PA on power is also clear in this formula.
The impact of sY, sXj, and multicollinearity is less clear in this formula.
The connection to precision of parameter estimation is also less clear in this formula.
Interpretation of Multiple Regression Coefficients
What did the value of b1 tell us in a regression model with one predictor?
The change in Y associated with a one unit increase in X1: for every 1 unit increase in X1, there is a b1 unit change in Y.
And now, bj with multiple (e.g., 2) predictors?
The change in Y associated with a one unit increase in Xj, controlling for all other predictors in the model. "Controlling for" means holding constant: for every 1 unit increase in Xj, there is a bj unit change in Y, holding all other predictors constant.
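The "holding constant" interpretation can be made concrete with the fitted three-parameter model (Python sketch; the TA value of 150 below is an arbitrary illustrative choice, and any fixed TA gives the same answer):

```python
# Predicted FPS from the three-parameter model, coefficients from the R output
def fps_hat(bac, ta):
    return 19.43122 - 177.04935 * bac + 0.15332 * ta

# Hold TA constant and raise BAC by .01: predicted FPS changes by b1 * .01
delta = fps_hat(0.06, 150.0) - fps_hat(0.05, 150.0)  # ~ -1.77 microvolts
```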
Interpretation of Multiple Regression Coefficients
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    42.457      6.548   6.484 4.11e-09 ***
BAC          -184.092     95.894  -1.920   0.0579 .

              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43206    7.65908   2.537   0.0128 *
TA             0.15335    0.03244   4.727 8.07e-06 ***
BAC         -177.12437   86.57963  -2.046   0.0436 *

Why did b for BAC get smaller when TA was controlled (hint: consider the bivariate relationships among all variables)?
> print(cor(d), digits=2)
       BAC     TA   FPS
BAC  1.000 -0.017 -0.19
TA  -0.017  1.000  0.44
FPS -0.194  0.435  1.00
Interpretation of Multiple Regression Coefficients
Why did b for BAC get smaller (in magnitude) when TA was controlled (hint: consider the bivariate relationships among all variables)?
       BAC     TA   FPS
BAC  1.000 -0.017 -0.19
TA  -0.017  1.000  0.44
FPS -0.194  0.435  1.00
When TA increases, FPS increases. BAC and TA are nearly uncorrelated (r = −.02), so the coefficient for BAC changed very little. The small decrease in magnitude occurs because participants with higher BAC happened to have slightly lower TA, and part of the bivariate BAC effect reflects that small difference in TA. The partial effect of BAC on FPS, with TA held constant (i.e., controlled), removes that piece and so is slightly smaller than the bivariate effect.
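This logic can be checked with the standard two-predictor partial-slope formula, b1 = [(rY1 − rY2·r12) / (1 − r12²)] · (sY/s1). The sketch below (Python) uses the rounded correlations and SDs from the slides, so it only approximates the reported −177.0:

```python
# Partial (unstandardized) slope for BAC from bivariate correlations and SDs
r_y1, r_y2, r_12 = -0.194, 0.435, -0.017   # FPS~BAC, FPS~TA, BAC~TA (rounded)
s_y, s_1 = 37.54, 0.04                     # SDs of FPS and BAC (rounded)

beta1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)  # standardized partial slope
b1 = beta1 * s_y / s_1                          # unstandardized, ~ -175
```

Because r12 is negative and rY2 is positive, the partial slope is slightly smaller in magnitude than the bivariate slope.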
Interpretation of Multiple Regression Coefficients
In what situation would bj for a focal predictor not change when you added an additional predictor (covariate)?
If the new predictor were completely uncorrelated (orthogonal) with the focal predictor, there would be no change in the parameter estimate when you added it.
This is why uncorrelated predictors/covariates are considered easier to interpret. If they are related to the DV, they will increase power to test your focal variable, but they will not change the estimate of the magnitude of the focal variable's parameter estimate. Completely orthogonal predictors are typically observed only in experimental designs.
Interpretation of Multiple Regression Coefficients
Two Parameter Model Test of BAC:
F(2 − 1, 96 − 2) = [(133,888.3 − 128,837.1) / (2 − 1)] / [128,837.1 / (96 − 2)]
F(1, 94) = 5051.2 / 1370.6 = 3.69, p = .0579
Three Parameter Model Test of BAC:
F(3 − 2, 96 − 3) = [(108,550.6 − 103,875.8) / (3 − 2)] / [103,875.8 / (96 − 3)]
F(1, 93) = 4674.8 / 1116.9 = 4.19, p = .0436
SSR = SSE(C) − SSE(A)
Coefficient of Determination (R²)
The proportion of variance in Y explained by the set of all model predictors (i.e., the proportion of variance in Y predicted by all Xs in the model).
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43122    7.65933   2.537   0.0128 *
BAC         -177.04935   86.58046  -2.045   0.0437 *
TA             0.15332    0.03243   4.727 8.07e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 33.42 on 93 degrees of freedom
Multiple R-squared: 0.2242, Adjusted R-squared: 0.2075
F-statistic: 13.43 on 2 and 93 DF, p-value: 7.493e-06
Coefficient of Determination (R²)
R² for any augmented model is:
R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)
Mean-Only Model: FPS-hat = β0; SSE(Mean-only) = 133,888.3
Augmented Model: FPS-hat = β0 + β1 * BACi + β2 * TAi; SSE(A) = 103,875.8
R² = (133,888.3 − 103,875.8) / 133,888.3 = 0.2242
In this augmented model, R² describes the combined effect of BAC and TA. In more complex models, R² will always be the predictive strength of the set of all predictors.
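A one-line Python cross-check confirms that the two SSEs on this slide reproduce the Multiple R-squared of 0.2242 in the R output:

```python
# R2 from the mean-only and augmented (BAC + TA) SSEs
sse_mean_only, sse_a = 133888.3, 103875.8
r2 = (sse_mean_only - sse_a) / sse_mean_only  # ~0.2242
```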
Coefficient of Determination (R²)
R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)
SSE for the mean-only model is the numerator of the formula for the variance of Yi:
SSE(Mean-only) = Σ(Yi − Ȳ)²
s²Y = Σ(Yi − Ȳ)² / (N − 1)
SSE in general is proportional to a variance (SSE / (N − P)): SSE for any model is proportional to the variance of the raw scores around the predicted values from that model.
R² and the Mean-Only Model
Why does the mean-only model not have an R²? It explains no variance in Yi because it predicts the same value (the mean) for every person. This also clearly follows from the formula:
R² = [SSE(Mean-only) − SSE(Mean-only)] / SSE(Mean-only) = 0
Effect Size Estimates
As in the one predictor model, the parameter estimates in the two predictor model (and the k predictor model) are attractive effect size estimates. In addition, there are variance-based effect size estimates that are also attractive. You have already learned about partial eta-squared (ηp²), which Judd et al. refer to as PRE. You will now also learn about delta R² (ΔR²).
Partial Eta² (ηp²) or PRE
ηp² (PRE) describes how much SSE was reduced (proportionally) in the augmented model, where we estimated a specific parameter, vs. the associated compact model, where we fixed that parameter to 0.
Compact model: FPS-hat = β0 + 0 * BACi + β2 * TAi; SSE(C) = 108,547.946
Augmented model: FPS-hat = β0 + β1 * BACi + β2 * TAi; SSE(A) = 103,875.8
How much was the error reduced by estimating β1 for BAC?
ηp² = [SSE(C) − SSE(A)] / SSE(C) = (108,547.946 − 103,875.8) / 108,547.946 = .043
Delta R² (ΔR²)
ΔR² is the increase in model R² for the augmented model, where we estimated a specific parameter, vs. the associated compact model, where we fixed that parameter to 0.
Compact model: FPS-hat = β0 + 0 * BACi + β2 * TAi
Residual standard error: 33.98 on 94 degrees of freedom
Multiple R-squared: 0.1893, Adjusted R-squared: 0.1806
F-statistic: 21.94 on 1 and 94 DF, p-value: 9.449e-06
Augmented model: FPS-hat = β0 + β1 * BACi + β2 * TAi
Residual standard error: 33.42 on 93 degrees of freedom
Multiple R-squared: 0.2242, Adjusted R-squared: 0.2075
F-statistic: 13.43 on 2 and 93 DF, p-value: 7.493e-06
ΔR² = .2242 − .1893 = .0349
Delta R² (ΔR²)
ΔR² can also be defined with respect to SSE.
Compact model: FPS-hat = β0 + 0 * BACi + β2 * TAi; SSE(C) = 108,547.946
Augmented model: FPS-hat = β0 + β1 * BACi + β2 * TAi; SSE(A) = 103,875.8
How much was the error reduced by estimating β1 for BAC, relative to the total variability in Y?
ΔR² = [SSE(C) − SSE(A)] / SSE(Mean-only) = (108,547.946 − 103,875.8) / 133,888.28 = .0349
Comparing Variance Indices
R² = [SSE(Mean-only) − SSE(A)] / SSE(Mean-only)
ΔR² = [SSE(C) − SSE(A)] / SSE(Mean-only)
ηp² = [SSE(C) − SSE(A)] / SSE(C)
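Computing all three indices from the SSEs reported on the earlier slides makes the comparison concrete, including why ηp² is always at least as large as ΔR² (its denominator, SSE(C), is never larger than SSE(Mean-only)). A Python cross-check:

```python
# SSEs from the slides: mean-only model, compact (TA only), augmented (BAC + TA)
sse_mean, sse_c, sse_a = 133888.28, 108547.946, 103875.8

r2       = (sse_mean - sse_a) / sse_mean  # full-model R2,     ~.224
delta_r2 = (sse_c - sse_a) / sse_mean     # Delta R2 for BAC,  ~.035
peta2    = (sse_c - sse_a) / sse_c        # partial eta2 for BAC, ~.043
```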
Comparing Variance Indices
R²: Describes the proportion of variance in Y explained by the full model relative to the total variance in Y. Cannot be used for a specific predictor. Not very useful by itself in most fields.
ΔR²: Describes the proportion of unique variance in Y explained by Xj relative to the total variance in Y. If the Xs are orthogonal, the ΔR²s sum to R². Anchored to total Y variance (same denominator for all Xs).
ηp²: Describes the proportional reduction of unexplained variance (SSE) from adding Xj. ηp² ≥ ΔR² (stupid!). SPSS reports it (stupid!). Stable in experimental designs when additional IVs are added.
Visualizing the Model
FPS-hat = 19.4 − 177.1 * BACi + 0.2 * TAi
Visualizing the Model > e = effect('BAC', m3) > plot(e)
Visualizing the Model e = effect('BAC*TA', m3, default.levels =3) plot(e, multiline=TRUE)
Visualizing the Model
What do you report and why???
> summary(m3)
lm(formula = FPS ~ BAC + TA, data = d)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   19.43122    7.65933   2.537   0.0128 *
BAC         -177.04935   86.58046  -2.045   0.0437 *
TA             0.15332    0.03243   4.727 8.07e-06 ***
Residual standard error: 33.42 on 93 degrees of freedom
Multiple R-squared: 0.2242, Adjusted R-squared: 0.2075
F-statistic: 13.43 on 2 and 93 DF, p-value: 7.493e-06
> lm.sumSquares(m3)
                    SSR dR-sqr pEta-sqr
(Intercept)    7188.784 0.0537   0.0647
BAC            4670.745 0.0349   0.0430
TA            24959.868 0.1864   0.1937
Error (SSE)  103877.201
Total (SST)  133888.282
What do you report and why???
> summary(m3c)
lm(formula = FPS ~ BAC + cTA, data = d)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   42.06411    5.91157   7.116 2.28e-10 ***
BAC         -177.04935   86.58046  -2.045   0.0437 *
cTA            0.15332    0.03243   4.727 8.07e-06 ***
Residual standard error: 33.42 on 93 degrees of freedom
Multiple R-squared: 0.2242, Adjusted R-squared: 0.2075
F-statistic: 13.43 on 2 and 93 DF, p-value: 7.493e-06
> lm.sumSquares(m3c)
                     SS dR-sqr pEta-sqr
(Intercept)   56552.949 0.4224   0.3525
BAC            4670.745 0.0349   0.0430
cTA           24959.868 0.1864   0.1937
Error (SSE)  103877.201
Total (SST)  133888.282
Describing Model Results
We regressed fear-potentiated startle (FPS) on blood alcohol concentration (BAC). We included trait anxiety (mean-centered) as a covariate in this model to increase power to test substantive questions about BAC. We report and test partial effects, controlling for all other predictors in the model, from the full model that included both predictors. We provide raw regression coefficients, 95% confidence intervals for these coefficients, and partial eta² (ηp²) to quantify effect sizes for each predictor in Table 1. FPS was 42.1 µV for participants with 0.00% BAC and average trait anxiety, t(93) = 7.12, p < .001, indicating that our threat manipulation successfully increased FPS above zero when participants were sober. As expected, the effect of the trait anxiety covariate was significant and reduced error variance by approximately 19%, t(93) = 4.73, p < .001; FPS increased by 0.2 µV for every 1 unit increase in trait anxiety. As predicted, the effect of BAC was significant and reduced error variance by approximately 4%, t(93) = 2.05, p = .044. FPS decreased by 1.8 µV for every .01% increase in BAC (see Figure 1).
Describing Model Results
All results can be summarized concisely in a table.

                     b   95% CI(b)        ηp²    t      p
Intercept         42.1   (30.3, 53.8)   0.353  7.12  < .001***
Trait Anxiety      0.2   (0.1, 0.2)     0.198  4.73  < .001***
BAC             -177.1   (-349.1, -5.2) 0.043  2.04  .044*

Notes: R² = .224, F(2, 93) = 13.44, p < .001***. Trait anxiety was mean-centered.
FPS-hat = 42.1 − 177.1 * BAC + 0.2 * Trait Anxiety (mean-centered)
Describing Model Results (Simultaneous)
What other table should you consider in your results? A table of simple correlations between the variables. It can concisely summarize other important information as well (could also include reliability, skewness, kurtosis, etc.).

                 FPS   Trait Anxiety    BAC
Trait Anxiety    .44
BAC             -.19       -.02
M               32.2      147.6        0.06
SD              37.5      105.7        0.04