Bayesian style dependent tests


Bayesian style dependent tests
11/29/2018
Greg Francis
PSY 626: Bayesian Statistics for Psychological Science
Fall 2018, Purdue University

Weapon priming

Anderson, Benjamin, & Bartholow (1998): 32 subjects read an aggression-related word (injure, shatter) out loud as fast as possible:
after presentation of a weapon word (shotgun, grenade)
after presentation of a neutral word (rabbit, fish)
There were additional trials in which non-aggression-related words were read (we ignore them).
Our data are the mean response times for word reading, based on whether the first (prime) word is a weapon word or a neutral word.
Follow along by downloading "WeaponPrime.csv" and "WeaponPrime1.R" from the class web site.

Ignore dependencies?

WPdata <- read.csv(file="WeaponPrime.csv", header=TRUE, stringsAsFactors=TRUE)
traditional1 <- t.test(Time ~ Prime, data=WPdata, var.equal=TRUE)
print(traditional1)

Two Sample t-test
data: Time by Prime
t = 0.54967, df = 62, p-value = 0.5845
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-18.86865 33.18115
sample estimates:
mean in group Neutral  mean in group Weapon
416.3438               409.1875

Standard analysis

# Run traditional dependent t-test
traditional2 <- t.test(Time ~ Prime, data=WPdata, paired=TRUE)
print(traditional2)

Paired t-test
data: Time by Prime
t = 2.2239, df = 31, p-value = 0.03358
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.593243 13.719257
sample estimates:
mean of the differences
7.15625

Note: the t-value and p-value change, but the sample means do not; the difference is 7.1563. Note also that the confidence interval of the difference changes a lot.
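The paired result can also be obtained by hand, which makes the dependence explicit: a paired t-test is just a one-sample t-test on the within-subject differences. A minimal sketch, assuming each subject contributes one row per prime condition and the two subsets end up in the same subject order:

```r
# Paired t-test as a one-sample t-test on within-subject differences
# (assumes the Neutral and Weapon rows line up by subject)
neutral <- WPdata$Time[WPdata$Prime == "Neutral"]
weapon  <- WPdata$Time[WPdata$Prime == "Weapon"]
t.test(neutral - weapon)
```

If the rows do line up by subject, this should reproduce the same t, df, and p-value as the paired test above.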

Regression analysis

# Compare to traditional independent linear regression
print(summary(lm(Time ~ Prime, data=WPdata)))

This prints out a table of estimates of the Intercept (Neutral prime) and the deviation for the Weapon prime (the difference of means).

Call:
lm(formula = Time ~ Prime, data = WPdata)
Residuals:
Min 1Q Median 3Q Max
-123.187 -27.555 4.734 27.195 127.656
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 416.344 9.206 45.23 <2e-16 ***
PrimeWeapon -7.156 13.019 -0.55 0.585
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 52.08 on 62 degrees of freedom
Multiple R-squared: 0.00485, Adjusted R-squared: -0.0112
F-statistic: 0.3021 on 1 and 62 DF, p-value: 0.5845

Regression analysis

# Compare to traditional dependent linear regression
library(lme4)
print(lmer(Time ~ Prime + (1 | Subject), data = WPdata))

This prints out a table of estimates of the Intercept (Neutral prime) and the deviation for the Weapon prime (the difference of means).

Linear mixed model fit by REML ['lmerMod']
Formula: Time ~ Prime + (1 | Subject)
Data: WPdata
REML criterion at convergence: 606.886
Random effects:
Groups   Name        Std.Dev.
Subject  (Intercept) 50.46
Residual             12.87
Number of obs: 64, groups: Subject, 32
Fixed Effects:
(Intercept)  PrimeWeapon
416.344      -7.156

Bayesian variation of t-test

Treat the data as independent:

library(brms)
model1 = brm(Time ~ Prime, data = WPdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

(Compare with the lm() output on the earlier slide; the estimates are nearly identical.)

Family: gaussian
Links: mu = identity; sigma = identity
Formula: Time ~ Prime
Data: WPdata (Number of observations: 64)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept 416.68 9.12 398.96 434.34 2700 1.00
PrimeWeapon -7.65 13.40 -33.82 19.07 2700 1.00
Family Specific Parameters:
sigma 52.50 4.63 44.24 62.25 2495 1.00

Bayesian variation of t-test

Treat the data as dependent (a different intercept for each subject):

model2 = brm(Time ~ Prime + (1 | Subject), data = WPdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

(Compare with the lmer() output on the earlier slide.)

Family: gaussian
Links: mu = identity; sigma = identity
Formula: Time ~ Prime + (1 | Subject)
Data: WPdata (Number of observations: 64)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~Subject (Number of levels: 32)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 51.18 6.63 39.92 66.44 488 1.00
Population-Level Effects:
Intercept 416.92 8.82 399.12 434.25 272 1.00
PrimeWeapon -7.16 3.43 -13.75 -0.45 2429 1.00
Family Specific Parameters:
sigma 13.43 1.80 10.49 17.49 1612 1.00

Model comparison

Does the independent or the dependent model better fit the data?

> loo(model1, model2)
LOOIC SE
model1 691.99 13.17
model2 554.79 9.50
model1 - model2 137.20 12.87
Warning messages:
1: In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
2: In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
3: In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
4: Found 16 observations with a pareto_k > 0.7 in model 'model2'. With this many problematic observations, it may be more appropriate to use 'kfold' with argument 'K = 10' to perform 10-fold cross-validation rather than LOO.

Model comparison

Does the independent or the dependent model better fit the data? The dependent model wins by a wide margin, which should be no surprise!

kfold(model1, model2)
......
KFOLDIC SE
model1 691.78 12.91
model2 563.56 10.25
model1 - model2 128.22 13.42
There were 50 or more warnings (use warnings() to see the first 50)

Null model

No effect of prime, but different intercepts for different subjects:

model3 = brm(Time ~ 1 + (1 | Subject), data = WPdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

> summary(model3)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: Time ~ 1 + (1 | Subject)
Data: WPdata (Number of observations: 64)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~Subject (Number of levels: 32)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 51.62 6.77 39.98 66.46 539 1.01
Population-Level Effects:
Intercept 413.64 9.26 396.23 432.35 355 1.01
Family Specific Parameters:
sigma 14.15 1.85 10.98 18.07 1591 1.00

Model comparison

Does the null model (model3) or the two-mean model (model2) better fit the data? Try:

loo(model2, model3)
kfold(model2, model3)
......
KFOLDIC SE
model2 571.58 11.75
model3 574.79 13.02
model2 - model3 -3.22 10.63

Dependent ANOVA

ADHD treatment effects: children (n = 24) were given different dosages of the drug MPH (methylphenidate) over different weeks.
The measure is the ability to delay impulsive behavioral responses (waiting long enough before pressing a key to get a "star"): the Delay of Gratification task.

Dependent ANOVA

Standard dependent ANOVA.
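The slide does not show code for this analysis. One way a standard repeated-measures (dependent) ANOVA might be run in base R (a sketch, assuming the ADHD data are in ATdata with a Dosage factor and a SubjectID column, as in the later slides):

```r
# Repeated-measures ANOVA with subjects as the error stratum
ATdata$SubjectID <- factor(ATdata$SubjectID)
summary(aov(CorrectResponses ~ Dosage + Error(SubjectID/Dosage), data = ATdata))
```

The Error(SubjectID/Dosage) term tells aov() to test the Dosage effect against the within-subject variability rather than the between-subject variability.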

Regression analysis

# Compare to traditional dependent linear regression
library(lme4)
print(summary( lmer(CorrectResponses ~ Dosage + (1 | SubjectID), data = ATdata)) )

This prints out a table of estimates of the Intercept (the D0 dosage) and the deviations for the other dosages. There is some debate about degrees of freedom and computing p-values for mixed models, which is why lmer() reports no p-values here.

Linear mixed model fit by REML ['lmerMod']
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata
REML criterion at convergence: 658.3
Scaled residuals:
Min 1Q Median 3Q Max
-2.3690 -0.5831 0.1585 0.5923 1.9393
Random effects:
Groups Name Variance Std.Dev.
SubjectID (Intercept) 89.56 9.464
Residual 35.89 5.991
Number of obs: 96, groups: SubjectID, 24
Fixed effects:
Estimate Std. Error t value
(Intercept) 39.75000 2.28635 17.386
DosageD15 -0.08333 1.72948 -0.048
DosageD30 4.58333 1.72948 2.650
DosageD60 4.95833 1.72948 2.867
Correlation of Fixed Effects:
(Intr) DsgD15 DsgD30
DosageD15 -0.378
DosageD30 -0.378 0.500
DosageD60 -0.378 0.500 0.500

Bayesian regression

model1 = brm(CorrectResponses ~ Dosage + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

# print out summary of model
print(summary(model1))

(Compare with the lmer() output on the previous slide; the estimates agree closely.)

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.73 1.63 7.02 13.49 1091 1.00
Population-Level Effects:
Intercept 39.63 2.29 35.27 44.30 1213 1.00
DosageD15 -0.08 1.77 -3.44 3.25 2424 1.00
DosageD30 4.59 1.74 1.34 8.05 2576 1.00
DosageD60 4.93 1.76 1.48 8.46 2306 1.00
Family Specific Parameters:
sigma 6.09 0.53 5.19 7.19 1977 1.00
Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat = 1).

Bayesian regression

Check the chains:

plot(model1)

Bayesian regression

Plot the marginal effects:

dev.new()
plot(marginal_effects(model1), points = TRUE)

Bayesian regression

Consider a null model (no differences between the means):

model2 = brm(CorrectResponses ~ 1 + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ 1 + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.67 1.68 6.86 13.49 1288 1.00
Population-Level Effects:
Intercept 41.92 2.10 37.90 46.12 1181 1.00
Family Specific Parameters:
sigma 6.61 0.56 5.63 7.76 2084 1.00
Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat = 1).

Bayesian regression

Compare the null (model2) and full (model1) models:

> loo(model1, model2, reloo=TRUE)
No problematic observations found. Returning the original 'loo' object.
2 problematic observation(s) found. The model will be refit 2 times.
Fitting model 1 out of 2 (leaving out observation 2)
Start sampling
Gradient evaluation took 2.7e-05 seconds
1000 transitions using 10 leapfrog steps per transition would take 0.27 seconds.
Adjust your expectations accordingly!
Elapsed Time: 0.365972 seconds (Warm-up)
0.257485 seconds (Sampling)
0.623457 seconds (Total)
........
LOOIC SE
model1 647.94 13.74
model2 658.79 12.98
model1 - model2 -10.86 8.53

> loo(model1, model2)
LOOIC SE
model1 647.94 13.74
model2 659.03 13.07
model1 - model2 -11.10 8.56
Warning message:
Found 2 observations with a pareto_k > 0.7 in model 'model2'. It is recommended to set 'reloo = TRUE' in order to calculate the ELPD without the assumption that these observations are negligible. This will refit the model 2 times to compute the ELPDs for the problematic observations directly.

Standard ANOVA

Contrasts are t-tests (or F-tests, if you like).
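For example, the frequentist contrasts could be run as pairwise dependent t-tests (a sketch; pairwise.t.test with paired = TRUE assumes the observations within each dosage level are ordered by the same subjects):

```r
# All pairwise paired t-tests across dosage levels, with a
# Bonferroni correction for multiple comparisons
pairwise.t.test(ATdata$CorrectResponses, ATdata$Dosage,
                paired = TRUE, p.adjust.method = "bonferroni")
```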

Bayesian Regression

Contrasts: it is easiest to look at the posteriors. Comparisons to D0 (the Intercept) are easy:

> post<-posterior_samples(model1)
>
> # What is the probability that mean for D15 is larger than mean for D0?
> cat("Probability CorrectResponses mean D15 is larger than mean D0 = ", length(post$b_DosageD15[post$b_DosageD15 >0])/length(post$b_DosageD15) )
Probability CorrectResponses mean D15 is larger than mean D0 = 0.4833333
> cat("Probability CorrectResponses mean D30 is larger than mean D0 = ", length(post$b_DosageD30[post$b_DosageD30 >0])/length(post$b_DosageD30) )
Probability CorrectResponses mean D30 is larger than mean D0 = 0.9937037
> cat("Probability CorrectResponses mean D60 is larger than mean D0 = ", length(post$b_DosageD60[post$b_DosageD60 >0])/length(post$b_DosageD60) )
Probability CorrectResponses mean D60 is larger than mean D0 = 0.9959259

Bayesian Regression

Contrasts: it is easiest to look at the posteriors, but other comparisons are a bit more complicated:

# What is the probability that mean for D60 is larger than mean for D15?
meanD15 <- post$b_Intercept + post$b_DosageD15
meanD60 <- post$b_Intercept + post$b_DosageD60
cat("Probability CorrectResponses mean D60 is larger than mean D15 = ", sum(meanD60 > meanD15)/length(meanD60) )
Probability CorrectResponses mean D60 is larger than mean D15 = 0.9966667

meanD30 <- post$b_Intercept + post$b_DosageD30
cat("Probability CorrectResponses mean D60 is larger than mean D30 = ", sum(meanD60 > meanD30)/length(meanD60) )
Probability CorrectResponses mean D60 is larger than mean D30 = 0.5811111
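The same posterior samples support more than exceedance probabilities; for instance, a credible interval for the D60 vs. D30 contrast falls out directly (continuing from the variables defined above):

```r
# Posterior distribution of the D60 - D30 contrast:
# median and 95% credible interval
diffD60D30 <- meanD60 - meanD30
quantile(diffD60D30, probs = c(0.025, 0.5, 0.975))
```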

Setting priors

It is not always obvious what can be set; brms has a nice function, get_prior():

> get_prior(CorrectResponses ~ Dosage + (1 |SubjectID), data = ATdata)
                 prior     class      coef     group resp dpar nlpar bound
1                            b
2                            b DosageD15
3                            b DosageD30
4                            b DosageD60
5 student_t(3, 39, 10) Intercept
6  student_t(3, 0, 10)        sd
7                           sd           SubjectID
8                           sd Intercept SubjectID
9  student_t(3, 0, 10)     sigma
>

Setting priors

Let's set an informative (bad) prior on the slopes:

model3 = brm(CorrectResponses ~ Dosage + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2, prior = c( prior(normal(10, 1), class = "b")) )

Default-prior fit (model1), for comparison:

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.73 1.63 7.02 13.49 1091 1.00
Population-Level Effects:
Intercept 39.63 2.29 35.27 44.30 1213 1.00
DosageD15 -0.08 1.77 -3.44 3.25 2424 1.00
DosageD30 4.59 1.74 1.34 8.05 2576 1.00
DosageD60 4.93 1.76 1.48 8.46 2306 1.00
Family Specific Parameters:
sigma 6.09 0.53 5.19 7.19 1977 1.00

Fit with the normal(10, 1) prior on the slopes (model3):

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.56 1.64 6.90 13.35 1102 1.00
Population-Level Effects:
Intercept 35.03 2.10 30.85 39.18 1223 1.00
DosageD15 8.13 0.90 6.40 9.91 2179 1.00
DosageD30 9.70 0.87 8.09 11.43 2210 1.00
DosageD60 9.81 0.87 8.15 11.52 1922 1.00
Family Specific Parameters:
sigma 6.90 0.61 5.83 8.19 2256 1.00

Setting priors

Let's set a crazy prior on just one slope:

model4 = brm(CorrectResponses ~ Dosage + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2, prior = c( prior(normal(20, 1), class = "b", coef="DosageD60")) )

Default-prior fit (model1), for comparison:

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.73 1.63 7.02 13.49 1091 1.00
Population-Level Effects:
Intercept 39.63 2.29 35.27 44.30 1213 1.00
DosageD15 -0.08 1.77 -3.44 3.25 2424 1.00
DosageD30 4.59 1.74 1.34 8.05 2576 1.00
DosageD60 4.93 1.76 1.48 8.46 2306 1.00
Family Specific Parameters:
sigma 6.09 0.53 5.19 7.19 1977 1.00

Fit with the normal(20, 1) prior on DosageD60 (model4):

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ Dosage + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.26 1.63 6.59 13.01 1553 1.00
Population-Level Effects:
Intercept 33.21 2.23 28.72 37.62 1665 1.00
DosageD15 6.21 2.08 2.30 10.35 2327 1.00
DosageD30 10.90 2.05 6.94 14.93 2176 1.00
DosageD60 17.63 0.98 15.68 19.53 2448 1.00
Family Specific Parameters:
sigma 8.05 0.76 6.71 9.64 2059 1.00

Bayesian regression

Compare all our models. Different priors are different models!

> loo(model1, model2, model3, model4, reloo=TRUE)
No problematic observations found. Returning the original 'loo' object.
2 problematic observation(s) found. The model will be refit 2 times.
Fitting model 1 out of 2 (leaving out observation 2)
Start sampling
Gradient evaluation took 9.5e-05 seconds
1000 transitions using 10 leapfrog steps per transition would take 0.95 seconds.
Adjust your expectations accordingly!
Elapsed Time: 0.285336 seconds (Warm-up)
0.248522 seconds (Sampling)
0.533858 seconds (Total)
........
LOOIC SE
model1 647.94 13.74
model2 658.78 12.98
model3 669.23 11.38
model4 698.55 13.77
model1 - model2 -10.84 8.53
model1 - model3 -21.29 9.67
model1 - model4 -50.61 12.17
model2 - model3 -10.45 12.02
model2 - model4 -39.77 17.23
model3 - model4 -29.32 9.55

> loo(model1, model2, model3, model4)
LOOIC SE
model1 647.94 13.74
model2 659.03 13.07
model3 669.11 11.33
model4 698.55 13.77
model1 - model2 -11.10 8.56
model1 - model3 -21.18 9.69
model1 - model4 -50.61 12.17
model2 - model3 -10.08 12.07
model2 - model4 -39.52 17.28
model3 - model4 -29.43 9.58
Warning messages:
1: Found 2 observations with a pareto_k > 0.7 in model 'model2'. It is recommended to set 'reloo = TRUE' in order to calculate the ELPD without the assumption that these observations are negligible. This will refit the model 2 times to compute the ELPDs for the problematic observations directly.
2: Found 1 observations with a pareto_k > 0.7 in model 'model3'. It is recommended to set 'reloo = TRUE' in order to calculate the ELPD without the assumption that these observations are negligible. This will refit the model 1 times to compute the ELPDs for the problematic observations directly.

Regression

We may be doing something fundamentally wrong here: we are treating dosage as a categorical variable, but it is really a quantitative variable. Maybe we should be doing a proper regression instead of coercing the data into an ANOVA design.
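One way a numeric dosage variable might be constructed from the categorical one (an assumption: the class data file may already contain a DosageNumber column, and the level codes "D0", "D15", "D30", "D60" are inferred from the regression output):

```r
# Strip the leading "D" to recover dosage as a number (hypothetical;
# assumes levels are coded "D0", "D15", "D30", "D60")
ATdata$DosageNumber <- as.numeric(sub("^D", "", as.character(ATdata$Dosage)))
```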

Frequentist regression

library(lme4)
print(lmer(CorrectResponses ~ DosageNumber + (1 | SubjectID), data = ATdata))

Linear mixed model fit by REML ['lmerMod']
Formula: CorrectResponses ~ DosageNumber + (1 | SubjectID)
Data: ATdata
REML criterion at convergence: 675.6007
Random effects:
Groups Name Std.Dev.
SubjectID (Intercept) 9.451
Residual 6.069
Number of obs: 96, groups: SubjectID, 24
Fixed Effects:
(Intercept) DosageNumber
39.64167 0.09421

Bayesian regression

library(brms)
model5 = brm(CorrectResponses ~ DosageNumber + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

(Compare with the lmer() output on the previous slide; the estimates agree closely.)

Family: gaussian
Links: mu = identity; sigma = identity
Formula: CorrectResponses ~ DosageNumber + (1 | SubjectID)
Data: ATdata (Number of observations: 96)
Samples: 3 chains, each with iter = 2000; warmup = 200; thin = 2; total post-warmup samples = 2700
Group-Level Effects:
~SubjectID (Number of levels: 24)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 9.69 1.59 7.11 13.25 1193 1.00
Population-Level Effects:
Intercept 39.38 2.18 35.04 43.62 902 1.00
DosageNumber 0.09 0.03 0.04 0.15 2394 1.00
Family Specific Parameters:
sigma 6.16 0.54 5.20 7.32 2010 1.00

Bayesian regression

# Re-run the ANOVA-style (categorical Dosage) model so it can be compared with model5
library(brms)
model1 = brm(CorrectResponses ~ Dosage + (1 |SubjectID), data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2 )

Bayesian ANOVA vs. regression

loo(model1, model5, reloo=TRUE)

I re-ran model1, so the LOO value is a bit different from what we had previously. I am not sure this kind of comparison is appropriate, because we are technically using different independent variables (it feels the same, though).
There is hardly any difference between the two models, which makes sense given the data.

LOOIC SE
model1 648.57 13.57
model5 648.24 14.02
model1 - model5 0.33 4.50

Activity

Find the possible priors for the linear regression model.
What would be reasonable priors?
Implement them and run the model.
Discuss.
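A possible starting point for the activity (a sketch; the model6 name and the normal(0, 1) prior are illustrations, not recommendations):

```r
# List the parameters of the quantitative-dosage model that accept priors
get_prior(CorrectResponses ~ DosageNumber + (1 | SubjectID), data = ATdata)

# Fit with an explicit, weakly informative prior on the slope
model6 <- brm(CorrectResponses ~ DosageNumber + (1 | SubjectID),
              data = ATdata, iter = 2000, warmup = 200, chains = 3, thin = 2,
              prior = prior(normal(0, 1), class = "b", coef = "DosageNumber"))
summary(model6)
```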

Conclusions

Dependent tests are pretty straightforward once you get the notation straight, and that notation is really just the notation of regression.
Something similar to contrasts is done by looking at the posteriors: no messy hypothesis testing.