PSY 626: Bayesian Statistics for Psychological Science
Another example
Greg Francis
Fall 2016, Purdue University
Facial feedback
Is your emotional state influenced by your facial muscles?
If I ask you to smile, you may report feeling happier
  But this could just be because you guess I want you to report feeling happier
  Or because intentional smiling is associated with feeling happier
You can ask people to use smiling muscles without them realizing it
Facial feedback
Within-subjects design (n = 21)
Judge the "happiness" of a piece of abstract art while:
  Holding a pen in your teeth (smiling)
  Holding a pen in your lips (frowning/pouting)
  No pen
11 trials for each condition
Different art on each trial
Data
The HappinessRating is a number between 0 (no happiness) and 100 (lots of happiness)
The facial feedback hypothesis predicts that mean HappinessRating should be higher when the pen is held in the teeth and lower when the pen is held in the lips
The file FacialFeedback.csv contains all the data
Loading data
# load the full data file
FFdata <- read.csv(file="FacialFeedback.csv", header=TRUE, stringsAsFactors=FALSE)
# load the rethinking library
library(rethinking)
# Set up dummy variables, one per condition
FFdata$PenInTeeth <- ifelse(FFdata$Condition == "PenInTeeth", 1, 0)
FFdata$NoPen <- ifelse(FFdata$Condition == "NoPen", 1, 0)
FFdata$PenInLips <- ifelse(FFdata$Condition == "PenInLips", 1, 0)
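As a quick sanity check (a minimal sketch, using only the columns referenced on these slides), you can count rows and compute sample means per condition; with 21 participants and 11 trials each, a complete data set would have 231 rows per condition:

# Rows per condition (231 each if every participant completed all 11 trials)
table(FFdata$Condition)
# Sample mean rating per condition, for later comparison with the model estimates
aggregate(HappinessRating ~ Condition, data=FFdata, FUN=mean)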
Set up model
The dummy variables set up different values for mu (a1, a2, or a3)

FFmodel1 <- map(
  alist(
    HappinessRating ~ dnorm(mu, sigma),
    mu <- a1*PenInTeeth + a2*NoPen + a3*PenInLips,
    a1 ~ dnorm(50, 100),
    a2 ~ dnorm(50, 100),
    a3 ~ dnorm(50, 100),
    sigma ~ dunif(0, 50)
  ),
  data=FFdata
)
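To see the fitted values, you can print the map object (which produces the MAP summary shown on the next slide) or call precis() from the rethinking library for estimates with standard deviations and intervals (a minimal sketch):

# Print the MAP fit: formula, MAP values, and log-likelihood
print(FFmodel1)
# precis() adds standard deviations and (by default) 89% intervals for each parameter
precis(FFmodel1)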
Model Results
Maximum a posteriori (MAP) model fit

Formula:
HappinessRating ~ dnorm(mu, sigma)
mu <- a1 * PenInTeeth + a2 * NoPen + a3 * PenInLips
a1 ~ dnorm(50, 100)
a2 ~ dnorm(50, 100)
a3 ~ dnorm(50, 100)
sigma ~ dunif(0, 50)

MAP values:
      a1      a2      a3   sigma

Log-likelihood:
Alternative model set up
# load the full data file
FFdata <- read.csv(file="FacialFeedback.csv", header=TRUE, stringsAsFactors=FALSE)
# load the rethinking library
library(rethinking)
# Index variable to indicate condition
FFdata$ConditionIndex <- 0*FFdata$Trial
FFdata$ConditionIndex[FFdata$Condition == "PenInTeeth"] <- 1
FFdata$ConditionIndex[FFdata$Condition == "NoPen"] <- 2
FFdata$ConditionIndex[FFdata$Condition == "PenInLips"] <- 3
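A quick check that the recoding worked (a small sketch): each condition label should map onto exactly one index value.

# Cross-tabulate condition labels against the new index; each row should have a single nonzero cell
table(FFdata$Condition, FFdata$ConditionIndex)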
Alternative model set up

FFmodel2 <- map(
  alist(
    HappinessRating ~ dnorm(mu, sigma),
    mu <- a[ConditionIndex],
    a[ConditionIndex] ~ dnorm(50, 100),
    sigma ~ dunif(0, 50)
  ),
  data=FFdata
)

Mathematically, this is the same as the previous model, but we use ConditionIndex to automatically indicate different a[ ] values (means) for the different conditions
Each a[ ] value has the same prior applied to it
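One way to check the "same model" claim is to line up the MAP estimates from the two parameterizations; coeftab() from the rethinking library tabulates estimates across fits (a minimal sketch, assuming both models have already been fit as above):

# a1, a2, a3 from FFmodel1 should match a[1], a[2], a[3] from FFmodel2
# (up to small numerical differences in the optimization)
coeftab(FFmodel1, FFmodel2)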
Alternative model set up
Maximum a posteriori (MAP) model fit

Formula:
HappinessRating ~ dnorm(mu, sigma)
mu <- a[ConditionIndex]
a[ConditionIndex] ~ dnorm(50, 100)
sigma ~ dunif(0, 50)

MAP values:
    a[1]    a[2]    a[3]   sigma

Log-likelihood:

Previous (equivalent) model:
      a1      a2      a3   sigma
Prediction of means
89% HPDI derived from the posterior distributions

# Define a sequence of ConditionIndex values for which to compute predicted means
Condition.seq <- seq(from=1, to=3, by=1)
# use link to compute mu for each sample from the posterior and for each value in Condition.seq
mu <- link(FFmodel2, data=data.frame(ConditionIndex=Condition.seq))
mu.mean <- apply(mu, 2, mean)
mu.HPDI <- apply(mu, 2, HPDI, prob=0.89)
# Plot the MAP means (same idea as the abline done previously from the linear regression coefficients)
dev.new()
plot(HappinessRating ~ ConditionIndex, data=FFdata, xlab="Condition", xaxt="n")
axis(side=1, at=c(1, 2, 3), labels=c("Pen In Teeth", "No Pen", "Pen In Lips"))
lines(Condition.seq, mu.mean, col=col.alpha("green", 1))
shade(mu.HPDI, Condition.seq, col=col.alpha("green", 0.4))
Prediction of ratings
# generate many sample HappinessRating scores for Condition.seq using the model
sim.Happiness <- sim(FFmodel2, data=list(ConditionIndex=Condition.seq))
# Identify the limits of the middle 89% of sampled values for each condition (PI is a function from the rethinking library)
Happiness.PI <- apply(sim.Happiness, 2, PI, prob=0.89)
# Plot with shading for the prediction uncertainty of individual scores
dev.new()
plot(HappinessRating ~ ConditionIndex, data=FFdata, xlab="Condition", xaxt="n")
axis(side=1, at=c(1, 2, 3), labels=c("Pen In Teeth", "No Pen", "Pen In Lips"))
lines(Condition.seq, mu.mean)      # MAP line for means
shade(mu.HPDI, Condition.seq)      # shaded HPDI for estimates of the means
shade(Happiness.PI, Condition.seq) # shaded prediction interval for simulated HappinessRating values
Null model

FFmodel <- map(
  alist(
    HappinessRating ~ dnorm(mu, sigma),
    mu <- a,
    a ~ dnorm(50, 50),
    sigma ~ dunif(0, 100)
  ),
  data=FFdata
)
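Because the null model has a single mean for all conditions, its MAP estimate of a should land near the overall sample mean, and sigma near the overall sample sd. A small sketch to check this against the results on the next slide:

# Overall sample mean and sd of the ratings, ignoring condition
mean(FFdata$HappinessRating)
sd(FFdata$HappinessRating)
# MAP estimates and intervals for a and sigma
precis(FFmodel)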
Null model
Maximum a posteriori (MAP) model fit

Formula:
HappinessRating ~ dnorm(mu, sigma)
mu <- a
a ~ dnorm(50, 50)
sigma ~ dunif(0, 100)

MAP values:
       a   sigma

Log-likelihood:
More flexible model
Different means and sd's for each condition

FFmodel3 <- map(
  alist(
    HappinessRating ~ dnorm(mu, sigma[ConditionIndex]),  # sd is indexed so each condition gets its own sigma
    mu <- a[ConditionIndex],
    a[ConditionIndex] ~ dnorm(50, 50),
    sigma[ConditionIndex] ~ dunif(0, 100)
  ),
  data=FFdata
)
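By default precis() hides vector parameters, so to see all three a[ ] and sigma[ ] estimates you need depth=2 (a minimal sketch):

# depth=2 displays the individual elements of the vector parameters a and sigma
precis(FFmodel3, depth=2)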
More flexible model
Maximum a posteriori (MAP) model fit

Formula:
HappinessRating ~ dnorm(mu, sigma[ConditionIndex])
mu <- a[ConditionIndex]
a[ConditionIndex] ~ dnorm(50, 50)
sigma[ConditionIndex] ~ dunif(0, 100)

MAP values:
    a[1]    a[2]    a[3]  sigma[1]  sigma[2]  sigma[3]

Log-likelihood:
Comparing models
plot(coeftab(FFmodel, FFmodel2, FFmodel3))
Comparing models
print(compare(FFmodel, FFmodel2, FFmodel3))

The output lists each model with its WAIC, pWAIC, dWAIC, weight, SE, and dSE
The null model (FFmodel) and the model with different means but the same sd (FFmodel2) score nearly the same
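compare() is built from per-model WAIC values; to see them for a single fit, WAIC() from the rethinking library returns the criterion along with the effective number of parameters and a standard error (a minimal sketch; the exact output format depends on the package version):

# WAIC, pWAIC, and standard error for one model
WAIC(FFmodel2)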
Model uncertainty
When you think about predicting means, you should consider not just a single model but the uncertainty across the models
The Akaike weights estimate this uncertainty
Take the prediction from each model and weight it by that model's Akaike weight

FF.ensemble <- ensemble(FFmodel, FFmodel2, FFmodel3, data=data.frame(ConditionIndex=Condition.seq))
dev.new()
plot(HappinessRating ~ ConditionIndex, data=FFdata, xlab="Condition", xaxt="n")
axis(side=1, at=c(1, 2, 3), labels=c("Pen In Teeth", "No Pen", "Pen In Lips"))
lines(Condition.seq, mu.mean)      # MAP line for means
shade(mu.HPDI, Condition.seq)      # shaded HPDI for estimates of the means
shade(Happiness.PI, Condition.seq) # shaded prediction interval for simulated HappinessRating values
mu.PI <- apply(FF.ensemble$link, 2, PI)
shade(mu.PI, Condition.seq, col=col.alpha("green", 0.2))
What does it all mean?
Relative to the variability in the HappinessRatings, the differences between the condition means are pretty small (a rough check follows below)
There is not much difference between the null model and a model with different means
With equal sd's, they are expected to do about equally well at predicting future data
What do you do?
  Gather more data
  Develop a better model
  Give up
  Use what you've got
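One rough way to see how small the differences are (a sketch, assuming FFmodel2 was fit as on the earlier slides): express the largest gap between condition means in units of the within-condition standard deviation.

# Posterior samples for the condition means (columns of post$a) and for sigma
post <- extract.samples(FFmodel2)
mu.hat <- apply(post$a, 2, mean)
sigma.hat <- mean(post$sigma)
# Largest difference between condition means, expressed in sd units
(max(mu.hat) - min(mu.hat)) / sigma.hat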
What are the limitations?
How would you improve these models?