1
Set Up for Instructor
MGH Display: Try setting your resolution to 1024 by 768.
Run PowerPoint. For most reliable start-up: start the laptop & projector before connecting them together; if necessary, reboot the laptop.
Run Firefox. Load the Psych 548 webpage.
Psych 548, Miyamoto, Win '17
2
Set Up for Students
Turn on your computer; log in.
Open a browser to the Psych 548 website (you can get it from MyUW).
Download the zip file p548.zip. Unzip the zip file. This process will create a subdirectory of your downloads directory. The files for today's class are in this directory or one of its subdirectories.
Psych 548, Miyamoto, Win '17
3
Bayesian Hierarchical Model: Many Theta's in Multiple Conditions
Lecture was not given on Monday 2/6/2017 because of the UW closure due to snow. A lecture that is largely the same as the present one will be given as lec06-2.p548.w17.pptm on Wednesday 2/8/2017.
Bayesian Hierarchical Model: Many Theta's in Multiple Conditions
P548: Bayesian Stats with Psych Applications
Instructor: John Miyamoto
02/06/2017: Lecture 06-1
Note: This PowerPoint presentation may contain macros that I wrote to help me create the slides. The macros aren’t needed to view the slides. You can disable or delete the macros without any change to the presentation.
4
Outline
Lecture probably ends here
Psych 548, Miyamoto, Aut ‘16
6
Next: View Progression of Models
(i) Model of Single Individual
(ii) Model of Multiple Individuals within a Single Group
(iii) Model of Multiple Groups, each containing Multiple Individuals
Psych 548, Miyamoto, Win '17
7
Figure 8.2: Bayesian Model for Flips of a Single Coin (Model of a Single Individual)
[Graphical model: parameter θ; data X from N flips]
θ ~ beta(1, 1)
X ~ binomial(θ, N)
Kruschke, Fig. 8.2, p. 196; LW, Fig. 2.1, p. 18
Psych 548, Miyamoto, Win '17
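A minimal sketch of this single-coin model in JAGS-style model syntax (the same syntax used for the full model file later in the lecture). The model block and the node names theta, X, and N follow the diagram but are illustrative, not taken from Kruschke's code.

  model {
    # Uniform beta prior on the coin's probability of heads
    theta ~ dbeta( 1.0, 1.0 )
    # Likelihood: X heads observed in N flips of the single coin
    X ~ dbin( theta, N )
  }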
8
Figure 9.7: J coins sampled from the same mint (Model of Multiple Individuals within a Single Group)
[Graphical model: group level κ, ω, a, b; coin level θs (s = 1, ..., J); data yij (i = 1, ..., n)]
Kruschke, Fig. 9.7
UW Psych 548, Miyamoto, Win '17
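A sketch of how this single-group model could be written in JAGS syntax, using the same mode/concentration (omega, kappa) parameterization that appears in the full model syntax later in the lecture. The priors on omega and kappa and the data names z[s] and N[s] are illustrative assumptions, not taken from the figure.

  model {
    # Group-level mode (omega) and concentration (kappa) of the mint
    omega ~ dbeta( 1.0, 1.0 )
    kappaMinusTwo ~ dgamma( 0.01, 0.01 )
    kappa <- kappaMinusTwo + 2
    # Convert mode/concentration to the beta shape parameters a and b
    a <- omega*(kappa - 2) + 1
    b <- (1 - omega)*(kappa - 2) + 1
    for ( s in 1:Nsubj ) {
      # Each coin (individual) s has its own bias theta[s]
      theta[s] ~ dbeta( a, b )
      # Likelihood: z[s] heads in N[s] flips for coin s
      z[s] ~ dbin( theta[s], N[s] )
    }
  }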
9
Figure 9.13 (ed. 2): J coins sampled from C mints (Model of Multiple Groups, each containing Multiple Individuals)
[Graphical model: top level ω, κ, A, B; condition level ωc, κc, ac, bc (c = 1, 2, 3); subject level θs (s = 1, ..., J); data yij (i = 1, ..., n)]
Kruschke, Fig. 9.13, p. 252
UW Psych 548, Miyamoto, Win '17
10
Progression of Models: (i) Model of Individual; (ii) Model of Multiple Individuals; (iii) Model of Multiple Groups of Individuals
[Three graphical models shown side by side:
(i) Model of One Individual: θ, X, N
(ii) Model of Multiple Individuals: ω, κ, a, b; θs (s = 1, ..., J); yij (i = 1, ..., n)
(iii) Model of Groups of Individuals: ω, κ, A, B; ωc, κc, ac, bc (c = 1, 2, 3); θs (s = 1, ..., J); yij (i = 1, ..., n)]
UW Psych 548, Miyamoto, Win '17
11
Simplified Version, LW Notation. Figure 9.13 (ed. 2): J coins sampled from C mints
[Two graphical models shown side by side: the original version with nodes ω, κ, A, B; ωc, κc, ac, bc (c = 1, 2, 3); θs (s = 1, ..., J); yij (i = 1, ..., n), and a simplified LW-notation version with nodes ω, κ; ωc, κc (c = 1, 2, 3); θs (s = 1, ..., J); yij (i = 1, ..., n), in which the ac, bc nodes are omitted]
UW Psych 548, Miyamoto, Win '17
12
Correspondence Between Model Syntax & Graphical Model
# Hyperpriors for omega0 and kappa0, the parameters of
# the hyperdistribution from which omega[cc] and kappa[cc]
# are sampled.
omega0 ~ dbeta( 1.0, 1.0 )
kappa0 <- kappaMinusTwo0 + 2
kappaMinusTwo0 ~ dgamma( 0.01, 0.01 )
# Compute the A0 and B0 parameters of the hyper beta distribution
A0 <- omega0*(kappa0 - 2) + 1
B0 <- (1 - omega0)*(kappa0 - 2) + 1
# Hyperprior for omega[cc] and kappa[cc],
# the condition-specific omega and kappa
for ( mm in 1:Ncat ) {
  omega[mm] ~ dbeta( A0, B0 )
  kappaMinusTwo[mm] ~ dgamma( 0.01, 0.01 )
  kappa[mm] <- kappaMinusTwo[mm] + 2
}
# Prior for theta[j]
for ( j in 1:Nsubj ) {
  aSubj[j] <- omega[p[j]]*(kappa[p[j]]-2)+1
  bSubj[j] <- (1-omega[p[j]])*(kappa[p[j]]-2)+1
  theta[j] ~ dbeta( aSubj[j], bSubj[j] )
}
# Likelihood function
for ( i in 1:Nsubj ) {
  z[i] ~ dbin( theta[i], N[i] )
}
} # close bracket for the model syntax
[Graphical model on the right: ω, κ, A, B; ωc, κc, ac, bc (c = 1, 2, 3); θs (s = 1, ..., J); yij (i = 1, ..., n)]
Amu and Bmu are not represented as random variables in the diagram on the right because they are treated as constants in this model. They are not sampled from a hyperhyperdistribution.
Skappa & Rkappa are not represented as random variables in the diagram on the right because they are treated as constants in this model. Specifically, Skappa = pow(10,2)/pow(10,2) = 1 and Rkappa = 10/pow(10,2) = 0.1. Note that kappa is a random variable that is sampled from a gamma( Skappa, Rkappa ) distribution.
UW Psych 548, Miyamoto, Win '17
14
Basic Principles of Bayesian Estimation
Once you have computed the posterior distribution over the parameters of a statistical model, it is easy to compute the posterior distribution of any parameter that is a function of the model parameters.
Example: If you have computed the posterior distribution of the mode ω and size (concentration) κ of a beta distribution, then you can easily compute the posterior distribution of the mean μ of the beta distribution, because μ is a function of ω and κ: the shape parameters are a = ω(κ - 2) + 1 and b = (1 - ω)(κ - 2) + 1, so μ = a / (a + b).
Psych 548, Miyamoto, Win '17
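For instance, the conversion can be written directly into the JAGS model as derived nodes and monitored like any other parameter. This is a sketch using the same shape-parameter conversion as the model syntax above; the node names aBeta, bBeta, and mu are illustrative, not from the lecture code.

  # Derived quantities: posterior distribution of the beta mean mu
  aBeta <- omega*(kappa - 2) + 1
  bBeta <- (1 - omega)*(kappa - 2) + 1
  mu <- aBeta / (aBeta + bBeta)   # monitor mu to obtain its posterior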
15
Basic Principles of Bayesian Estimation
Once you have computed the posterior distribution over the parameters of a statistical model, it is easy to compute the posterior distribution of any parameter that is a function of the model parameters.
Bayesian estimates do not require any special theory of multiple comparisons.
Example: If you have computed the posterior distribution of the means μ1, μ2, ..., μn of multiple groups, then the posterior distribution of any comparison:
μ1 - μ2, μ1 - μ3, ..., μn-2 - μn, μn-1 - μn
μ1 / μ2, μ1 / μ3, ..., μn-2 / μn, μn-1 / μn
(μ1 - μ2) / (μ3 - μ4), etc.
can be interpreted directly without any adjustments for multiple comparisons.
Psych 548, Miyamoto, Win '17
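In JAGS such comparisons can be added to the model as derived nodes, each of which then has its own posterior distribution. A sketch with hypothetical node names, assuming group means mu[1], mu[2], mu[3] are already defined in the model:

  # Derived comparison nodes; monitor them like any other parameter
  diff12 <- mu[1] - mu[2]
  diff13 <- mu[1] - mu[3]
  ratio12 <- mu[1] / mu[2]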
17
Basic Principles of Bayesian Estimation
Once you have computed the posterior distribution over the parameters of a statistical model, it is easy to compute the posterior distribution of any parameter that is a function of the model parameters.
Bayesian estimates do not require any special theory of multiple comparisons.
Psych 548, Miyamoto, Win '17
18
Region of Practical Equivalence (ROPE)
Kruschke Chapter 12 (p. 335): "A region of practical equivalence (ROPE) indicates a small range of parameter values that are considered to be practically equivalent to the null value for purposes of the particular application. ... [For example,] if we are assessing the efficacy of a drug versus a placebo, we might only consider using the drug if it improves the probability of cure by at least 5 percentage points. Thus, the ROPE on the difference of cure probabilities could have limits of ±0.05."
Psych 548, Miyamoto, Win '17
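One way to use such a ROPE is to add an indicator node to the JAGS model whose posterior mean is the probability that the drug-minus-placebo difference lies inside the ROPE. This is a sketch, not code from the lecture; diffCure, thetaDrug, thetaPlacebo, and inROPE are hypothetical names.

  # Difference in cure probabilities, drug minus placebo
  diffCure <- thetaDrug - thetaPlacebo
  # step(x) equals 1 when x >= 0, so inROPE is 1 exactly when -0.05 <= diffCure <= 0.05
  inROPE <- step( diffCure + 0.05 ) * step( 0.05 - diffCure )
  # The posterior mean of the monitored inROPE node estimates P( diffCure lies in the ROPE )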
20
Set Up for Instructor
Turn off your cell phone. Close web browsers if they are not needed.
Classroom Support Services (CSS), 35 Kane Hall,
If the display is odd, try setting your resolution to 1024 by 768.
Run PowerPoint. For most reliable start-up: start the laptop & projector before connecting them together; if necessary, reboot the laptop.
Psych 548, Miyamoto, Aut ‘16