Bayesian II Spring 2010
Major Issues in Phylogenetic BI
– Have we reached convergence?
– If so, do we have a large enough sample of the posterior?
Have we reached convergence?
– Look at the trace plots of the posterior probability (not reliable)
– Convergence diagnostics:
  – Average standard deviation of the split frequencies: compares the node frequencies between the independent runs (the closer to zero, the better, but it is not clear how small it should be)
  – Potential Scale Reduction Factor (PSRF): the closer to one, the better
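As a rough sketch of what the first diagnostic measures, the ASDSF can be computed from per-run split frequencies like this. This is a toy illustration with made-up numbers, assuming the frequencies have already been tabulated from the sampled trees; MrBayes' own calculation differs in details (e.g. which splits it includes).

```python
# Sketch (not MrBayes source): average standard deviation of split
# frequencies (ASDSF) across independent runs. Splits are represented as
# frozensets of taxon names; all numbers below are made up.
import statistics

def asdsf(split_freqs_per_run):
    """split_freqs_per_run: one dict per run, mapping split -> frequency.
    Returns the mean, over all splits seen in any run, of the standard
    deviation of that split's frequency across runs."""
    all_splits = set().union(*split_freqs_per_run)
    sds = [
        statistics.pstdev([run.get(s, 0.0) for run in split_freqs_per_run])
        for s in all_splits
    ]
    return sum(sds) / len(sds)

# Two hypothetical runs that agree on one split and disagree on the other
run1 = {frozenset({"A", "B"}): 0.90, frozenset({"C", "D"}): 0.60}
run2 = {frozenset({"A", "B"}): 0.90, frozenset({"C", "D"}): 0.40}
print(asdsf([run1, run2]))  # 0.05: pstdev of (0.60, 0.40) is 0.10, of (0.90, 0.90) is 0
```

Identical runs give an ASDSF of exactly zero; the more the runs disagree about split frequencies, the larger the value.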
Convergence Diagnostics
– By default, MrBayes performs two independent analyses starting from different random trees (mcmc nruns=2)
– The average standard deviation of clade frequencies is calculated and presented during the run (mcmc mcmcdiagn=yes diagnfreq=1000) and written to a file (.mcmc)
– The standard deviation of each clade frequency and the potential scale reduction for branch lengths are calculated with sumt
– The potential scale reduction for all substitution model parameters is calculated with sump
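The settings above correspond to a MrBayes command block along these lines (a minimal sketch: ngen is a placeholder value, not a recommendation, and the data file is assumed to have been loaded already):

```
begin mrbayes;
   [two independent runs, diagnostics printed every 1000 generations]
   mcmc nruns=2 ngen=1000000 mcmcdiagn=yes diagnfreq=1000;
   sumt;   [split-frequency SDs and branch-length PSRFs]
   sump;   [PSRFs for the substitution model parameters]
end;
```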
Have we reached convergence? PSRF (sump command)

Model parameter summaries over all 3 runs sampled in files "Ligia16SCOI28SGulfnoADLGGTRGbygene.nex.run1.p", "Ligia16SCOI28SGulfnoADLGGTRGbygene.nex.run2.p", etc. (Summaries are based on the total samples pooled from the 3 runs; the numeric values are omitted here.)

   Parameter    Mean    Variance    95% Cred. Interval (Lower, Upper)    Median    PSRF *
   TL{all}, r(A<->C){1}, r(A<->G){1}, r(A<->T){1}, r(C<->G){1}, r(C<->T){1}, r(G<->T){1},
   pi(A){1}, pi(C){1}, pi(G){1}, pi(T){1}, alpha{1}

* Convergence diagnostic (PSRF = potential scale reduction factor [Gelman and Rubin, 1992], uncorrected) should approach 1 as runs converge. The values may be unreliable if you have a small number of samples. PSRF should only be used as a rough guide to convergence, since not all of the assumptions that allow one to interpret it as a scale reduction factor are met in the phylogenetic context.
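The PSRF itself is simple to compute from the raw samples. Below is a minimal sketch of the uncorrected Gelman–Rubin statistic for a single parameter; the sample values are invented, and this ignores the burn-in handling and corrections that real implementations apply.

```python
# Sketch of the (uncorrected) Gelman & Rubin (1992) potential scale
# reduction factor for one parameter, from m runs of n samples each.
# Values approach 1 as the runs converge on the same distribution.
import statistics

def psrf(chains):
    """chains: list of equal-length lists of sampled values, one per run."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    b_over_n = statistics.variance(means)                      # between-chain component
    w = statistics.fmean(statistics.variance(c) for c in chains)  # mean within-chain variance
    var_plus = (n - 1) / n * w + b_over_n                      # pooled variance estimate
    return (var_plus / w) ** 0.5

# Two hypothetical chains sampling roughly the same distribution
chain1 = [0.1, 0.2, 0.15, 0.18, 0.22, 0.17]
chain2 = [0.12, 0.19, 0.16, 0.21, 0.14, 0.20]
print(round(psrf([chain1, chain2]), 3))  # just under 1: the chain means nearly coincide
```

Chains stuck on different parts of the posterior inflate the between-chain component, pushing the PSRF well above 1.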
Are We There Yet (AWTY)? Nylander, J. A. A. et al., Bioinformatics, doi:10.1093/bioinformatics/btm388. Copyright restrictions may apply.
Empirical Data: two independent runs, 300,000,000 generations; complex model with three partitions (by codon). The bad news (plotted in Tracer).
Empirical Data: two independent runs, 300,000,000 generations; complex model with three partitions (by codon). The good news (plotted in AWTY): splits were highly correlated between the two runs.
What caused the difference in posterior probabilities? Estimation of particular parameters.
Yes, we have reached convergence: do we have a large enough sample of the posterior?
– Long runs are better than short ones, but how long?
– Good mixing: “Examine the acceptance rates of the proposal mechanisms used in your analysis (output at the end of the run). The Metropolis proposals used by MrBayes work best when their acceptance rate is neither too low nor too high. A rough guide is to try to get them within the range of 10% to 70%.”
Acceptance Rates (MrBayes end-of-run output; numeric values omitted)

Analysis used some seconds of CPU time on processor 0
Likelihood of best state for "cold" chain of runs 1–4

Acceptance rates for the moves in the "cold" chain of run 1 (the probability with which the chain accepted proposed changes to each parameter):
   param. 1–3   (revmat)                       with Dirichlet proposal
   param. 4–6   (state frequencies)            with Dirichlet proposal
   param. 7–9   (gamma shape)                  with multiplier
   param. 10    (rate multiplier)              with Dirichlet proposal
   param. 11    (topology and branch lengths)  with extending TBR
   param. 11    (topology and branch lengths)  with LOCAL
[Figure: posterior probability plotted over parameter space; the distribution has distinct peaks corresponding to tree 1, tree 2, and tree 3]
Metropolis-coupled Markov chain Monte Carlo, a.k.a. MCMCMC, a.k.a. (MC)³
[Animation: a cold chain and a heated ("hot") chain explore the posterior in parallel; swaps of the chains' states are proposed periodically, some unsuccessful and some successful, letting the cold chain escape local peaks]
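The swap mechanism illustrated in the slides can be written down compactly. Below is a toy (MC)³ sampler with one cold and one heated chain on a made-up bimodal target; the target, tuning values, and swap schedule are all invented for illustration and have nothing to do with MrBayes' actual proposals.

```python
# Toy Metropolis-coupled MCMC ((MC)^3): a cold chain (beta = 1) and a hot
# chain (beta < 1) run in parallel; periodically a state swap is proposed.
# Only the cold chain is sampled. All tuning values are arbitrary.
import math, random

random.seed(1)

def log_target(x):
    # Made-up bimodal density: mixture of two normal bumps at -2 and +2
    return math.log(math.exp(-(x - 2) ** 2) + math.exp(-(x + 2) ** 2))

def mc3(n_gen=20000, step=0.5, beta_hot=0.3, swap_every=10):
    x = [0.0, 0.0]            # current states: [cold chain, hot chain]
    betas = [1.0, beta_hot]   # chain i targets p(x) ** betas[i]
    cold_samples = []
    for gen in range(n_gen):
        for i in (0, 1):      # one Metropolis update per chain per generation
            prop = x[i] + random.uniform(-step, step)
            if math.log(random.random()) < betas[i] * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        if gen % swap_every == 0:
            # Swap acceptance ratio reduces to
            # (beta_cold - beta_hot) * (logp(x_hot) - logp(x_cold))
            log_r = (betas[0] - betas[1]) * (log_target(x[1]) - log_target(x[0]))
            if math.log(random.random()) < log_r:
                x[0], x[1] = x[1], x[0]
        cold_samples.append(x[0])
    return cold_samples

samples = mc3()
# Heating flattens the valley for the hot chain, and successful swaps let
# the cold chain visit both modes (near -2 and +2)
print(min(samples), max(samples))
```

The flattened surface seen by the hot chain is what makes crossing the valley between peaks easy; a successful swap then hands that position to the cold chain.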
Improving Convergence (only if the convergence diagnostics indicate a problem!)
– Change the tuning parameters of the proposals to bring acceptance rates into the range 10% to 70%
– Propose changes to ‘difficult’ parameters more often
– Use different proposal mechanisms
– Change the heating temperature to bring the acceptance rate of swaps between adjacent chains into the range 10% to 70%
– Run the chain longer
– Increase the number of heated chains
– Make the model more realistic
[Figure: traces of sampled values against the target distribution]
– Too modest proposals: acceptance rate too high, poor mixing
– Too bold proposals: acceptance rate too low, poor mixing
– Moderately bold proposals: intermediate acceptance rate, good mixing
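The three regimes above are easy to reproduce with a toy Metropolis sampler; here a standard normal stands in for the posterior, and the three step sizes are arbitrary illustrations of "too modest", "moderate", and "too bold".

```python
# Effect of proposal boldness on the Metropolis acceptance rate, using a
# standard-normal target as a stand-in for the posterior. Tiny steps are
# almost always accepted but explore slowly; huge steps are mostly rejected.
import math, random

def acceptance_rate(step, n=50000, seed=0):
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # Metropolis ratio for the target exp(-x**2 / 2)
        if math.log(rng.random()) < (x * x - prop * prop) / 2:
            x = prop
            accepted += 1
    return accepted / n

for step in (0.05, 2.5, 50.0):
    print(step, round(acceptance_rate(step), 2))
```

The smallest step gives an acceptance rate near 1 (poor mixing), the largest one near 0 (also poor mixing), and the moderate step lands in between, in the spirit of the 10%–70% rule of thumb.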