1
Bayesian inference: calculate the model parameters that, given the observed data, have the greatest posterior probability
2
Thomas Bayes: Bayesian methods were invented in the 18th century, but their application in phylogenetics dates from 1996. Thomas Bayes? (1701?-1761?)
3
Bayes’ theorem: Bayes’ theorem links a conditional probability to its inverse:

Prob(H|D) = Prob(H) · Prob(D|H) / Σ_H Prob(H) · Prob(D|H)
4
Bayes’ theorem: in the case of two alternative hypotheses, the theorem

Prob(H|D) = Prob(H) · Prob(D|H) / Σ_H Prob(H) · Prob(D|H)

can be written as

Prob(H1|D) = Prob(H1) · Prob(D|H1) / [Prob(H1) · Prob(D|H1) + Prob(H2) · Prob(D|H2)]
5
Bayes’ theorem: Bayes for smarties.
Observed data D: a handful of 5 smarties, 4 orange and 1 blue.
H1 = D came from a mainly orange bag (¾ orange, ¼ blue); H2 = D came from a mainly blue bag (¼ orange, ¾ blue).
Prob(D|H1) = ¾ · ¾ · ¾ · ¾ · ¼ · 5 = 405/1024
Prob(D|H2) = ¼ · ¼ · ¼ · ¼ · ¾ · 5 = 15/1024
Prob(H1) = ½, Prob(H2) = ½
Prob(H1|D) = Prob(H1) · Prob(D|H1) / [Prob(H1) · Prob(D|H1) + Prob(H2) · Prob(D|H2)]
= (½ · 405/1024) / (½ · 405/1024 + ½ · 15/1024) = 0.964
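A minimal sketch of this calculation in Python; the counts, bag compositions and priors are the ones given on the slide above:

```python
from math import comb

# Likelihood of drawing 4 orange and 1 blue smartie from each bag
p_d_given_h1 = comb(5, 1) * (3/4)**4 * (1/4)   # mainly orange bag -> 405/1024
p_d_given_h2 = comb(5, 1) * (1/4)**4 * (3/4)   # mainly blue bag   -> 15/1024

prior_h1 = prior_h2 = 0.5                      # equal prior belief in either bag

# Bayes' theorem: posterior = prior * likelihood / normalizing constant
norm = prior_h1 * p_d_given_h1 + prior_h2 * p_d_given_h2
posterior_h1 = prior_h1 * p_d_given_h1 / norm

print(round(posterior_h1, 3))                  # 0.964
```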
6
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

            positive test result   negative test result
ill         true positive          false negative
healthy     false positive         true negative

            positive test result   negative test result
ill         99%                    1%
healthy     0.1%                   99.9%

Using the data only, P(ill | positive test result) ≈ 0.99
8
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

                   positive test result   negative test result
ill                99%                    1%
healthy            0.1%                   99.9%

A-priori knowledge: 0.1% of the population (n = 100 000) is ill.

                   positive test result   negative test result
ill (100)          99                     1
healthy (99 900)   100                    99 800

With a-priori knowledge, 99 of the 199 persons with a positive test result are ill:
P(ill | positive result) ≈ 50%
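A small sketch of the same calculation, using the sensitivity (99%), false-positive rate (0.1%) and prevalence (0.1%) quoted on the slide:

```python
sensitivity = 0.99        # P(positive | ill)
false_positive = 0.001    # P(positive | healthy)
prevalence = 0.001        # a-priori knowledge: 0.1% of the population is ill

# Bayes' theorem: P(ill | positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_ill_given_positive = sensitivity * prevalence / p_positive

print(round(p_ill_given_positive, 2))   # ~0.50
```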
9
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.
10
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.
11
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

Behind door 1   Behind door 2   Behind door 3   Result if staying at door 1   Result if switching to door offered
Car             Goat            Goat            Car                           Goat
Goat            Car             Goat            Goat                          Car
Goat            Goat            Car             Goat                          Car
12
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

C = number of the door hiding the car
S = number of the door selected by the player
H = number of the door opened by the host

P(C=c | H=h, S=s) = P(H=h | C=c, S=s) · P(C=c | S=s) / P(H=h | S=s)

the probability of finding the car behind door c, after the player’s original selection and the host’s opening of one door.
13
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

C = number of the door hiding the car
S = number of the door selected by the player
H = number of the door opened by the host

P(C=c | H=h, S=s) = P(H=h | C=c, S=s) · P(C=c | S=s) / Σ_{c=1..3} P(H=h | C=c, S=s) · P(C=c | S=s)

The host’s behaviour depends on the candidate’s selection and on where the car is.
14
Bayes’ theorem: a-priori knowledge can affect one’s conclusions.

C = number of the door hiding the car
S = number of the door selected by the player
H = number of the door opened by the host

P(C=2 | H=3, S=1) = (1 · 1/3) / (1/2 · 1/3 + 1 · 1/3 + 0 · 1/3) = 2/3

so switching to door 2 wins the car with probability 2/3, while staying with door 1 wins with probability only 1/3.
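A short sketch that reproduces this posterior numerically; the door numbering (player picks door 1, host opens door 3) follows the slide:

```python
# Posterior over the door hiding the car, given S=1 (player's pick) and H=3 (host opens).
# The host never opens the player's door and never reveals the car.
prior = {1: 1/3, 2: 1/3, 3: 1/3}     # P(C=c | S=1)
p_host_opens_3 = {1: 1/2,            # car behind 1: host may open 2 or 3
                  2: 1.0,            # car behind 2: host must open 3
                  3: 0.0}            # car behind 3: host never reveals the car

norm = sum(p_host_opens_3[c] * prior[c] for c in prior)
posterior = {c: p_host_opens_3[c] * prior[c] / norm for c in prior}

print(posterior)   # {1: 0.333..., 2: 0.666..., 3: 0.0} -> switching to door 2 is better
```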
15
Bayes’ theorem: Bayes’ theorem is used to combine a prior probability with the likelihood to produce a posterior probability.

Prob(H|D) = Prob(H) · Prob(D|H) / Σ_H Prob(H) · Prob(D|H)

posterior probability = prior probability × likelihood / normalizing constant
16
Bayesian inference of trees: in BI, the players are the tree topology and branch lengths, the evolutionary model, and the (sequence) data.
17
Bayesian inference of trees: the posterior probability of a tree is calculated from the prior and the likelihood.

Prob(tree, parameters | data) = Prob(tree, parameters) · Prob(data | tree, parameters) / Prob(data)

posterior probability of a tree = prior probability of a tree × likelihood / normalizing constant, where the normalizing constant Prob(data) requires summation over all possible branch lengths and model parameter values.
18
Bayesian inference of trees: the prior probability of a tree is often not known and therefore all trees are considered equally probable.
For 5 taxa (A–E) there are 15 possible unrooted topologies; under a flat prior each receives prior probability 1/15.
19
Bayesian inference of trees: the prior probability of a tree is often not known and therefore all trees are considered equally probable.

Prob(Tree i) (prior probability) × Prob(Data | Tree i) (likelihood) → Prob(Tree i | Data) (posterior probability)
20
Bayesian inference of trees: but prior knowledge of taxonomy could suggest other prior probabilities.
With the clade (CDE) constrained, only the 3 of the 15 topologies that contain this clade receive prior probability 1/3 each; the remaining 12 topologies receive prior probability 0.
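A hypothetical sketch of how the choice of prior changes the posterior over topologies; the likelihood values below are invented purely for illustration, and the first three indices are taken to stand for the three (CDE)-compatible trees:

```python
# 15 unrooted topologies for taxa A-E; trees 0-2 are assumed to contain the clade (C,D,E).
n_trees = 15
likelihood = [0.002] * n_trees
likelihood[0] = 0.010          # suppose tree 0 fits the data best (invented value)

def posterior(prior):
    joint = [p * l for p, l in zip(prior, likelihood)]
    norm = sum(joint)
    return [j / norm for j in joint]

flat_prior = [1 / n_trees] * n_trees               # all trees equally probable
constrained_prior = [1/3, 1/3, 1/3] + [0.0] * 12   # only (CDE) trees allowed

print(round(posterior(flat_prior)[0], 3))          # posterior of tree 0 under the flat prior
print(round(posterior(constrained_prior)[0], 3))   # higher: prior mass is shared by only 3 trees
```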
21
Bayesian inference of trees: BI requires summation over all possible trees … which is impossible to do analytically.

Prob(tree, parameters | data) = Prob(tree, parameters) · Prob(data | tree, parameters) / Prob(data)

The denominator requires summation over all possible trees, branch lengths and model parameter values.
22
Bayesian inference of trees: but Markov chain Monte Carlo allows approximating the posterior probability.
1. Start at a random point
[figure: posterior probability density over parameter space, with peaks corresponding to tree 1, tree 2 and tree 3]
23
Bayesian inference of trees: but Markov chain Monte Carlo allows approximating the posterior probability.
1. Start at a random point
2. Make a small random move
3. Calculate the posterior density ratio r = new state / old state
[figure: a small move proposed from point 1 to point 2 in parameter space]
24
Bayesian inference of trees: but Markov chain Monte Carlo allows approximating the posterior probability.
1. Start at a random point
2. Make a small random move
3. Calculate the posterior density ratio r = new state / old state
4. If r > 1, always accept the move
[figure: an uphill move (r > 1) is always accepted]
25
Bayesian inference of trees: but Markov chain Monte Carlo allows approximating the posterior probability.
1. Start at a random point
2. Make a small random move
3. Calculate the posterior density ratio r = new state / old state
4. If r > 1, always accept the move; if r < 1, accept the move with probability r
[figure: a slightly downhill move is perhaps accepted]
26
Bayesian inference of trees: but Markov chain Monte Carlo allows approximating the posterior probability.
1. Start at a random point
2. Make a small random move
3. Calculate the posterior density ratio r = new state / old state
4. If r > 1, always accept the move; if r < 1, accept the move with probability r
[figure: a strongly downhill move is rarely accepted]
27
Bayesian inference of trees: the proportion of time that the MCMC chain spends in a particular parameter region is an estimate of that region’s posterior probability.
1. Start at a random point
2. Make a small random move
3. Calculate the posterior density ratio r = new state / old state
4. If r > 1, always accept the move; if r < 1, accept the move with probability r
5. Go to step 2
[figure: the chain spends 20% of its time around tree 1, 48% around tree 2 and 32% around tree 3]
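A minimal Metropolis sketch of steps 1–5, run on a toy one-dimensional target; the three-peak density and the proposal width are invented for illustration (in real phylogenetic MCMC the moves change topology, branch lengths and model parameters):

```python
import math, random

def unnormalized_posterior(x):
    # Toy target: a mixture of three peaks standing in for tree 1, tree 2 and tree 3,
    # with relative weights 0.20, 0.48 and 0.32.
    return sum(w * math.exp(-(x - m) ** 2 / 0.08)
               for w, m in [(0.20, 1.0), (0.48, 2.0), (0.32, 3.0)])

random.seed(1)
x = random.uniform(0.0, 4.0)           # 1. start at a random point
samples = []
for _ in range(200_000):
    x_new = x + random.gauss(0.0, 0.3)                              # 2. small random move
    r = unnormalized_posterior(x_new) / unnormalized_posterior(x)   # 3. density ratio
    if r >= 1 or random.random() < r:  # 4. accept always if uphill, with prob r if downhill
        x = x_new
    samples.append(x)                  # 5. go to step 2

# Proportion of time spent near each peak approximates its posterior probability.
for peak in (1.0, 2.0, 3.0):
    frac = sum(abs(s - peak) < 0.5 for s in samples) / len(samples)
    print(f"region around {peak}: {frac:.2f}")   # roughly 0.20 / 0.48 / 0.32
```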
28
Bayesian inference of trees: Metropolis-coupled Markov chain Monte Carlo speeds up the search.
cold chain: P(tree | data)
hot, hotter and hottest chains: P(tree | data)^β with 0 < β < 1
The cold chain sees the full, peaked posterior surface; the hotter the chain, the flatter its surface.
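A rough sketch of the heating idea, reusing a toy three-peak posterior like the one in the MCMC sketch above; the β values are an assumption, not values from the slide:

```python
import math

# Toy unnormalized posterior (three peaks standing in for three trees).
def posterior(x):
    return sum(w * math.exp(-(x - m) ** 2 / 0.08)
               for w, m in [(0.20, 1.0), (0.48, 2.0), (0.32, 3.0)])

# Heated chains explore a flattened surface: posterior(x) ** beta, with 0 < beta < 1.
betas = [1.0, 0.5, 0.25, 0.125]          # cold, hot, hotter, hottest (assumed values)

def heated(x, beta):
    return posterior(x) ** beta

# A proposed state swap between chains i and j is accepted with probability min(1, r):
def swap_ratio(x_i, x_j, beta_i, beta_j):
    return (heated(x_j, beta_i) * heated(x_i, beta_j)) / \
           (heated(x_i, beta_i) * heated(x_j, beta_j))

print(round(swap_ratio(2.0, 0.5, 1.0, 0.125), 4))  # hot chain far from a peak: swap unlikely
```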
29
Bayesian inference of trees: Metropolis-coupled Markov chain Monte Carlo speeds up the search.
[figure: a cold scout stuck on a local optimum; a hot scout signalling a better spot: “Hey! Over here!”]