
1 Introduction to Bayesian statistics Yves Moreau

2 Overview The Cox-Jaynes axioms Bayes’ rule Probabilistic models Maximum likelihood Maximum a posteriori Bayesian inference Multinomial and Dirichlet distributions Estimation of frequency matrices Pseudocounts Dirichlet mixture

3 The Cox-Jaynes axioms and Bayes’ rule

4 Probability vs. belief What is a probability? Frequentist point of view: probabilities are what frequency counts (coin, die) and histograms (height of people) measure; such definitions are somewhat circular because of their dependency on the Central Limit Theorem. Measure-theoretic point of view: probabilities satisfy Kolmogorov’s σ-algebra axioms; this rigorous definition fits well within measure and integration theory, but the definition is ad hoc, designed to fit within that framework.

5 Bayesian point of view Probabilities are models of the uncertainty regarding propositions within a given domain. Induction vs. deduction: Deduction: IF (A ⇒ B AND A = TRUE) THEN B = TRUE. Induction: IF (A ⇒ B AND B = TRUE) THEN A becomes more plausible. Probabilities satisfy Bayes’ rule.

6 The Cox-Jaynes axioms The Cox-Jaynes axioms allow the buildup of a large probabilistic framework with minimal assumptions. First, some concepts: A is a proposition (A is TRUE or FALSE); D is a domain, the information available about the current situation; BELIEF: Bel(A = TRUE | D) is the belief that we have regarding the proposition A given the domain knowledge D.

7 Second, some assumptions. 1. Suppose we can compare beliefs: Bel(A | D) > Bel(B | D) means that A is more plausible than B given D, and suppose the comparison is transitive. Then we have an ordering relation, so Bel(A | D) can be represented by a number.

8 2. Suppose there exists a fixed relation between the belief in a proposition and the belief in the negation of this proposition: Bel(NOT A | D) = f(Bel(A | D)). 3. Suppose there exists a fixed relation between, on the one hand, the belief in the conjunction of two propositions and, on the other hand, the belief in the first proposition and the belief in the second proposition given the first one: Bel(A, B | D) = g(Bel(A | D), Bel(B | A, D)).

9 Bayes’ rule THEN it can be shown (after rescaling of the beliefs) that beliefs behave as probabilities and satisfy Bayes’ rule: P(A | B, D) = P(B | A, D) P(A | D) / P(B | D). If we accept the Cox-Jaynes axioms, we can always apply Bayes’ rule, independently of the specific definition of the probabilities.

10 Bayes’ rule Bayes’ rule will be our main tool for building probabilistic models and estimating them. Bayes’ rule holds not only for statements (TRUE/FALSE) but also for any random variables (discrete or continuous). Bayes’ rule holds for specific realizations of the random variables as well as for the whole distribution.

11 Importance of the domain D The domain D is a flexible concept that encapsulates the background information relevant to the problem. It is important to set up the problem within the right domain. Example: diagnosis of Tay-Sachs disease, a rare disease that appears more frequently among Ashkenazi Jews. With the same symptoms, the probability of the disease will be smaller in a hospital in Brussels than in Mount Sinai Hospital in New York. A model built from all the patients in the world will not necessarily perform better.
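
As a hedged illustration of this point, the sketch below applies Bayes’ rule to the same symptoms under two different priors. Every number and the helper function posterior_disease are invented for illustration; they are not real Tay-Sachs statistics and not part of the original slides.

```python
# Hedged sketch: how the domain D (via the prior) changes a diagnostic posterior.
# All numbers below are made up for illustration.

def posterior_disease(prior, p_symptoms_given_disease, p_symptoms_given_healthy):
    """Bayes' rule: P(disease | symptoms, D)."""
    evidence = (p_symptoms_given_disease * prior
                + p_symptoms_given_healthy * (1.0 - prior))
    return p_symptoms_given_disease * prior / evidence

# Same symptoms (same likelihoods), different domains (different priors)
for domain, prior in [("Brussels hospital", 1e-6), ("Mount Sinai hospital", 1e-4)]:
    post = posterior_disease(prior, p_symptoms_given_disease=0.9,
                             p_symptoms_given_healthy=0.01)
    print(f"{domain}: P(disease | symptoms) = {post:.2e}")
```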

12 Probabilistic models and inference

13 Probabilistic models We have a domain D. We have observations D. We have a model M with parameters θ. Example 1. Domain D: the genome of a given organism. Data D: a DNA sequence S = ’ACCTGATCACCCT’. Model M: the sequence is generated by a discrete distribution over the alphabet {A, C, G, T}. Parameters θ: the symbol probabilities θ_A, θ_C, θ_G, θ_T.

14 Example 2. Domain D: all European people. Data D: the heights of people from a given group. Model M: the height is normally distributed N(m, σ). Parameters θ: the mean m and the standard deviation σ.

15 Generative models It is often possible to set up a model of the likelihood of the data. For example, for the DNA sequence: P(S | θ, M) = ∏_i θ_{s_i}, the product of the probabilities of the individual symbols. More sophisticated models are possible: HMMs, Gibbs sampling for motif finding, Bayesian networks. We want to find the model that best describes our observations.
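
A minimal sketch of this likelihood computation in Python, assuming example values for θ (they are not taken from the slides):

```python
# Minimal sketch of the generative model for a DNA sequence: each symbol is drawn
# independently from a discrete distribution theta over {A, C, G, T}.
import math

theta = {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2}  # assumed example parameters

def log_likelihood(sequence, theta):
    """log P(S | theta, M) = sum_i log theta_{s_i}."""
    return sum(math.log(theta[s]) for s in sequence)

S = "ACCTGATCACCCT"
print(log_likelihood(S, theta))
```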

16 Maximum likelihood Maximum likelihood (ML): θ_ML = argmax_θ P(D | θ, M). Consistent: if the observations were generated by the model M with parameters θ*, then θ_ML will converge to θ* when the number of observations goes to infinity. Note that the data might not be generated by any instance of the model. If the data set is small, there might be a large difference between θ_ML and θ*.

17 Maximum a posteriori probability Maximum a posteriori probability (MAP): θ_MAP = argmax_θ P(θ | D, M). Bayes’ rule: P(θ | D, M) = P(D | θ, M) P(θ | M) / P(D | M). Thus the posterior combines the likelihood of the data with the prior (the a priori knowledge); the evidence P(D | M) plays no role in the optimization over θ.
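
A hedged sketch contrasting ML and MAP on a tiny coin example (K = 2), using a grid search so that only the definitions above are assumed; the data and the Beta(5, 5) prior are invented:

```python
# Hedged sketch: ML vs. MAP for a coin, found by a simple grid search.
import numpy as np

heads, tails = 3, 1                 # small data set D
alpha, beta = 5.0, 5.0              # assumed Beta(5, 5) prior: prior belief in a fair coin

thetas = np.linspace(1e-3, 1 - 1e-3, 1000)
log_lik = heads * np.log(thetas) + tails * np.log(1 - thetas)
log_prior = (alpha - 1) * np.log(thetas) + (beta - 1) * np.log(1 - thetas)

theta_ml = thetas[np.argmax(log_lik)]               # argmax of P(D | theta, M)
theta_map = thetas[np.argmax(log_lik + log_prior)]  # argmax of P(D | theta, M) P(theta | M)

print(f"ML estimate:  {theta_ml:.3f}")   # close to 3/4
print(f"MAP estimate: {theta_map:.3f}")  # pulled toward 1/2 by the prior
```

Note that the evidence P(D | M) is a constant with respect to θ, which is why it can simply be dropped from the quantity being maximized.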

18 Posterior mean estimate θ_PME = ∫ θ P(θ | D, M) dθ, the mean of the parameter under the posterior distribution.

19 Distributions over parameters Let us look carefully at P(θ | M) (or at P(θ | D, M)). P(θ | M) is a probability distribution over the PARAMETERS. We have to handle both distributions over observations and distributions over parameters at the same time. Example: the distribution of heights P(D | θ, M) (heights around 150 to 200 cm) and the prior P(θ | M) over the mean height (150 to 200 cm) and over the standard deviation of the height (5 to 15 cm).

20 Bayesian inference If we want to update the probability of the parameters with new observations D: 1. Choose a reasonable prior P(θ | M). 2. Add the information from the data via the likelihood P(D | θ, M). 3. Get the updated distribution of the parameters, the posterior P(θ | D, M) = P(D | θ, M) P(θ | M) / P(D | M). (We often work with logarithms.)

21 Bayesian inference Example: [plots of the distribution over the mean height (150 to 200 cm): the prior, the posterior after observing 100 Belgian men, and the posterior after also observing 100 Dutch men]
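
A hedged sketch of this kind of update for a Normal model of heights with known standard deviation; the data are simulated and all numbers are assumptions for illustration, not the values behind the original figure:

```python
# Hedged sketch: conjugate Normal update of the mean height with two successive samples.
import numpy as np

sigma = 7.0                 # assumed known standard deviation of individual heights (cm)
mu0, tau0 = 175.0, 10.0     # prior on the mean: Normal(175, 10^2)

def update_mean(mu_prior, tau_prior, data, sigma):
    """Conjugate Normal update of the mean (variance of individuals assumed known)."""
    n = len(data)
    prec = 1 / tau_prior**2 + n / sigma**2           # posterior precision
    mu_post = (mu_prior / tau_prior**2 + np.sum(data) / sigma**2) / prec
    return mu_post, np.sqrt(1 / prec)

rng = np.random.default_rng(0)
belgian = rng.normal(178, sigma, 100)    # "100 Belgian men" (simulated)
dutch = rng.normal(183, sigma, 100)      # "100 Dutch men" (simulated)

mu1, tau1 = update_mean(mu0, tau0, belgian, sigma)
mu2, tau2 = update_mean(mu1, tau1, dutch, sigma)
print(f"after Belgians: mean {mu1:.1f} +/- {tau1:.2f}")
print(f"after Dutch:    mean {mu2:.1f} +/- {tau2:.2f}")
```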

22 Marginalization A major technique for working with probabilistic models is to introduce or remove a variable through marginalization wherever appropriate. If a variable Y can take only k mutually exclusive outcomes y_1, ..., y_k, we have P(X | D) = ∑_{i=1}^{k} P(X, Y = y_i | D). If the variables are continuous, P(x | D) = ∫ P(x, y | D) dy.
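
A minimal sketch of the discrete case, summing a nuisance variable Y out of a joint distribution represented as a table (the values are invented):

```python
# Marginalization: P(X) = sum_k P(X, Y = y_k), computed from a joint probability table.
import numpy as np

# rows index X (2 outcomes), columns index Y (3 mutually exclusive outcomes)
joint = np.array([[0.10, 0.20, 0.15],
                  [0.25, 0.05, 0.25]])   # P(X, Y), sums to 1

p_x = joint.sum(axis=1)   # P(X) = sum over Y
p_y = joint.sum(axis=0)   # P(Y) = sum over X
print(p_x, p_y)
```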

23 Multinomial and Dirichlet distributions

24 Multinomial distribution Discrete distribution: K independent outcomes with probabilities θ_i. Examples: die (K = 6), DNA sequence (K = 4), amino acid sequence (K = 20). For K = 2 we have a Bernoulli variable (giving rise to a binomial distribution).

25 The multinomial distribution gives the probability of the numbers of times n_i that the different outcomes were observed in N trials: P(n_1, ..., n_K | θ, M) = N! / (n_1! ... n_K!) ∏_i θ_i^{n_i}. The multinomial distribution is the natural distribution for the modeling of biological sequences.
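
A small sketch evaluating this probability for the example sequence, with the same assumed θ as above:

```python
# Probability of a vector of nucleotide counts under a multinomial model.
from math import factorial, prod

theta = [0.2, 0.3, 0.3, 0.2]        # assumed probabilities for A, C, G, T
counts = [3, 6, 1, 3]               # counts of A, C, G, T in 'ACCTGATCACCCT'
N = sum(counts)

coef = factorial(N) // prod(factorial(n) for n in counts)   # multinomial coefficient
p = coef * prod(t**n for t, n in zip(theta, counts))
print(p)
```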

26 Dirichlet distribution Distribution over the region of the parameter space (the simplex) where θ_i ≥ 0 and ∑_i θ_i = 1. The distribution has parameters α = (α_1, ..., α_K). The Dirichlet distribution gives the probability of θ. The distribution is like a ‘dice factory’: each draw from the Dirichlet produces a set of die probabilities θ.

27 Dirichlet distribution D(θ | α) = (1 / Z(α)) ∏_i θ_i^{α_i − 1}

28 Dirichlet distribution Z(α) is a normalization factor such that ∫ D(θ | α) dθ = 1; explicitly Z(α) = ∏_i Γ(α_i) / Γ(∑_i α_i), where Γ is the gamma function, the generalization of the factorial to real numbers (Γ(n) = (n − 1)! for integer n). The Dirichlet distribution is the natural prior for sequence analysis because this distribution is conjugate to the multinomial distribution, which means that if we have a Dirichlet prior and we update this prior with multinomial observations, the posterior will also have the form of a Dirichlet distribution. Computationally very attractive.
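
A hedged sketch evaluating the density D(θ | α) with the normalization above; the α values are example choices, not taken from the slides:

```python
# Dirichlet density D(theta | alpha) = (1/Z(alpha)) * prod_i theta_i^(alpha_i - 1),
# with Z(alpha) = prod_i Gamma(alpha_i) / Gamma(sum_i alpha_i).
from math import gamma, prod

def dirichlet_pdf(theta, alpha):
    Z = prod(gamma(a) for a in alpha) / gamma(sum(alpha))
    return prod(t**(a - 1) for t, a in zip(theta, alpha)) / Z

alpha = [2.0, 2.0, 2.0, 2.0]          # assumed pseudocount-like parameters for A, C, G, T
print(dirichlet_pdf([0.25, 0.25, 0.25, 0.25], alpha))
print(dirichlet_pdf([0.7, 0.1, 0.1, 0.1], alpha))
```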


30 Estimation of frequency matrices Estimation on the basis of counts, e.g., the Position-Specific Scoring Matrix in PSI-BLAST. Example: matrix model of a local motif from five aligned sites: GACGTG, CTCGAG, CGCGTG, AACGTG, CACGTG. Count the number of instances of each symbol in each column.

31 If there are many aligned sites (N large), we can estimate the frequencies as θ_i = n_i / N, the observed counts divided by the total number of observations. This is the maximum likelihood estimate for θ.
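
A short sketch of this count-based estimate applied to the aligned sites of slide 30 (assuming the columns line up as written):

```python
# Maximum likelihood frequency matrix for the aligned motif: theta_hat = n / N per column.
from collections import Counter

sites = ["GACGTG", "CTCGAG", "CGCGTG", "AACGTG", "CACGTG"]
alphabet = "ACGT"
N = len(sites)

for j in range(len(sites[0])):
    counts = Counter(site[j] for site in sites)
    freqs = {a: counts[a] / N for a in alphabet}   # ML estimate for column j+1
    print(j + 1, freqs)
```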

32 Proof We want to show that θ_i = n_i / N maximizes the likelihood P(D | θ, M) = ∏_i θ_i^{n_i}. This is equivalent to maximizing the log-likelihood ∑_i n_i log θ_i subject to the constraint ∑_i θ_i = 1. Further, introducing a Lagrange multiplier λ and setting the derivative of ∑_i n_i log θ_i + λ(1 − ∑_i θ_i) to zero gives n_i / θ_i = λ, so θ_i = n_i / λ; the constraint then gives λ = ∑_i n_i = N, hence θ_i = n_i / N.

33 Pseudocounts If we have a limited number of counts, the maximum likelihood estimate will not be reliable (e.g., for symbols not observed in the data). In such a situation, we can combine the observations with prior knowledge. Suppose we use a Dirichlet prior D(θ | α). Let us compute the Bayesian update.

34 Bayesian update P(θ | D, M) = P(D | θ, M) P(θ | M) / P(D | M) ∝ ∏_i θ_i^{n_i} · ∏_i θ_i^{α_i − 1} = ∏_i θ_i^{n_i + α_i − 1}, so the posterior is again a Dirichlet distribution, now with parameters n_i + α_i. The constant factors cancel because both distributions are normalized; the new normalization integral is Z(n + α). Computation of the posterior mean estimate then gives θ_i = (n_i + α_i) / (N + ∑_k α_k).

35 Pseudocounts The prior contributes to the estimation through pseudo-observations: the α_i act as pseudocounts added to the observed counts n_i. If few observations are available, the prior plays an important role. If many observations are available, the pseudocounts play a negligible role.
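
A minimal sketch of the pseudocount (posterior mean) estimate for one motif column, assuming a uniform Dirichlet prior with α_i = 1:

```python
# Pseudocount estimate theta_i = (n_i + alpha_i) / (N + sum_k alpha_k) for one motif column.
counts = {"A": 0, "C": 5, "G": 0, "T": 0}          # the all-C third column of the motif above
alpha = {"A": 1.0, "C": 1.0, "G": 1.0, "T": 1.0}   # assumed uniform Dirichlet prior

N = sum(counts.values())
A = sum(alpha.values())
theta = {a: (counts[a] + alpha[a]) / (N + A) for a in counts}
print(theta)   # A, G and T get nonzero probability despite zero observations
```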

36 Dirichlet mixture Sometimes the observations are generated by a heterogeneous process (e.g., hydrophobic vs. hydrophilic domains in proteins). In such situations, we should use different priors depending on the context. But we do not necessarily know the context beforehand. A possibility is to use a Dirichlet mixture: the frequency parameter θ can be generated from m different sources S, each with its own Dirichlet parameters α^k, so that P(θ | M) = ∑_k q_k D(θ | α^k) with q_k = P(S = k).

37 Dirichlet mixture Posterior: P(θ | D, M) = ∑_k P(S = k | D, M) D(θ | n + α^k), again a mixture of Dirichlet distributions. The component weights are obtained via Bayes’ rule: P(S = k | D, M) ∝ q_k P(D | S = k, M) ∝ q_k Z(n + α^k) / Z(α^k) (the multinomial coefficient is common to all components and cancels).

38 Dirichlet mixture Posterior mean estimate: θ_i = ∑_k P(S = k | D, M) (n_i + α_i^k) / (N + ∑_j α_j^k). The different components of the Dirichlet mixture are first treated as separate pseudocount estimates. These estimates are then combined with a weight depending on the likelihood of the Dirichlet component.
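
A hedged sketch of this mixture estimate for the same all-C column; the two components, their mixture weights, and the helper log_Z are invented for illustration and are not the components used in the presentation:

```python
# Dirichlet-mixture posterior mean: each component k contributes its own pseudocount
# estimate, weighted by the component posterior P(S = k | D).
from math import lgamma, log, exp

def log_Z(alpha):
    """log Z(alpha) = sum_i log Gamma(alpha_i) - log Gamma(sum_i alpha_i)."""
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

counts = [0, 5, 0, 0]                     # observed counts n for A, C, G, T (all-C column)
components = [                            # (mixture weight q_k, Dirichlet alpha^k) -- invented
    (0.5, [1.0, 1.0, 1.0, 1.0]),          # roughly uniform source
    (0.5, [0.2, 5.0, 5.0, 0.2]),          # C/G-rich source
]

# P(S = k | D) is proportional to q_k * Z(n + alpha^k) / Z(alpha^k)
log_w = [log(q) + log_Z([n + a for n, a in zip(counts, alpha)]) - log_Z(alpha)
         for q, alpha in components]
m = max(log_w)
w = [exp(lw - m) for lw in log_w]
total = sum(w)
w = [x / total for x in w]                # normalized component posteriors P(S = k | D)

# Posterior mean: weighted combination of each component's pseudocount estimate
N = sum(counts)
theta = [sum(w[k] * (counts[i] + alpha[i]) / (N + sum(alpha))
             for k, (q, alpha) in enumerate(components))
         for i in range(len(counts))]
print(w)
print([round(t, 3) for t in theta])
```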

39 Summary The Cox-Jaynes axioms Bayes’ rule Probabilistic models Maximum likelihood Maximum a posteriori Bayesian inference Multinomial and Dirichlet distributions Estimation of frequency matrices Pseudocounts Dirichlet mixture

