
1 Bayes Theorem

2 Prior Probabilities On the way to a party, you ask “Has Karl already had too many beers?” Your prior probabilities are 20% yes, 80% no.

3 Prior Odds, Omega The ratio of the two prior probabilities: Ω = P(B)/P(not B) = .20/.80 = .25. What new data would make you revise the priors?

4 Likelihood Ratio, LR If I have had too many beers, there is a 30% likelihood that I will act awfully. If I have not had too many beers, the likelihood is only 3%. The likelihood ratio is LR = P(A|B)/P(A|not B) = .30/.03 = 10.

5 Multiplication Rule of Probability P(A and B) = P(A)P(B|A) = P(B)P(A|B). Thus P(B|A) = P(A and B)/P(A) = P(B)P(A|B)/P(A).

6 Addition Rule of Probability Since B and Not B are mutually exclusive, P(A) = P(A and B) + P(A and not B). Substituting this for the denominator of our previous expression, P(B|A) = P(B)P(A|B) / [P(A and B) + P(A and not B)].

7 Multiplication Rule Again P(A and B) = P(B)P(A|B) and P(A and not B) = P(not B)P(A|not B). Now substitute the right-hand expressions in our previous expression, which was P(B|A) = P(B)P(A|B) / [P(A and B) + P(A and not B)].

8 Bayes Theorem Yielding P(B|A) = P(B)P(A|B) / [P(B)P(A|B) + P(not B)P(A|not B)].
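
To check the algebra, here is a minimal Python sketch of the theorem as just derived; the function and variable names are mine, not the slides'.

```python
# Minimal sketch of Bayes theorem as derived above:
# P(B|A) = P(B)P(A|B) / [P(B)P(A|B) + P(not B)P(A|not B)]
def bayes_posterior(prior_b, p_a_given_b, p_a_given_not_b):
    numerator = prior_b * p_a_given_b
    denominator = numerator + (1 - prior_b) * p_a_given_not_b
    return numerator / denominator

# Karl's numbers: P(B) = .20, P(A|B) = .30, P(A|not B) = .03
print(round(bayes_posterior(0.20, 0.30, 0.03), 3))  # 0.714
```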

9 Revising the Prior Probability You arrive at the party. Karl is behaving awfully. You revise your prior probability that Karl has had too many beers, obtaining a posterior probability.

10 A = behaving awfully, B = had too many beers Prior probabilities: P(B) = .20, P(not B) = .80. Likelihoods: P(A|B) = .30, P(A|not B) = .03.

11 Posterior Odds Given that Karl is behaving awfully, the probability that he has had too many beers is revised to P(B|A) = (.20)(.30) / [(.20)(.30) + (.80)(.03)] = .06/.084 = .714. And the odds are revised from .25 to .714/.286 = 2.5.

12 Bayes Theorem Restated The posterior odds = the product of the prior odds and the likelihood ratio: 2.5 = .25 × 10.
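
The odds form is easy to verify numerically; a quick sketch using the slides' values:

```python
# Posterior odds = prior odds x likelihood ratio.
prior_odds = 0.20 / 0.80                         # 0.25
likelihood_ratio = 0.30 / 0.03                   # 10.0
posterior_odds = prior_odds * likelihood_ratio   # 2.5
# Converting the odds back to a probability recovers .714 from slide 11.
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, round(posterior_prob, 3))  # 2.5 0.714
```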

13 Bayesian Hypothesis Testing H₀: μ_IQ = 100. H₁: μ_IQ = 110. P(H₀) and P(H₁) are prior probabilities. I’ll set both equal to .5. D is the obtained data. P(D|H₀) and P(D|H₁) are the likelihoods. P(D|H₀) is a bit like the p value from classical hypothesis testing.

14 Compute Test Statistics D: Sample of 25 scores, M = 107. Assume σ = 15, so σ_M = 15/√25 = 3. Compute z = (M − μ)/σ_M for each hypothesis. For H₀, z = (107 − 100)/3 = 2.33. For H₁, z = (107 − 110)/3 = −1. Assume that z is normally distributed.

15 Obtain the Likelihoods and P(D) For each hypothesis, the likelihood is .5 times the probability density of z; we consider the null and the alternative equally likely. P(D|H₀) = .5(.0264) = .0132. P(D|H₁) = .5(.2420) = .1210. Notice that P(D), the denominator of the ratio in Bayes theorem, is their sum: P(D) = .0132 + .1210 = .1342.

16 Calculate Posterior Probabilities P(H₀|D) = .0132/.1342 = .098 and P(H₁|D) = .1210/.1342 = .9023. P(H₀|D) is what many researchers mistakenly think the traditional p value is. The traditional p is P(D|H₀).
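
A sketch of slides 13–16 in Python, assuming SciPy is available for the normal densities (the variable names are mine):

```python
from scipy.stats import norm

mu0, mu1 = 100, 110        # hypothesized means under H0 and H1
prior0 = prior1 = 0.5      # equal prior probabilities
M, sem = 107, 3            # sample mean of 25 scores; sigma_M = 15/sqrt(25)

# Likelihoods: prior times the normal density of z under each hypothesis.
like0 = prior0 * norm.pdf((M - mu0) / sem)   # .5 * density(2.33) ~ .0132
like1 = prior1 * norm.pdf((M - mu1) / sem)   # .5 * density(-1)   ~ .1210
p_d = like0 + like1                          # P(D) ~ .1342

print(round(like0 / p_d, 3))  # P(H0|D) ~ 0.098
print(round(like1 / p_d, 3))  # P(H1|D) ~ 0.902
```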

17 Calculate Posterior Odds .9023/.098 = 9.21. Given our data, the alternative hypothesis is more than 9 times more likely than the null hypothesis. Is this enough to persuade you to reject the null? No? Then let us gather more data.

18 Calculate Likelihoods and P(D) For a new sample of 25, M = 106. z = (106 − 100)/3 = 2 under the null, probability density .0540; z = (106 − 110)/3 = −1.33 under the alternative, probability density .1647. P(D|H₀) is .0540/2 = .0270. P(D|H₁) is .1647/2 = .0824. The .098 and .9023 are posterior probabilities from the previous analysis, prior probabilities in the new analysis.

19 Revise the Probabilities, Again P(H₀|D) = .098(.0270) / [.098(.0270) + .9023(.0824)] = .0344. With the posterior probability of the null at .0344, we are likely comfortable rejecting it.

20 Newly Revised Posterior Odds .9656/.0344 = 28.1. The alternative is more than 28 times more likely than the null.
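
The two-stage updating in slides 18–20 can be sketched the same way; the first posteriors enter as the new priors (again assuming SciPy; any constant factor in the likelihoods cancels in the ratio):

```python
from scipy.stats import norm

prior0, prior1 = 0.098, 0.9023   # posteriors from the first sample
M, sem = 106, 3                  # second sample of 25 scores

dens0 = norm.pdf((M - 100) / sem)   # z = 2,     density ~ .0540
dens1 = norm.pdf((M - 110) / sem)   # z = -1.33, density ~ .1647

p_d = prior0 * dens0 + prior1 * dens1
post0 = prior0 * dens0 / p_d
print(round(post0, 4))               # ~ 0.0345 (slides: .0344, from rounded densities)
print(round((1 - post0) / post0, 1)) # posterior odds ~ 28 (slides: 28.1)
```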

21 The Alternative Hypothesis Note that the alternative hypothesis here was exact, μ = 110. How do we set it? It could be the prediction of an alternative theory. We could make it μ = the value most likely given the observed data (the sample mean).

22 P(H₀|D) and P(D|H₀) P(H₀|D) is the probability that naïve researchers think they have when they compute a p value. What they really have is P(D or more extreme|H₀). So why don’t more researchers use Bayesian stats to get P(H₀|D)? Traditionalists are uncomfortable with the subjectivity involved in setting prior probabilities.

23 Bayesian Confidence Intervals Parameters are thought of as random variables rather than constant in value. The distribution of a random variable represents our knowledge about what its true value may be. The wider that distribution, the greater our ignorance.

24 Precision (prc) The prc is the inverse of the variance of the distribution of the parameter. Thus, the greater the prc, the more we know about the parameter. For means, SEM = s/√N, so SEM² = s²/N, and the inverse of SEM² is N/s² = precision.

25 Priors: Informative or Non-informative We may think of the prior distribution of the parameter as noninformative (all possible values being equally likely; for example, a uniform distribution from 0 to 1, or from −∞ to +∞) or as informative (some values more likely than others; for example, a normal distribution with a certain mean).

26 Posterior Distribution of the Parameter When we receive new data, we revise the prior distribution of the parameter. We can construct a confidence interval from the posterior distribution. Example: We want to estimate μ.

27 Estimating μ We confess absolute ignorance about the value of μ, but are willing to assume a normal distribution for the parameter. We sample 100 scores. M = 107, s² = 200. The precision = the inverse of the squared standard error = N/s² = 100/200 = .5.

28 95% Bayesian Confidence Interval M ± 1.96·SEM = 107 ± 1.96·√(1/.5) = 107 ± 2.77 = [104.23, 109.77]. This is identical to the traditional CI.
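
A sketch of slides 27–28, assuming a noninformative prior so the posterior for μ is normal with mean M and variance 1/precision:

```python
import math

N, M, s2 = 100, 107, 200
prc = N / s2                       # precision = 0.5 (slide 27)
sem = math.sqrt(1 / prc)           # ~ 1.414

lo, hi = M - 1.96 * sem, M + 1.96 * sem
print(round(lo, 2), round(hi, 2))  # ~ 104.23 109.77
```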

29 New Data Become Available N = 81, M = 106, s² = 243. Precision = 81/243 = 1/3 = prc_sample. Our prior distribution, the posterior distribution from the first analysis, had M = 107, precision = 1/2. The new posterior distribution will be characterized by a weighted combination of the prior distribution and the new data.

30 Revised μ The revised mean is the precision-weighted average of the prior mean and the sample mean: μ = [.5(107) + (1/3)(106)] / (.5 + 1/3) = 88.83/.8333 = 106.6.

31 Revised SEM² Revised precision = sum of prior and sample precisions = .5 + .3333 = .8333. Revised SEM² = inverse of revised precision = 1/.8333 = 1.2.

32 Revised Confidence Interval 106.6 ± 1.96·√1.2 = 106.6 ± 2.15 = [104.45, 108.75].
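
Finally, a sketch of the whole update in slides 29–32: the revised mean is the precision-weighted average of the prior and sample means, and the revised precision is the sum of the two precisions.

```python
import math

prc_prior, m_prior = 0.5, 107          # posterior from the first analysis
prc_sample, m_sample = 81 / 243, 106   # new data: N = 81, s2 = 243

prc_post = prc_prior + prc_sample      # ~ .8333
m_post = (prc_prior * m_prior + prc_sample * m_sample) / prc_post  # 106.6

sem = math.sqrt(1 / prc_post)          # sqrt(1.2) ~ 1.095
lo, hi = m_post - 1.96 * sem, m_post + 1.96 * sem
print(round(m_post, 1), round(lo, 2), round(hi, 2))  # 106.6 104.45 108.75
```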

