A Brief Introduction to Bayesian Inference
Robert Van Dine
"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes." (Laplace, 1814)
A Brief History
Bayes' theorem was developed by the Reverend Thomas Bayes (1702-1761); however, it was not published until after his death. The idea was not well known until it was independently rediscovered and published by Pierre-Simon Laplace in 1774, one of the first mathematicians to apply probabilistic ideas to scientific inquiry. In 1814 Laplace published A Philosophical Essay on Probabilities, which developed the Bayesian interpretation of probability more thoroughly; for this reason some refer to Bayesian inference as Laplacian inference.
A Brief History, Continued
In 1939 Harold Jeffreys's Theory of Probability revived interest in Bayesian inference. During World War II, Alan Turing used an early application of Bayesian methods in his work to decode the German Enigma machine. In 1946 Richard Cox showed that the rules of Bayesian inference have a well-formulated axiomatic basis and that it is the only inferential approach that is logically consistent.
Some Definitions
Prior: the distribution of the parameter that is assumed before any data are observed; the prior captures our knowledge and belief about the parameter.
Evidence: the marginal likelihood of the data.
Posterior: the distribution of the parameter after taking into account the observed data.
Overview of Bayesian Inference
In Bayesian inference, the parameter of interest is considered a random variable rather than a fixed value. The rules of probability are used to make direct inferences about the parameter, and probability statements about the parameter are interpreted as "degree of belief." Bayes' theorem is used to revise our beliefs about the parameter after getting the data: it expresses how a subjective degree of belief should rationally change to account for new evidence.
Derivation of Bayes' Theorem
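The derivation follows from the definitions above. Writing the joint distribution of the parameter $\theta$ and the data $D$ in two ways,

$$P(\theta \mid D)\, P(D) = P(\theta, D) = P(D \mid \theta)\, P(\theta),$$

and dividing through by $P(D)$ gives Bayes' theorem:

$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}, \qquad P(D) = \int P(D \mid \theta)\, P(\theta)\, d\theta.$$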
Interpretation of Bayes' Theorem
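In the terms defined earlier, each factor in the theorem has a name, and since the evidence $P(D)$ does not depend on $\theta$, the result is often summarized as "posterior is proportional to likelihood times prior":

$$\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}, \qquad P(\theta \mid D) \propto P(D \mid \theta)\, P(\theta).$$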
Example: Estimating a Binomial Parameter
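As a concrete setting for the remaining slides: if we observe $k$ successes in $n$ independent trials with unknown success probability $\theta$, the likelihood is binomial,

$$P(k \mid \theta) = \binom{n}{k}\, \theta^{k} (1 - \theta)^{n-k},$$

and the goal is the posterior distribution of $\theta$.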
Review: The Beta Distribution
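For reference, the Beta$(a, b)$ density on $[0, 1]$ is

$$p(\theta \mid a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\, \Gamma(b)}\, \theta^{a-1} (1 - \theta)^{b-1},$$

with mean $a/(a+b)$ and, for $a, b > 1$, mode $(a-1)/(a+b-2)$.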
The Beta Prior
Because the Beta prior is conjugate to the binomial likelihood, updating is simple: add the number of successes to a and the number of failures to b to obtain the posterior.
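The update rule is a one-line calculation: multiplying the Beta$(a, b)$ prior by the binomial likelihood and dropping factors that do not involve $\theta$,

$$p(\theta \mid k) \propto \theta^{k} (1 - \theta)^{n-k} \cdot \theta^{a-1} (1 - \theta)^{b-1} = \theta^{(a+k)-1} (1 - \theta)^{(b+n-k)-1},$$

which is the kernel of a Beta$(a+k,\ b+n-k)$ distribution.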
How to Choose a Prior
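Standard guidance for choosing a Beta prior, in the spirit of the Bolstad text cited at the end: a Beta$(a, b)$ prior acts like $a + b$ pseudo-observations ($a$ successes and $b$ failures), so the prior mean $a/(a+b)$ should match one's best guess at the parameter, and the total $a + b$ should reflect how strongly that guess is held. For example, a prior guess of 0.25 held with the weight of 20 observations gives $a = 5$, $b = 15$; Beta$(1, 1)$, the uniform distribution, expresses no prior preference.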
Flavors of the Beta Distribution
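A short R snippet illustrates how differently shaped members of the Beta family can be; the particular (a, b) pairs here are illustrative choices, not necessarily those plotted on the original slide:

# Plot several "flavors" of the Beta distribution on common axes.
theta <- seq(0.001, 0.999, length.out = 500)
plot(theta, dbeta(theta, 1, 1), type = "l", ylim = c(0, 3),
     xlab = "theta", ylab = "density")
lines(theta, dbeta(theta, 0.5, 0.5), lty = 2)   # Jeffreys prior: U-shaped
lines(theta, dbeta(theta, 2, 2),     lty = 3)   # symmetric, peaked at 0.5
lines(theta, dbeta(theta, 5, 2),     lty = 4)   # skewed toward large theta
legend("topleft", lty = 1:4,
       legend = c("Beta(1,1)", "Beta(0.5,0.5)", "Beta(2,2)", "Beta(5,2)"))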
The BernBeta Function
A snippet of the R code: the BernBeta function takes a Beta prior and a data vector as arguments and produces plots of the prior, likelihood, and posterior distributions.
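A minimal sketch of such a function, a stand-in rather than Kruschke's original code (which ships with his book); it assumes the prior is passed as a vector of two Beta shape parameters and the data as a vector of 0s and 1s:

# Minimal BernBeta-style function: plot the prior, likelihood, and
# posterior for a Bernoulli parameter under a Beta prior, and return
# the posterior shape parameters.
BernBeta <- function(priorShape, dataVec) {
  a <- priorShape[1]
  b <- priorShape[2]
  k <- sum(dataVec)                       # number of successes (1s)
  n <- length(dataVec)                    # number of trials
  theta <- seq(0.001, 0.999, by = 0.001)
  prior      <- dbeta(theta, a, b)
  likelihood <- theta^k * (1 - theta)^(n - k)
  posterior  <- dbeta(theta, a + k, b + n - k)   # conjugate update
  par(mfrow = c(3, 1))                    # three stacked panels
  plot(theta, prior,      type = "l", ylab = "density",    main = "Prior")
  plot(theta, likelihood, type = "l", ylab = "likelihood", main = "Likelihood")
  plot(theta, posterior,  type = "l", ylab = "density",    main = "Posterior")
  c(a + k, b + n - k)                     # posterior shape parameters
}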
Output of BernBeta
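A hypothetical call of the sketch above, with a uniform Beta(1, 1) prior and data showing three successes in four trials:

post <- BernBeta(c(1, 1), c(1, 1, 0, 1))   # draws the three-panel figure
post                                       # posterior shape: Beta(4, 2)

The output is a three-panel figure showing how the flat prior is reshaped by the data into the posterior.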
Example 2: Ned and Fred
Suppose two NBA scouts, Ned and Fred, are scouting player A and player B, respectively, for the NBA draft. They would like to know the players' shooting percentages on jump shots off the pick and roll, an ability that is very important in the NBA but is not an officially kept statistic. Suppose further that the true (unmeasurable) abilities of player A and player B are identical: both are 40% jump shooters off the pick and roll. Because of some unknown biases, however, Ned and Fred have very different prior beliefs.
Ned and Fred: NBA Scouts
Ned believes player A is a 25% shooter and Fred believes player B is a 60% shooter. In reality both players are 40% shooters, and they average 5 such shots per game. Ned observes player A for 10 games and Fred observes player B for 10 games. Using Bayesian reasoning to update their beliefs, how much will their prior beliefs affect their conclusions about the players?
The post.Demo Function
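The post.Demo code is not shown; the following is a hypothetical sketch of the simulation it describes. The prior strengths are assumptions, since the slides give only the prior means (0.25 for Ned, 0.60 for Fred):

# Simulate 10 games of 5 pick-and-roll jump shots from a true 40% shooter,
# then give each scout the conjugate Beta update from his own prior.
set.seed(1)
shots <- rbinom(10 * 5, size = 1, prob = 0.40)
k <- sum(shots)                       # made shots
n <- length(shots)                    # attempts

nedPrior  <- c(5, 15)   # prior mean 0.25; strength a + b = 20 is assumed
fredPrior <- c(12, 8)   # prior mean 0.60; strength a + b = 20 is assumed

nedPost  <- nedPrior  + c(k, n - k)   # add successes to a, failures to b
fredPost <- fredPrior + c(k, n - k)

nedPost[1] / sum(nedPost)             # Ned's posterior mean
fredPost[1] / sum(fredPost)           # Fred's posterior mean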
[Output of post.Demo: side-by-side posterior plots for Ned and Fred]
Inference
The posterior distribution summarizes our belief about the parameter after looking at the data. According to the Bayesian point of view, inferences about the parameter are drawn from the posterior distribution; they are conditional on the sample that actually occurred. Frequentist inferences about the parameter involve probabilities calculated from the sampling distribution based on all possible samples that could have occurred, probabilities that are not conditional on the sample that did occur.
Point Estimation
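A common Bayesian point estimate is the posterior mean, which for the Beta$(a+k,\ b+n-k)$ posterior is

$$\hat{\theta}_{B} = E[\theta \mid D] = \frac{a + k}{a + b + n},$$

a compromise between the prior mean $a/(a+b)$ and the frequentist estimate $\hat{\theta}_{F} = k/n$.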
MSE Comparison
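The comparison is made with mean squared error, which decomposes into variance plus squared bias:

$$\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big] = \mathrm{Var}(\hat{\theta}) + \mathrm{bias}(\hat{\theta})^2.$$

The frequentist estimator $k/n$ is unbiased with variance $\theta(1-\theta)/n$; the Bayesian posterior mean accepts a small bias in exchange for lower variance, and so can achieve smaller MSE over much of the range of $\theta$.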
[Plot: mean squared error of the frequentist and Bayesian estimators across the range of the true parameter, with curves labeled "Frequentist" and "Bayesian"]
Interval Estimation
The Bayesian credible interval is calculated directly from the posterior distribution. It has a straightforward "degree of belief" probability interpretation: it summarizes the parameter values that could be credibly believed given the observed data. Contrast this with frequentist confidence intervals, whose confidence level describes the long-run coverage of the procedure under repeated sampling, not the probability that the parameter lies in the one interval actually computed.
Credible Intervals
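In R, an equal-tailed credible interval is simply a pair of posterior quantiles. Using the Beta(4, 2) posterior from the earlier example (an illustrative assumption):

qbeta(c(0.025, 0.975), 4, 2)   # 95% equal-tailed credible interval for theta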
Hypothesis Testing: One-Sided
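A one-sided Bayesian test of H0: theta <= theta0 against H1: theta > theta0 computes the posterior probability of H0 directly. With the illustrative Beta(4, 2) posterior and theta0 = 0.5:

pbeta(0.5, 4, 2)   # P(theta <= 0.5 | data); reject H0 if below, say, 0.05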
Hypothesis Testing: Two-Sided
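Because a continuous posterior assigns zero probability to any single point, a two-sided test of H0: theta = theta0 is usually carried out by checking whether theta0 falls inside the credible interval. With the same illustrative posterior:

ci <- qbeta(c(0.025, 0.975), 4, 2)   # 95% credible interval
theta0 <- 0.5
theta0 < ci[1] || theta0 > ci[2]     # TRUE means reject H0 at the 5% level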
References
Bolstad, William M. Introduction to Bayesian Statistics. Wiley-Interscience, 2004.
Kruschke, John K. Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press, 2011.