Bayes for Beginners
Presenters: Shuman ji & Nick Todd

Statistic Formulations

P(A): probability of event A occurring
P(A|B): probability of A occurring, given that B occurred
P(B|A): probability of B occurring, given that A occurred
P(A,B): probability of A and B occurring simultaneously (the joint probability of A and B)

Joint probability of A and B: P(A,B) = P(A|B)·P(B) = P(B|A)·P(A)

Bayes Rule

True Bayesians actually consider conditional probabilities more basic than joint probabilities. It is easy to define P(A|B) without reference to the joint probability P(A,B). To see this, note that we can rearrange the conditional probability formula to get:

P(A|B) P(B) = P(A,B)

and, by symmetry:

P(B|A) P(A) = P(A,B)

It follows that:

P(A|B) = P(B|A) P(A) / P(B)

which is the so-called Bayes Rule. Thus, Bayes Rule is a simple mathematical formula used for calculating conditional probabilities.
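Since Bayes Rule is a one-line calculation, it can be written directly as a small function. This is a minimal sketch (the function and argument names are ours, not from the talk):

```python
def bayes_rule(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Check with the chest-clinic numbers used later in this talk:
# P(B|A) = 0.8, P(A) = 0.1, P(B) = 0.5 should give P(A|B) = 0.16.
print(bayes_rule(0.8, 0.1, 0.5))
```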

Bayesian Reasoning

ASSUMPTIONS
P(A) = 1%: 1% of women aged forty who participate in a routine screening have breast cancer
P(B|A) = 80%: 80% of women with breast cancer will get positive tests
P(B|~A) = 9.6%: 9.6% of women without breast cancer will also get positive tests

EVIDENCE
A woman in this age group had a positive test in a routine screening.

PROBLEM
What is the probability that she has breast cancer?

We want P(A|B): the proportion of cancer patients with positive results, within the group of all patients with positive results. P(B) is the proportion of all patients with positive results.

Bayesian Reasoning ASSUMPTIONS 100 out of 10,000 women aged forty who participate in a routine screening have breast cancer 80 of every 100 women with breast cancer will get positive tests 950 out of 9,900 women without breast cancer will also get positive tests PROBLEM If 10,000 women in this age group undergo a routine screening, about what fraction of women with positive tests will actually have breast cancer?

Bayesian Reasoning

Before the screening:
100 out of 10,000 women with breast cancer
9,900 out of 10,000 women without breast cancer

After the screening:
A = 80 out of 10,000 women with breast cancer and a positive test
B = 20 out of 10,000 women with breast cancer and a negative test
C = 950 out of 10,000 women without breast cancer and a positive test
D = 8,950 out of 10,000 women without breast cancer and a negative test

All patients with a positive test result = A + C

Proportion of cancer patients with positive results, within the group of ALL patients with positive results:
A/(A+C) = 80/(80+950) = 80/1030 ≈ 0.078 = 7.8%

Bayesian Reasoning

Prior probabilities:
100/10,000 = 1/100 = 1% = p(A)
9,900/10,000 = 99/100 = 99% = p(~A)

Conditional probabilities:
A = 80/10,000 = (80/100) × (1/100) = p(B|A) p(A) = 0.008
B = 20/10,000 = (20/100) × (1/100) = p(~B|A) p(A) = 0.002
C = 950/10,000 = (9.6/100) × (99/100) = p(B|~A) p(~A) ≈ 0.095
D = 8,950/10,000 = (90.4/100) × (99/100) = p(~B|~A) p(~A) ≈ 0.895

Rate of cancer patients with positive results, within the group of ALL patients with positive results:
P(A|B) = P(B|A) P(A) / P(B) = 0.008 / (0.008 + 0.095) = 0.008 / 0.103 ≈ 0.078 = 7.8%
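The same arithmetic can be checked in a few lines of Python (the variable names are ours); it reproduces the 7.8% figure:

```python
p_a = 0.01               # prior: P(cancer)
p_b_given_a = 0.80       # P(positive test | cancer)
p_b_given_not_a = 0.096  # P(positive test | no cancer)

# Total probability of a positive test: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

posterior = p_b_given_a * p_a / p_b  # Bayes' rule: P(A|B)
print(round(posterior, 3))           # 0.078
```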

Another example

Suppose that we are interested in diagnosing cancer in patients who visit a chest clinic:

Let A represent the event "Person has cancer".
Let B represent the event "Person is a smoker".

We know the prior probability P(A) = 0.1 on the basis of past data (10% of patients entering the clinic turn out to have cancer). We want to compute the posterior probability P(A|B), which is difficult to find directly. However, we are likely to know P(B) by considering the percentage of patients who smoke; suppose P(B) = 0.5. We are also likely to know P(B|A) by checking, from our records, the proportion of smokers among those diagnosed with cancer; suppose P(B|A) = 0.8.

We can now use Bayes' rule to compute:

P(A|B) = (0.8 × 0.1) / 0.5 = 0.16

Thus the probability that a smoker attending the clinic has cancer is 0.16.

Bayes in Brain Imaging

Extension to distributions, where θ is a set of parameters defining a model and y is the data:

P(θ|y) = P(y|θ) P(θ) / P(y)

posterior distribution = likelihood × prior distribution / marginal probability

Bayes in Brain Imaging

Example (Spiegelhalter and Rice): How many infections should a hospital expect over 40,000 bed-days?

Over the first 6 months of the year, the hospital saw 4 infections per 10,000 bed-days: this gives the likelihood distribution. Data from other hospitals show 5 to 17 infections per 10,000 bed-days: this gives the prior distribution. The product of the two distributions gives the posterior distribution, and the answer is calculated from the posterior distribution of the number of infections for 40,000 bed-days.
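The "product of the distributions" step can be sketched numerically on a grid. All numbers below are illustrative assumptions, not the exact Spiegelhalter and Rice values: we assume the 4 infections were observed over one unit of 10,000 bed-days (a Poisson likelihood), and we approximate the 5-to-17 prior range with a Normal(11, 3) shape:

```python
import math

# Grid of candidate infection rates, per 10,000 bed-days: 0.1 .. 30.0
rates = [i / 10 for i in range(1, 301)]

k, exposure = 4, 1.0  # assumption: 4 infections over 10,000 bed-days

def poisson_lik(lam):
    # Poisson likelihood of observing k events with mean lam * exposure
    mu = lam * exposure
    return math.exp(-mu) * mu ** k / math.factorial(k)

def prior(lam):
    # Rough prior spanning 5-17 per 10,000 bed-days: Normal(11, 3) shape
    return math.exp(-0.5 * ((lam - 11.0) / 3.0) ** 2)

# Posterior on the grid: product of likelihood and prior, then normalize
post = [poisson_lik(r) * prior(r) for r in rates]
z = sum(post)
post = [p / z for p in post]

mean_rate = sum(r * p for r, p in zip(rates, post))
print(4 * mean_rate)  # expected infections over 40,000 bed-days
```

The posterior mean rate lands between the data's 4 per 10,000 and the prior's center, as the slide's picture suggests.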

Bayes in Brain Imaging

Classical approach:
- Null hypothesis is that no activation occurred
- A statistic is computed, e.g. a T-statistic
- Reject the null hypothesis if the data are sufficiently unlikely
- This approach gives the likelihood of getting the data, given no activation
- Cannot accept the null hypothesis that no activation has occurred

Bayesian approach:
- Use the likelihood and prior distributions to construct the posterior distribution
- Compute the probability that activation exceeds some threshold directly from the posterior distribution
- This approach gives the probability distribution of the activation given the data
- Can determine activation / no activation
- Can compare models of the data
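The Bayesian decision step, computing the probability that activation exceeds some threshold directly from the posterior, can be sketched with posterior samples. The samples and the threshold below are invented for illustration:

```python
import random

random.seed(1)
# Hypothetical posterior samples of an activation parameter
# (in practice these might come from an MCMC sampler);
# here simply drawn from Normal(0.5, 0.2).
samples = [random.gauss(0.5, 0.2) for _ in range(10_000)]

threshold = 0.2  # an arbitrary activation threshold
p_active = sum(s > threshold for s in samples) / len(samples)
print(p_active)  # close to 0.93 for these samples
```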

Bayes Example 1: The GLM, Estimating Betas

y = Xβ + ε

Observed signal/data (y) = design matrix (X) × parameter estimates (β, which carry the prior) + error (ε: artifact, random noise)
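A minimal numerical sketch of the GLM on simulated data. Note this uses numpy's ordinary least squares rather than the Bayesian estimator discussed in the talk, and the design and true parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Design matrix: an intercept column plus one regressor
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([2.0, 0.5])                  # true parameters
y = X @ beta_true + 0.1 * rng.standard_normal(n)  # y = X beta + eps

# Least-squares estimate of beta from the observed signal y
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [2.0, 0.5]
```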

Bayes Example 2: Accepting/Rejecting the Null Hypothesis

(Figure by Dr. Joerg Magerkurth; panels labeled "Activated" and "Not Activated".)

Bayes Example 3: Segmentation and Normalization

Tissue probability maps of gray and white matter are used as priors in the segmentation algorithm. The probability of expected zooms and shears is used as a prior in the normalization algorithm.

Thanks to Will Penny and previous MfD presentations. Questions?