Presentation transcript:

I will not discuss the Bayesian or frequentist foundations (covered by previous speakers); I will try to discuss some facts and my point of view. I think we cannot settle a century-old debate here. We should keep in mind that with the two approaches we are answering two different questions (see the previous talks). This simply translates into the fact that Bayesians quote a PDF while frequentists quote a C.L. (the textbook forms of the two statements are recalled below).

- The caveat for the Bayesian approach is the prior dependence.
- The caveats for the frequentist approach are how to define the C.L. (5%? 32%?) and how to treat systematic uncertainties (often done with a Bayesian approach) and theoretical errors, which often have already been combined.

When we do phenomenological work we should not forget the physics. A phenomenological work implies:
- making predictions;
- indicating which are the important things to do, to measure, to calculate;
- using all the available information.
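As a reminder of the "PDF vs. C.L." distinction above, these are the textbook forms of the two statements (generic notation, not tied to any analysis discussed here: L is the likelihood, π the prior, θ the true parameter):

```latex
% Bayesian: a posterior PDF for the parameter theta, given the observed data x
\[
  p(\theta \mid x)
    = \frac{L(x \mid \theta)\,\pi(\theta)}
           {\int L(x \mid \theta')\,\pi(\theta')\,d\theta'}
\]
% Frequentist: a coverage statement about the interval-building procedure,
% with theta a fixed (unknown) number rather than a random variable
\[
  P\big(a(X) \le \theta \le b(X)\big) = 1 - \alpha
\]
```

The first is a probability statement about θ given the data; the second is a property of the procedure over repeated experiments, which is why one answer is a PDF and the other a C.L.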

[Figures: SM predictions of Δm_s and of sin 2β. In the Δm_s figure our collaboration (or proto-collaboration) is in red and CKMfitter in blue; in the sin 2β figure our collaboration is in red, the proto-collaboration in blue, and CKMfitter in yellow.]

sin 2β and Δm_s were predicted before these measurements. This is a crucial test of the SM, and it was an important motivation to perform these measurements.

Priors are not a problem: they are part of the Bayesian approach, and we check their effect every time. If the measurement (likelihood) is precise, there is no prior dependence, and in this case the two methods give similar answers. Otherwise, the question being asked is different, so the answer is different. There is an example of dependence on the prior even in the presence of a not very precise (30%) measurement (α from the Dalitz technique), depending on whether Cartesian or polar coordinates are used (a numerical sketch follows).

Past frequentist/Bayesian comparison: a test was done with the same inputs in 2002 for the first CKM workshop. In today's situation the two approaches would give even more similar results.
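To illustrate the coordinate effect (a minimal sketch with invented numbers, not the actual α/Dalitz analysis): a prior flat in the Cartesian coordinates (x, y) of an amplitude induces a prior on the magnitude r that grows like r, unlike a prior flat in polar coordinates, and a ~30%-precision likelihood is not yet strong enough to erase the difference.

```python
import numpy as np

# Minimal sketch (illustrative numbers): a prior flat in the Cartesian
# coordinates (x, y) of a complex amplitude is NOT flat in polar
# coordinates; it induces p(r) = 2r on the unit disc.
rng = np.random.default_rng(0)
n = 1_000_000

# Prior flat in Cartesian coordinates, restricted to the unit disc
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
inside = x**2 + y**2 <= 1.0
r_cartesian = np.hypot(x[inside], y[inside])

# Prior flat in the polar coordinate r on the same disc
r_polar = rng.uniform(0.0, 1.0, n)

# Compare the implied prior densities for the magnitude r
dens_c, edges = np.histogram(r_cartesian, bins=10, range=(0, 1), density=True)
dens_p, _ = np.histogram(r_polar, bins=10, range=(0, 1), density=True)
for lo, hi, dc, dp in zip(edges[:-1], edges[1:], dens_c, dens_p):
    print(f"r in [{lo:.1f},{hi:.1f}): cartesian-flat {dc:.2f}  polar-flat {dp:.2f}")
# The cartesian-flat prior grows linearly in r while the polar-flat one is
# constant; only a precise likelihood washes this difference out.
```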

Statistics should not allow people to… forget physics. An example on α: how is this possible? Our analysis of α was criticised, and not very kindly, with arguments such as the prior dependence, the dependence of the result on the parametrization, the fact that we were not able to reproduce the 8 ambiguities, and the fact that we do not have a solution at α ~ 0. Recall: hep-ph/ . For the UTfit answer, please have a look at hep-ph/ , submitted to PRD.

The Gronau–London method requires some a priori minimal assumptions on the strong interactions (the isospin relations are recalled below), namely:
- flavour blindness and CP conservation;
- negligible isospin-symmetry breaking.

We believe that strong-interaction effects have a natural size of Λ_QCD ~ 1 GeV. We do not expect what is seen here:

[Figure: 1 − C.L. frequentist plot from CKMfitter, with a "??" annotation pointing at a feature at the Λ_QCD scale.]
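For reference, these are the isospin triangle relations on which the Gronau–London method for B → ππ rests (standard form; the discrete ambiguities mentioned earlier come from comparing the two triangles):

```latex
% Gronau-London isospin triangles for B -> pi pi, valid under the
% assumptions above (flavour-blind, CP-conserving QCD; no isospin breaking):
\[
  \tfrac{1}{\sqrt{2}}\,A(B^0\to\pi^+\pi^-) + A(B^0\to\pi^0\pi^0)
    = A(B^+\to\pi^+\pi^0),
\]
\[
  \tfrac{1}{\sqrt{2}}\,\bar A(\bar B^0\to\pi^+\pi^-) + \bar A(\bar B^0\to\pi^0\pi^0)
    = \bar A(B^-\to\pi^-\pi^0).
\]
% Comparing the two triangles isolates the penguin pollution and yields
% alpha up to discrete ambiguities.
```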

In addition, the Bayesian result does not depend on the use of different parametrizations, and does not depend on the priors (the cut on the upper value of |P|), contrary to what was claimed.

More details on the analysis of α can be found in M. Bona's presentation at SLAC.

Some very instructive slides from a seminar given by R. Faccini

The frequentist approach returns the region of parameter space for which the data have at least a given probability (1 − C.L.) of being described by the model (CKMfitter, Rfit option):
- the true value is a fixed number, with no distribution;
- toy MC is used.

The Bayesian approach tries to calculate the probability distribution of the true parameters by assigning probability density functions to all unknown parameters (UTfit):
- Bayes' theorem is used;
- the true value has a distribution;
- P(H) is the a priori probability.

(A toy contrast of the two procedures is sketched below.)
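Here is that toy contrast, assuming a single Gaussian measurement (all numbers and the simple test statistic are illustrative; this is not CKMfitter's Rfit or UTfit's actual machinery):

```python
import numpy as np
from scipy.stats import norm

# Toy measurement: x_obs = 1.2 with Gaussian resolution sigma = 1.
x_obs, sigma = 1.2, 1.0
mu_grid = np.linspace(-2.0, 5.0, 141)
d_mu = mu_grid[1] - mu_grid[0]

# Frequentist flavour: for each hypothesized true value mu, 1 - C.L. is the
# probability that a toy experiment agrees with mu worse than the data do.
rng = np.random.default_rng(1)
smear = rng.normal(0.0, sigma, 20_000)           # toy-MC resolution smearing
one_minus_cl = np.array([
    np.mean(np.abs(smear) >= abs(x_obs - mu))    # |x_toy - mu| >= |x_obs - mu|
    for mu in mu_grid
])
kept = mu_grid[one_minus_cl >= 0.32]             # the "68% C.L. region"
print(f"frequentist 68% C.L. region: [{kept.min():.2f}, {kept.max():.2f}]")

# Bayesian flavour: posterior pdf for mu via Bayes' theorem, flat prior.
posterior = norm.pdf(x_obs, loc=mu_grid, scale=sigma)  # likelihood x flat prior
posterior /= posterior.sum() * d_mu                    # normalize to a pdf
print(f"posterior mean: {np.sum(mu_grid * posterior) * d_mu:.2f}")
```

In this simple symmetric case the two answers essentially coincide; the differences appear with boundaries, nuisance parameters, and informative priors.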

How to read the plots:

1 − C.L. is the probability of the data being in worse agreement with a given hypothesis on the true value than what was observed. The intercepts with 1 − C.L. = 0.32 are the boundaries of the interval of hypotheses on the true value that are discarded by the data with a probability of at least 32% (the so-called "68% C.L. region", the "allowed" region).

The probability density ("a posteriori") is an estimate of the pdf of the true value; the projection of the red area is the region that covers the true value with 68% probability. Note that the UTfit convention, in the case of multiple peaks, is to start from the maximum of the likelihood and integrate over equi-probable contours; otherwise the integration starts from the median. (A sketch of this integration follows.)
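A minimal sketch of the equi-probable-contour (highest-posterior-density) integration, on an invented two-peak posterior:

```python
import numpy as np

# Rank bins by posterior density and accumulate probability until the
# desired coverage (68%) is reached: the "equi-probable contour" rule.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
pdf = 0.6 * np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2) \
    + 0.4 * np.exp(-0.5 * ((x - 2.0) / 0.8) ** 2)
pdf /= pdf.sum() * dx                        # normalize the toy posterior

order = np.argsort(pdf)[::-1]                # highest-density bins first
coverage = np.cumsum(pdf[order] * dx)        # running integral over contours
in_region = np.zeros(x.size, dtype=bool)
in_region[order[coverage <= 0.68]] = True    # the 68% HPD set

# Unlike an interval grown outward from the median, the HPD set can be a
# union of disjoint intervals, one per peak:
boundaries = x[np.flatnonzero(np.diff(in_region.astype(int)))]
print("68% HPD region boundaries:", np.round(boundaries, 2))
```

This is why the convention matters: with multiple peaks, contour integration reports disjoint allowed regions, while a median-centred integration would merge them into one interval.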

What they say of each other.

Frequentists, of the Bayesian approach:
- The a priori probabilities are completely arbitrary.
- How can I trust a calculation based on something that is completely unknown?
- Why should I calculate something imprecisely if I can already calculate something exactly? If I can trust the Bayesian result only when it agrees with the frequentist one, why should I try it at all?
- The integration of the likelihood to get the C.L. regions has degrees of arbitrariness.

Bayesians, of the frequentist approach:
- The probability of the true value being in a C.L. region is unknown (not necessarily above the C.L.), so the result has no real usefulness: in principle the true value could be anywhere, with unknown probability.
- There is some freedom in the choice of the test statistic and of the definition of "worse agreement".