1 Systematics in Hfitter

2 Reminder: profiling nuisance parameters
- The likelihood ratio is the most powerful discriminant between 2 hypotheses.
- What if the hypotheses depend on additional ("nuisance") parameters theta? e.g. the background slope.
  -> We "profile them away": replace L(mu, theta) by the profile likelihood ratio lambda(mu) = L(mu, theta-hat(mu)) / L(mu-hat, theta-hat), where theta-hat(mu) maximizes L at fixed mu, and use q_mu = -2 ln lambda(mu).
  -> Wilks' theorem: q_mu ~ chi2 with 1 degree of freedom.
- Note: 1) It's a chi2. 2) Important here: it's independent of theta; profiling removes the nuisance parameters from the problem!
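The profiling recipe above can be sketched numerically. This is a minimal toy illustration (not Hfitter code): a counting experiment n ~ Poisson(mu*s + b) whose background b is a nuisance parameter constrained by a sideband m ~ Poisson(tau*b); all numbers are invented for the example.

```python
# Toy sketch of profiling: q(mu) = 2*[NLL_profiled(mu) - NLL_min],
# where the nuisance b is minimized away at each fixed mu.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import poisson

s, tau = 10.0, 2.0        # expected signal yield, sideband scale factor
n_obs, m_obs = 25, 30     # observed counts: signal region, sideband

def nll(mu, b):
    rate = mu * s + b
    if rate <= 0 or b <= 0:
        return 1e9  # guard against invalid Poisson means
    return -(poisson.logpmf(n_obs, rate) + poisson.logpmf(m_obs, tau * b))

def profiled_nll(mu):
    # "profile away" the nuisance parameter b at fixed mu
    res = minimize_scalar(lambda b: nll(mu, b),
                          bounds=(1e-6, 100.0), method="bounded")
    return res.fun

# global minimum over (mu, b)
fit = minimize(lambda x: nll(x[0], x[1]), x0=[0.5, 10.0],
               bounds=[(0.0, 10.0), (1e-6, 100.0)])

def q(mu):
    return 2.0 * (profiled_nll(mu) - fit.fun)

# q vanishes at the best-fit mu and grows away from it, ~chi2(1) by Wilks
print(q(1.0), q(0.0))
```

With these numbers the joint maximum sits at b = m_obs/tau = 15 and mu = (n_obs - b)/s = 1, so q(1.0) is ~0 while q(0.0) is positive.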

3 Systematics
Nuisance parameters come in 2 kinds:
- "Good": parameters constrained by the fit. The data tell us what their values are, e.g. the background slope.
- "Bad": not constrained. Need to introduce an outside constraint, which is added "by hand" to the likelihood. This is what we normally call "systematics".
  e.g. the width of the signal peak, sigma_CB:
  - Could let it float in the fit, but no sensitivity (until 2013?)
  - Can measure Z->ee and apply the result to the photons: provides a constraint.
Technically (on the sigma_CB example):
- Can write sigma_CB = sigma_CB0 * (1 + delta), with delta the relative energy-resolution uncertainty.
- delta is introduced since it is easily constrained: it should be close to 0 (if sigma_CB0 is computed with all corrections applied); how far from 0 it can be is a measure of the uncertainty (say 10%?).
How to implement this precisely?

4 Bayesian/Frequentist Hybrid treatment
2 ways of dealing with the constraint in practice. First the Bayesian way, because it is more intuitive and more widespread.
Idea: assume delta is distributed according to some PDF.
- Obvious choice: delta ~ Gaussian(0, 10%).
- delta is free, but there is a penalty in the likelihood for delta being different from 0.
Toys:
- Each toy dataset must be thrown using a random value of delta, drawn from the PDF.
- Running over many toys effectively integrates out delta.
Problem:
- delta is a model parameter. Giving it a PDF is Bayesian: the PDF gives our "degree of belief" of where delta should be.
  => Not directly linked to something measured. (Also, why a Gaussian?)
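The hybrid toy procedure can be sketched as follows. This is an illustration of the idea only, not Hfitter internals: each toy draws delta from its Gaussian prior and generates the dataset with the smeared width sigma_CB0*(1 + delta); the numbers (nominal width 1.7, peak at 120) are invented.

```python
# Hybrid (Bayesian) toys: randomize the nuisance delta per toy,
# so that running over many toys integrates delta out.
import numpy as np

rng = np.random.default_rng(1)
sigma0, prior_width = 1.7, 0.10      # nominal width and assumed 10% prior
peak, n_events, n_toys = 120.0, 500, 2000

toy_widths = []
for _ in range(n_toys):
    delta = rng.normal(0.0, prior_width)       # draw nuisance from its prior
    sigma = sigma0 * (1.0 + delta)             # smeared resolution for this toy
    data = rng.normal(peak, sigma, n_events)   # toy dataset
    toy_widths.append(data.std(ddof=1))

toy_widths = np.array(toy_widths)
# the mean width stays near sigma0, but its toy-to-toy spread is inflated
# well beyond the purely statistical sigma0/sqrt(2*n_events) by the prior
print(toy_widths.mean(), toy_widths.std())
```

The inflated spread of the fitted width across toys is exactly how the systematic uncertainty propagates into the final result in the hybrid scheme.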

5 Hfitter example (hfitter_Mgg_noCats_hybrid.dat)

[Dependents]
mgg = L(100 - 150) // gamma-gamma invariant mass

[Models]
component Signal = HggSigMggPdfBuilder() norm=(muSignal*nSignalSM; muSignal)
component Background = HggBkgMggPdfBuilder()

[Constraints]
constraint dSig = RooGaussian("dSig", "mean_dSig", "sigma_dSig")

[Parameters]
nSignalSM = 1.225 C L(0 - 50000)
muSignal = 1 L(-1000 - 10000)
nBackground = 99 L(0 - 100000)

[Signal]
formula cbSigma = (cbSigma0*(1 + dSig))
dSig = 0 L(-1 - 10)
mean_dSig = 0 C
sigma_dSig = 0.10 C
...

Notes:
- The constraint on dSig: a Gaussian with the specified parameters.
- The constraint PDF parameters are specified here (they could also be in [Parameters]).
- The cbSigma parameter is now given by a formula involving dSig.
- dSig is defined here. The allowed range is the important part; the value really means the starting value in the fit.

6 "Frequentist" way
Idea:
- Like other nuisance parameters, delta can be constrained in some way.
- The problem here is that the constraint comes from another measurement. e.g. we could include a Z->ee sample in our model and fit everything simultaneously, getting delta as a regular NP. But too complex...
- Solution: include the result from that other experiment.
  - Use directly L(data' | delta)? Too complex...
  - "Executive summary": PDF(delta_mes | delta), e.g. delta_mes ~ G(delta, 10%).
  - Add this as a penalty term in the likelihood.
Differences with the Hybrid case:
- delta_mes is a fixed measured value (if everything is calibrated correctly, = 0).
- Note that delta is now a PDF parameter. No PDF on delta! (G gives a likelihood for delta.)
Similarities:
- delta is still floating in the fit. The constraint still comes from the penalty term.
- Note also that in this Gaussian case, L is the same as previously... but that is not always the case!
Toys:
- There is a PDF on delta_mes, so it should be randomized when generating toys.
- However, delta_mes only appears in the penalty term: all toys are in fact the same.
- delta is just a parameter; it is not generated in the toys.
- Where does the smearing come in? When fitting the toy, delta is constrained by the value of delta_mes.
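The penalty-term mechanics can be sketched numerically. This is an illustration only (not Hfitter code, and the dataset and numbers are invented): delta floats in the fit of a Gaussian peak width sigma0*(1 + delta), and the auxiliary measurement delta_mes enters only through a Gaussian penalty term that pulls delta toward it.

```python
# Frequentist-constraint sketch: NLL(data | delta) plus a Gaussian
# penalty 0.5*((delta - delta_mes)/width)^2 from the auxiliary measurement.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
sigma0, constraint_width = 1.7, 0.10
true_delta = 0.05                     # unknown truth, used only to generate data
data = rng.normal(0.0, sigma0 * (1.0 + true_delta), 5000)

def nll(delta, delta_mes):
    sigma = sigma0 * (1.0 + delta)
    # Gaussian NLL of the data (constant terms dropped) ...
    main = 0.5 * np.sum((data / sigma) ** 2) + data.size * np.log(sigma)
    # ... plus the penalty from the auxiliary measurement
    penalty = 0.5 * ((delta - delta_mes) / constraint_width) ** 2
    return main + penalty

def fit_delta(delta_mes):
    res = minimize_scalar(lambda d: nll(d, delta_mes),
                          bounds=(-0.5, 0.5), method="bounded")
    return res.x

# the data dominate here (5000 events), so the fit recovers delta near
# the truth; shifting delta_mes drags the estimate in that direction
print(fit_delta(0.0), fit_delta(0.2))
```

Note how delta_mes is just a fixed number in the penalty: randomizing it between toys is what propagates the systematic, exactly as described above.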

7 Hfitter example (hfitter_Mgg_noCats_syst.dat)

[Dependents]
mgg = L(100 - 150) // gamma-gamma invariant mass

[Models]
component Signal = HggSigMggPdfBuilder() norm=(muSignal*nSignalSM; muSignal)
component Background = HggBkgMggPdfBuilder()

[Constraints]
constraint dSig_aux = RooGaussian("dSig_aux", "dSig", "sigma_dSig")

[Parameters]
nSignalSM = 1.225 C L(0 - 50000)
muSignal = 1 L(-1000 - 10000)
nBackground = 99 L(0 - 100000)

[Signal]
formula cbSigma = (cbSigma0*(1 + dSig))
dSig = 0 L(-1 - 10)
dSig_aux = 0 C
sigma_dSig = 0.10 C
...

Notes:
- The constraint is now on dSig_aux, the auxiliary measurement; dSig is now a PDF parameter.
- dSig_aux is constant (but can be randomized when generating toys).
- The cbSigma parameter is defined as previously.
- dSig is defined here, same as previously.

8 Some results (2010 numbers w/ smearing)
- Bayesian constraints on dSig, dEff: Gaussian with 10% width
- Frequentist constraints on dSig, dEff: Gaussian with 10% width

9 Some distributions

10 The way the constraint works
Bayesian case:
- delta0 = 0, but toys are thrown with delta_gen != 0 => in the fit, delta is drawn towards delta_gen.
Frequentist case:
- delta_mes is randomized in the toys => in the fit, delta is drawn towards delta_mes.
Everything is the same in this (Gaussian) case, but that is not true for distributions where delta and delta0 don't play symmetric roles (e.g. the log-normal).
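The asymmetry alluded to here can be made concrete. A minimal sketch, assuming the common parametrizations theta = theta0*(1 + width*delta) for the Gaussian case and theta = theta0*exp(width*delta) for the log-normal case (the exact form used in Hfitter is not specified in these slides):

```python
# Gaussian vs log-normal response of a positive parameter to +/-1 sigma
# pulls of the nuisance delta: symmetric vs asymmetric around theta0.
import numpy as np

theta0, width = 1.0, 0.10
delta = np.array([-1.0, 0.0, 1.0])           # -1, 0, +1 sigma pulls

theta_gauss = theta0 * (1.0 + width * delta)  # symmetric shifts around theta0
theta_logn = theta0 * np.exp(width * delta)   # multiplicative, never negative

# the log-normal up-shift exceeds the down-shift, so delta and theta0
# no longer play symmetric roles, unlike the Gaussian case
print(theta_gauss, theta_logn)
```

The log-normal form is often preferred precisely because it keeps theta positive, at the price of this asymmetry between the Bayesian and frequentist toy procedures.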

11 Results with 2011 numbers, Lognormal
- Bayesian constraints on dSig, dEff: Lognormal with 10% width
- Frequentist constraints on dSig, dEff: Lognormal with 10% width


