I have 6 events (Nch >= 100) on a background of ? Lipari predicts an atmospheric neutrino background of 11.02 events (no normalization applied). What do I need at this point? 1) A range for the number of background events predicted in my final sample. 2) A signal efficiency (based on the detector).
My atmospheric neutrinos come from a very uncertain part of the CR spectrum. Peak energy of the atmospheric neutrinos before the energy cut: 10^3 GeV = 1 TeV. Peak energy after the energy cut: 10^4 GeV = 10 TeV.
My atmospheric neutrinos come from this part of the CR spectrum. The neutrino energy is roughly a factor of 10 less than the CR primary energy.
Method 1: Estimate a % error on the atmospheric neutrino background (conventional + prompt) that depends on energy.

log10(E_nu)   CR uncertainty   Flux class output uncert.   Total error
1.0           10%              15%                          18%
2.0           10%              15%                          18%
3.0           15%              15%                          21%
4.0           35%              25%                          43%
4.5           --               --                           67%
5.0+          --               --                           95%

The first four total errors are the column errors combined in quadrature. The CR uncertainty is estimated from the spread in measurements of the CR flux (see next slide); the flux class output uncertainty is estimated from the AtmoFlux class outputs (see the slide after next, slide 6); the 4.5 and 5.0+ entries are estimated from the spread in prompt neutrino fluxes (see slide 7).
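A minimal sketch of the quadrature combination behind the first four rows (the bin values are copied from the table; the helper name is illustrative):

```python
import math

# Per-bin fractional uncertainties from the table above:
# (log10(E_nu / GeV), CR uncertainty, flux-class output uncertainty)
bins = [
    (1.0, 0.10, 0.15),
    (2.0, 0.10, 0.15),
    (3.0, 0.15, 0.15),
    (4.0, 0.35, 0.25),
]

def total_error(cr_uncert, flux_uncert):
    """Combine independent fractional errors in quadrature."""
    return math.sqrt(cr_uncert**2 + flux_uncert**2)

for log10_e, cr, flux in bins:
    print(f"log10(E) = {log10_e}: total = {total_error(cr, flux):.0%}")
# -> 18%, 18%, 21%, 43%, matching the table
```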
[Plot: differences between the Bartol and Honda proton fluxes, taken from the HKKM 2004 paper. Legend: short-dashed green line = old HKKM; pink solid line = HKKM 2004; dashed green line = Bartol 2004.]
Bartol 2004 and Honda 2004 differ by 15% at 10^3 GeV = 1 TeV and by 25% at 10^4 GeV = 10 TeV. Consider the red line only (the ratio of the Honda neutrino flux to the Bartol neutrino flux); the blue line still needs to be checked with higher statistics. Differences in the cosmic ray fits and in the hadronic interaction model both contribute to the difference seen in the outputs of the AtmoFlux class. (Plot made by Teresa Montaruli.)
Values for the error in the neutrino flux (conventional + prompt) were taken from a plot like this.
I applied the % error to every atmospheric neutrino event, based on the true energy recorded for each neutrino. This gives six atmospheric neutrino background predictions: Bartol max, Bartol central, Bartol min, Honda max, Honda central, Honda min. (A sketch of this per-event scaling follows below.)
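A sketch of how the per-event scaling could look, assuming the event weights and true neutrino energies are available as NumPy arrays and the Method 1 table is interpolated in log10(E); all names here are illustrative:

```python
import numpy as np

# Energy-dependent fractional error from Method 1's table,
# interpolated in log10(E_nu / GeV)
log10_e_grid = np.array([1.0, 2.0, 3.0, 4.0, 4.5, 5.0])
frac_err_grid = np.array([0.18, 0.18, 0.21, 0.43, 0.67, 0.95])

def shifted_prediction(weights, true_e, direction):
    """Sum of event weights scaled up (direction=+1), down (-1),
    or left alone (0) by the per-event fractional error evaluated
    at each event's true energy."""
    frac = np.interp(np.log10(true_e), log10_e_grid, frac_err_grid)
    return np.sum(weights * (1.0 + direction * frac))

# e.g. for the Bartol sample:
# bartol_max     = shifted_prediction(w_bartol, e_true, +1)
# bartol_central = shifted_prediction(w_bartol, e_true,  0)
# bartol_min     = shifted_prediction(w_bartol, e_true, -1)
```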
Number of atmospheric nu events:

Model            Best Nch cut   # past best cut   Nch >= 100   Nch < 100
Bartol max       98             16.5              15.3         714.7
Bartol central   101            8.6               9.1          533.8
Bartol min       91             4.6               2.9          352.8
Honda max        101            10.2              10.7         559.2
Honda central    101            6.1               6.4          419.8
Honda min        91             3.4               2.1          459
DATA             --             --                6            459

WHERE I'M STUCK: I could take 2.1 to 15.3 as the range of the atmospheric neutrino background. HOWEVER, the low energy data must somehow constrain the range, and I am unclear how to approach that right now.
Determining signal efficiency: use the signal from the 2003 sub-sample used to test OM sensitivity. Every OM has its sensitivity set at 70%, 100%, and 130%. (Instead of thinking of it as OM sensitivity, it is better to think of it as an overall detector sensitivity or detector acceptance.)

For the 2003 signal:
70% acceptance: 15.7 events
100% acceptance: 20.1 events
130% acceptance: 25.8 events

The downward acceptance shift gives a ~22% error; the upward shift gives a ~28% error. Assume a 28% error in both directions and apply it to the 4-year analysis, where 68.4 signal events were predicted. This gives a range of 49.2 to 87.6 events.
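The arithmetic behind the 49.2 to 87.6 range, written out as a sketch (all numbers are the ones quoted above):

```python
# 2003 signal yields at the three OM-sensitivity settings
yield_70, yield_100, yield_130 = 15.7, 20.1, 25.8

err_down = (yield_100 - yield_70) / yield_100   # ~0.22
err_up   = (yield_130 - yield_100) / yield_100  # ~0.28

# Conservatively take the larger error, rounded to 28%, in both
# directions and apply it to the 4-year prediction of 68.4 events
signal_4yr = 68.4
err = round(max(err_down, err_up), 2)           # 0.28
lo, hi = signal_4yr * (1 - err), signal_4yr * (1 + err)
print(f"{lo:.1f} to {hi:.1f} events")           # -> 49.2 to 87.6 events
```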
Method 2: Use the data to constrain the acceptable Monte Carlo solutions. Use the Bartol and Honda models. Change the overall normalization of the background by slightly modifying the atmospheric neutrino spectrum. Modify the detector acceptance (sensitivity) by ±30%. I used linear fits to interpolate values for acceptance levels between 70% and 130% (see the sketch below). (This was all done on files from 2003 only.)
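A sketch of the linear interpolation between the three discrete acceptance samples; the numbers reuse the 2003 signal yields from the previous slide purely as an illustration:

```python
import numpy as np

# Predicted event counts at the three acceptance settings
acceptance_pts = np.array([0.70, 1.00, 1.30])
predicted_pts  = np.array([15.7, 20.1, 25.8])  # illustrative values

# Linear fit, so the prediction can be evaluated at any
# acceptance level between 70% and 130%
slope, intercept = np.polyfit(acceptance_pts, predicted_pts, 1)

def predict(acceptance):
    return slope * acceptance + intercept

print(predict(0.85))  # interpolated yield at 85% acceptance
```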
Change the spectral index of the neutrinos: reweight events with a factor (Trueen[2])^(-0.0X). [Plot: E spectrum vs. log10(E), with the region of interest marked.] In my region of interest, the lines are nearly parallel; hence, changing the spectral index acts as a change in overall normalization. This is because my events are very high energy and I am pivoting about a very low energy point.
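A sketch of this spectral tilt, assuming Trueen[2] holds the true neutrino energy per event and delta_gamma stands for the slide's "0.0X":

```python
import numpy as np

def reweight_spectral_index(weights, true_e, delta_gamma):
    """Tilt the atmospheric-nu spectrum by delta_gamma (e.g. 0.01-0.05),
    multiplying each event weight by E^(-delta_gamma)."""
    return weights * true_e ** (-delta_gamma)

# Because the sample is very high energy and the pivot sits at a very
# low energy, a small tilt moves all Nch >= 100 events together, i.e.
# it acts like an overall normalization change in the region of interest.
```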
Consider the low energy data: Nch < 100. For 2003: Data = 186 events; taking sqrt(186) ≈ 13.6 as the 1 sigma uncertainty, the 1 sigma range is 172 to 200 events. [Plot: Bartol, normalization vs. detector acceptance.] Choose the combinations which cause the atmospheric neutrino prediction to fall within 1 sigma of the measured data value.
All of these combinations satisfy the condition that the number of low energy atmospheric neutrinos predicted falls within 1 sigma of the measured data value. [Plot: the allowed band for Bartol.]
Take the "colored map" for Bartol and Honda and find all of the scenarios which satisfy the condition that the number of low energy events predicted falls within 1 sigma of the data. For each scenario in the band, find the number of events predicted for Nch >= 100 (see the sketch below). [Plot: Honda flux, events with more than 100 OMs triggered.]
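A sketch of the band selection over the (normalization, acceptance) grid; predict_low and predict_high are stand-ins for the real MC predictions (e.g. the linear fits described earlier), and their coefficients are made up for illustration:

```python
import numpy as np

n_data_low = 186                     # 2003 data with Nch < 100
sigma = np.sqrt(n_data_low)          # ~13.6, i.e. the 172-200 band

# Illustrative stand-ins for the MC predictions vs. detector acceptance
def predict_low(acc):                # atmospheric-nu prediction, Nch < 100
    return 180.0 * acc               # made-up coefficient

def predict_high(acc):               # atmospheric-nu prediction, Nch >= 100
    return 3.3 * acc                 # made-up coefficient

allowed = []
for norm in np.linspace(0.5, 1.5, 21):        # overall normalization scan
    for acc in np.linspace(0.70, 1.30, 25):   # acceptance scan
        # keep only scenarios whose low-energy prediction matches the data
        if abs(norm * predict_low(acc) - n_data_low) <= sigma:
            allowed.append(norm * predict_high(acc))

print(f"high {max(allowed):.2f}  low {min(allowed):.2f}  "
      f"average {np.mean(allowed):.2f}")
```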
For 2003 Bartol, the predictions (Nch >= 100) based on the "best fit" scenarios from the band are: high 3.71 events, low 2.94 events, average 3.34 events. For 2003 Honda: high 3.70 events, low 3.05 events, average 3.35 events. The average from both Bartol and Honda is 3.32 events. This means the max and min are about 12% off of the average (for 2003 only, not the 4-year sample).
Assume a 12% error in the number of atmospheric neutrinos predicted in the 4-year sample.

Model    Nch >= 100 prediction (no error)   112%    88%
Lipari   11.02                              12.34   9.69
Bartol   9.1                                10.19   8.01
Honda    6.4                                7.17    5.63

Range of background for 4 years: 5.6 to 12.3 events. (This is more constrained than 2.1 to 15.3, the range from Method 1.) HOWEVER, how do you get a signal efficiency?
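The range arithmetic as a sketch (model predictions copied from the table above):

```python
# 4-year Nch >= 100 predictions per model
predictions = {"Lipari": 11.02, "Bartol": 9.1, "Honda": 6.4}

# +/-12% band per model; the overall background range spans the
# lowest -12% value to the highest +12% value across models
lo = 0.88 * min(predictions.values())
hi = 1.12 * max(predictions.values())
print(f"range: {lo:.1f} to {hi:.1f}")   # -> range: 5.6 to 12.3
```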
I could get a signal efficiency in the way I described before: take the % error seen in the 2003 files and apply it to the 4-year sample. Gary seems not to like this option, but hasn't had any good alternative suggestions as of yet. I suggested that maybe we need to look back at the "normalization factor" used to bring the nusim events into agreement with the data. At present we have abandoned that procedure, but maybe we need to revisit it and see what the normalization factor would be for the inverted analysis. I've talked to Paolo a little about this, and we've figured out (based on an old Dima paper) how to reweight my atmospheric neutrino events as if they were generated by Corsika. I'm not really sure where I'm going with this at the moment, but we can discuss…