Two Interpretations of What it Means to Normalize the Low Energy Monte Carlo Events to the Low Energy Data


Two interpretations:

1) Apply the normalization to correct the theory that predicts the atmospheric neutrino flux. Do NOT apply the correction to the signal, since it was meant only for atmospheric neutrinos.

2) Apply the normalization to correct for detector efficiency. DO apply the correction factor to the signal, since we are correcting the entire detector efficiency or acceptance.

[Slide plots: atmospheric MC, data, and signal distributions illustrating each interpretation.]

What does normalization mean? We have always been normalizing the background MC to the data to predict the high energy background. Does this mean we corrected the efficiency or the theory? If we believe we corrected the atmospheric neutrino theory by normalizing, then we would not normalize the signal. We would then have to calculate the error in the signal efficiency from first principles (like looking at the OM sensitivity, ice, etc...). If we believe we corrected the detector efficiency, then we must normalize the signal as well. This means that we have taken the theoretical error and projected it onto the signal efficiency.

How do we finish this off? I have generally gathered that the first interpretation is the preferred option: if we choose to believe that our normalization factor is correcting the theory that predicts the atmospheric neutrino flux, then we will not normalize or readjust the signal. In that case, we must estimate an error on the detector efficiency. At least some of this detector efficiency error can be taken care of by looking at what happens when the data and MC match perfectly for every parameter.

Modified Monte Carlo: shift all MC events by the following amounts to match the MC to the data:

1.1 * Ndirc (number of direct hits)
1.08 * Smootallphit (smoothness)
1.05 * Median resolution
1.01 * [Jkchi(up) - Jkchi(down)] (likelihood ratio)

Ldirb (track length), zenith angle and Nch are not changed in the MC. Plots showing how the data and MC now match up for each parameter follow at the end of the presentation.
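As a rough illustration, here is a minimal Python sketch of applying these multiplicative shifts. The event container and field names (ndirc, smootallphit, med_resol, lr) are assumptions about how the observables might be stored, not the analysis code actually used.

import numpy as np

# Multiplicative shift factors quoted on the slide (storage layout is hypothetical).
SHIFTS = {"ndirc": 1.10, "smootallphit": 1.08, "med_resol": 1.05, "lr": 1.01}

def shift_mc(events):
    """Return a copy of the MC event arrays with the four observables rescaled.

    `events` is assumed to be a dict of numpy arrays keyed by observable name;
    ldirb, zenith and nch are intentionally left untouched.
    """
    shifted = dict(events)
    for name, factor in SHIFTS.items():
        shifted[name] = np.asarray(events[name]) * factor
    return shifted

# Toy usage with made-up numbers (not real events):
toy = {"ndirc": np.array([12, 15, 20]),
       "smootallphit": np.array([0.10, -0.20, 0.05]),
       "med_resol": np.array([2.5, 3.9, 1.2]),
       "lr": np.array([30.0, 25.0, 40.0]),
       "ldirb": np.array([180.0, 150.0, 200.0])}
shifted = shift_mc(toy)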

I took the MC that was shifted and counted the number of events above and below 100 channels.

                               Nch<100   Nch>=100
Bartol Max                     670.1     13.3
Bartol Central                 533.6     9.1
Bartol Min                     397.1     4.9
Bartol Max - Modified MC       594.8     9.7
Bartol Central - Modified MC   474.2     6.7
Bartol Min - Modified MC       353.5     3.6
Honda Max                      525.3     9.3
Honda Central                  419.6     6.4
Honda Min                      314.0     3.4
Honda Max - Modified MC        383.1     5.5
Honda Central - Modified MC    307.0     3.8
Honda Min - Modified MC        230.8     2.1

WE NOW HAVE ALL THE INGREDIENTS TO PUT INTO GARY'S PROGRAM TO CALCULATE A LIMIT. (More on this in a minute.)
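A minimal sketch of that counting step, assuming the MC is available as per-event weight and Nch arrays (the array names in the usage comment are placeholders):

import numpy as np

def split_by_nch(weights, nch, threshold=100):
    """Weighted number of MC events below and at/above the Nch threshold."""
    weights = np.asarray(weights, dtype=float)
    nch = np.asarray(nch)
    return weights[nch < threshold].sum(), weights[nch >= threshold].sum()

# One row of the table would come from something like:
# below, above = split_by_nch(bartol_central_weights, shifted_mc_nch)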

How to calculate a limit when there are no errors: the PDF is simply the Poisson formula,

P(x_obs | s + b) = (s + b)^x_obs * exp[-(s + b)] / x_obs!

where s is the signal strength and x_obs is the number of events observed (in the plot, the PDF comes out of the page above the s vs. x plane). The confidence belt is constructed by taking the Feldman-Cousins 90% interval in x at each value of the signal strength (I do this in steps of 0.01 in signal strength).
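For concreteness, a minimal sketch of building such a belt with Feldman-Cousins likelihood-ratio ordering. This is a generic illustration, not Gary's actual code; the background of 7.8 events and the x range up to 40 are taken from later slides.

import numpy as np
from scipy.stats import poisson

def fc_interval(s, b, cl=0.90, x_max=40):
    """Feldman-Cousins acceptance interval in x for fixed signal s and background b."""
    x = np.arange(x_max + 1)
    p = poisson.pmf(x, s + b)
    s_best = np.maximum(x - b, 0.0)          # physically allowed best-fit signal for each x
    rank = p / poisson.pmf(x, s_best + b)    # likelihood-ratio ordering
    order = np.argsort(rank)[::-1]
    accepted, coverage = [], 0.0
    for i in order:                          # accept x values until 90% coverage is reached
        accepted.append(x[i])
        coverage += p[i]
        if coverage >= cl:
            break
    return min(accepted), max(accepted)

# Belt in steps of 0.01 in signal strength, for an assumed background of 7.8 events.
belt = {round(s, 2): fc_interval(s, b=7.8) for s in np.arange(0.0, 15.0 + 1e-9, 0.01)}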

[Plot: confidence belt in the s (signal strength) vs. x (number observed) plane, with the event upper limit marked at x = 6.] The event upper limit is the largest value of the signal strength whose acceptance interval still contains the actual number of events observed in the experiment (6 in my case).
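Continuing the sketch above, reading the event upper limit off the belt might look like:

def event_upper_limit(belt, n_obs):
    """Largest signal strength whose acceptance interval still contains n_obs."""
    contained = [s for s, (lo, hi) in belt.items() if lo <= n_obs <= hi]
    return max(contained) if contained else 0.0

mu90 = event_upper_limit(belt, n_obs=6)   # 6 events observed in this analysis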

[Plot: the confidence belt in the s vs. x plane; the new belt is wider when uncertainties in the signal and background are considered, so the new event upper limit at x = 6 lies above the old one.] Gary's program works by recomputing the confidence belt for the case where uncertainties are included. For a given number of events actually observed, the event upper limit is higher when errors are included in constructing the belt.

The limit is

E^2 * flux < (E^2 * test flux) * (event upper limit) / n_sig

where n_sig is the number of signal events that remain in my final data set and E^2 * test flux = 10^-6.
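Written out as a trivial helper (the function name and default are mine, not from the talk):

def flux_limit(event_upper_limit, n_sig, e2_test_flux=1e-6):
    """E^2 flux limit: scale the test flux by (event upper limit / expected signal events)."""
    return e2_test_flux * event_upper_limit / n_sig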

How Gary's Program Works

I give it an input matrix. It has 3 columns and as many rows as I want to give it. The sum of the values in the 3rd column (the weights) should be 1.0.

Background events in final sample (Nch>=100)   Signal (detector) efficiency   Weight given to each scenario
b1                                             e1                             w1 = 1/N
b2                                             e2                             w2 = 1/N
...                                            ...                            ...
bN                                             eN                             wN = 1/N

The averaged pdf is

P(x | s+b) = (1/N) * sum_{i=1}^{N} (s*e_i + b_i)^x * exp[-(s*e_i + b_i)] / x!

The program computes the pdf for discrete values of n_observed at every value of the signal strength in steps of 0.01:

s (signal) = 0.00, n_obs = 0, 1, 2, 3, ..., 40
s (signal) = 0.01, n_obs = 0, 1, 2, 3, ..., 40
s (signal) = 0.02, n_obs = 0, 1, 2, 3, ..., 40
s (signal) = 0.03, n_obs = 0, 1, 2, 3, ..., 40
...
s (signal) = 15.0, n_obs = 0, 1, 2, 3, ..., 40
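A minimal sketch of that averaging, assuming the input matrix is a list of (background, efficiency, weight) rows; this mimics the description above rather than reproducing Gary's actual program.

import numpy as np
from scipy.stats import poisson

def averaged_pmf(s, scenarios, x_max=40):
    """PDF of n_observed at signal strength s, averaged over (b_i, e_i, w_i) rows."""
    x = np.arange(x_max + 1)
    pmf = np.zeros(x_max + 1)
    for b_i, e_i, w_i in scenarios:
        pmf += w_i * poisson.pmf(x, s * e_i + b_i)
    return pmf

# Example input matrix: three equally weighted background scenarios, nominal efficiency.
scenarios = [(9.1, 1.0, 1/3), (7.8, 1.0, 1/3), (5.7, 1.0, 1/3)]
pdf_grid = {round(s, 2): averaged_pmf(s, scenarios) for s in np.arange(0.0, 15.0 + 1e-9, 0.01)}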

Consider a single slice of the Feldman-Cousins confidence-interval plot where the signal strength is 6.5. The pdf is calculated for each line that I input into the program, and the average pdf is the output. For a given value of the signal, the pdf gets fatter when you allow the detector efficiency e_i to take on values different from 1.0. Here you see how the pdf varies when only the signal efficiency is varied (the input background is held constant).

In order to calculate the new limit with uncertainties, you must consider what happens when both the signal and background are allowed to vary.

IF we believe that we are normalizing to fix the flawed atmospheric neutrino theory, we must come up with an error on the detector efficiency (for instance, 10% or 30%). For any given background prediction, we must then consider what would happen if the detector efficiency were 90%, 100% or 110% (assuming a 10% error). This leads to many possible combinations of the signal and the background that we can input into the program.
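One way to enumerate those combinations (a sketch; the equal weighting of all rows follows the 1/9 weights shown later in the talk, and the helper name is mine):

import itertools

def build_matrix(backgrounds, eff_error):
    """All (background, efficiency, weight) rows for a given fractional efficiency error."""
    effs = [1.0 - eff_error, 1.0, 1.0 + eff_error]
    combos = list(itertools.product(backgrounds, effs))
    w = 1.0 / len(combos)
    return [(b, e, w) for b, e in combos]

# Three background predictions and a 10% efficiency error -> 9 rows, each with weight 1/9.
matrix_10pct = build_matrix([9.1, 7.8, 5.7], eff_error=0.10)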

Assuming an additional 30% detector efficiency error in any direction (no signal normalization based on the atmospheric neutrinos). [Table of program inputs: column 1 = background, column 2 = signal efficiency, column 3 = weight.]

event upper limit for 6 observed

The limit is

E^2 * flux < (E^2 * test flux) * (event upper limit) / n_sig

where n_sig is the number of signal events that remain in my final data set and E^2 * test flux = 10^-6. Plugging in:

E^2 * flux < 10^-6 * 5.86 / 66.7
E^2 * flux < 8.8 * 10^-8

This assumes a 30% error in any direction on the detector (signal) efficiency.
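A quick standalone check of that arithmetic:

# (E^2 * test flux) * (event upper limit) / n_sig
print(1e-6 * 5.86 / 66.7)   # ~8.8e-8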

If you would like to consider a 10% error on the efficiency (instead of 30%), follow the same procedure.

Assuming an additional 10% detector efficiency error in any direction (no signal normalization based on the atmospheric neutrinos). [Table of program inputs: column 1 = background, column 2 = signal efficiency, column 3 = weight.]

event upper limit for 6 observed

The limit is

E^2 * flux < (E^2 * test flux) * (event upper limit) / n_sig

where n_sig is the number of signal events that remain in my final data set and E^2 * test flux = 10^-6. Plugging in:

E^2 * flux < 10^-6 * 5.43 / 66.7
E^2 * flux < 8.1 * 10^-8

This assumes a 10% error in any direction on the detector (signal) efficiency.

Here you see the Nchannel distribution for the data compared to the normal MC, and then compared to the MC that was cut based on the 4-parameter MC shifts. Note: shifting the MC in those 4 parameters did not remove or add enough events to distort the Nchannel distribution (which is a good thing).

Next I will show what each distribution looks like when the modified cuts are applied. If it is an MC parameter that was shifted, I plot the shifted MC against the (unshifted) data. These are N-1 plots.

Likelihood ratio (L.R.) distribution, with the L.R. shifted by 1.01 in MC (N-1 plot).
DATA: ndirc > 13 && ldirb > 170 && abs(smootallphit) < 0.250 && med_resol < 4.0 && Zenith > 100
MC: 1.1*ndirc > 13 && ldirb > 170 && abs(1.08*smootallphit) < 0.250 && 1.05*med_resol < 4.0 && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]
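As a sketch, the data and MC selections for this N-1 plot can be written as boolean masks (field names assumed, as before); note that the shift factors appear only on the MC side:

import numpy as np

def data_mask(ev):
    """Data-side N-1 selection for the likelihood-ratio plot."""
    return ((ev["ndirc"] > 13) & (ev["ldirb"] > 170) &
            (np.abs(ev["smootallphit"]) < 0.250) &
            (ev["med_resol"] < 4.0) & (ev["zenith"] > 100))

def mc_mask(ev):
    """Same cuts, but with the MC observables rescaled by the shift factors."""
    return ((1.1 * ev["ndirc"] > 13) & (ev["ldirb"] > 170) &
            (np.abs(1.08 * ev["smootallphit"]) < 0.250) &
            (1.05 * ev["med_resol"] < 4.0) & (ev["zenith"] > 100))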

Ldirb (track length) distribution; no shift on Ldirb in MC (N-1 plot).
DATA: ndirc > 13 && abs(smootallphit) < 0.250 && med_resol < 4.0 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: 1.1*ndirc > 13 && abs(1.08*smootallphit) < 0.250 && 1.05*med_resol < 4.0 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Ndirc distribution, with Ndirc shifted by 1.1 in MC (N-1 plot). Since ndirc is discrete, multiplying the MC by 1.1 causes a binning effect.
DATA: ldirb > 170 && abs(smootallphit) < 0.250 && med_resol < 4.0 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: ldirb > 170 && abs(1.08*smootallphit) < 0.250 && 1.05*med_resol < 4.0 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Smootallphit (smoothness) distribution, with smootallphit shifted by 1.08 in MC (N-1 plot).
DATA: ndirc > 13 && ldirb > 170 && med_resol < 4.0 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: 1.1*ndirc > 13 && ldirb > 170 && 1.05*med_resol < 4.0 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Median resolution distribution, with med_resol shifted by 1.05 in MC (N-1 plot).
DATA: ndirc > 13 && ldirb > 170 && abs(smootallphit) < 0.250 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: 1.1*ndirc > 13 && ldirb > 170 && abs(1.08*smootallphit) < 0.250 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Zenith angle distribution; zenith is not shifted in the MC.
DATA: ndirc > 13 && ldirb > 170 && abs(smootallphit) < 0.250 && med_resol < 4.0 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: 1.1*ndirc > 13 && ldirb > 170 && abs(1.08*smootallphit) < 0.250 && 1.05*med_resol < 4.0 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Nch distribution; Nch is not shifted in the MC.
DATA: ndirc > 13 && ldirb > 170 && abs(smootallphit) < 0.250 && med_resol < 4.0 && 2D L.R. vs. Zenith cut && Zenith > 100
MC: 1.1*ndirc > 13 && ldirb > 170 && abs(1.08*smootallphit) < 0.250 && 1.05*med_resol < 4.0 && 2D (1.01*L.R.) vs. Zenith cut && Zenith > 100
[Panels: linear and log scale, Bartol and Honda atmospheric fluxes.]

Bartol 2003 with modified OM sensitivity (red = 130% OM sensitivity, black = 100%, blue = 70%).

OM sensitivity   Nch<100   Nch>=100
70%              71.2      1.5
100%             180.5     2.9
130%             381.2     9.5

IF we believe that normalizing based on the atmospheric neutrino flux gives us a correction on the detector efficiency, then we must normalize the signal.

In general, there are three situations we can compare (columns: background, signal efficiency, weight).

1) No errors:
7.8   1.0   1.0

2) 30% errors:
9.1   1.3   1/9
9.1   1.0   1/9
9.1   0.7   1/9
7.8   1.3   1/9
7.8   1.0   1/9
7.8   0.7   1/9
5.7   1.3   1/9
5.7   1.0   1/9
5.7   0.7   1/9

3) Anti-correlated errors, due to normalizing the signal with the same factor as the atmospheric neutrino MC:
9.1   0.76   1/3
7.8   0.96   1/3
5.7   1.28   1/3
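Expressed as input matrices for the program (a sketch using the numbers above; build_matrix is the helper sketched earlier):

# 1) No errors: a single nominal row.
no_errors = [(7.8, 1.0, 1.0)]

# 2) 30% errors, uncorrelated: every (background, efficiency) pair, each with weight 1/9.
thirty_pct = build_matrix([9.1, 7.8, 5.7], eff_error=0.30)

# 3) Anti-correlated errors: normalizing the signal with the same factor as the
#    atmospheric MC ties each background prediction to one efficiency (weight 1/3 each).
anti_correlated = [(9.1, 0.76, 1/3), (7.8, 0.96, 1/3), (5.7, 1.28, 1/3)]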

[Plots for signal strengths sig = 0.1, 1.0, 6.5 and 10 follow.]