1
Discussion of Various Issues w.r.t. the Photons + MET Analysis and its GGM Interpretation
Bruce Schumm, SCIPP/UCSC
Editorial Review Meeting, 01 April 2011
*) Final photon ID approach and results (mostly isolation)
*) Pre-existing limits on the neutralino mass, i.e., why we need only worry about masses above ~150 GeV
*) Comparison of the CMS and ATLAS limits, with discussion
*) Status of the 3D grid with squarks (GGM parameter space)
*) Model-independent limits: what we might quote and why
2
1) Finishing Photon ID Studies ("Fudge Factors", Isolation)
Fudge Factors
A comparison of the "fudge factor" corrections between the UED and GGM analysts revealed a bug in the GGM code; it is now corrected and the agreement is good. The relative changes to the efficiency are in the 0-25% range, depending on photon E_T and |η|.
The systematics approach has changed: we were taking errors directly from the W+gamma note; we now extract the value via a weighted average of the information in the Direct Photon note. Still a relatively small systematic.
3
Isolation
Here we have completely re-tooled, due to disquietude about using electron shower shapes to guesstimate how often isolated photons fail the isolation cut EtCone20/E_T < 0.1.
We follow the procedure set forth in the Direct Photon note, but for our specific isolation requirement…
4
[Plots: EtCone20 distributions for tight and not-tight photons] Compare the difference distributions to MC photons…
5
Example: photons with 40 < E_T < 60 GeV and |η| < 0.6 [plot: EtCone20 for data and for MC isolated photons, with the cut at 0.1 indicated].
Scale factors are derived from the ratio of the means of the data and MC distributions.
For 24 bins in E_T and η, the scale factor varies between 1.25 and 1.79.
Apply a global scale factor of 1.5 ± 0.2 (all E_T and η) to EtCone20 (in MC) to estimate signal efficiencies.
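As an illustration, here is a minimal sketch (not the analysis code) of the procedure just described: take the data/MC scale factor from the ratio of the means of the EtCone20 distributions in one (E_T, η) bin, then inflate the MC isolation by the global factor of 1.5 ± 0.2 and re-evaluate the efficiency of the EtCone20/E_T < 0.1 cut. All arrays below are toys, not analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
et_mc      = rng.uniform(40.0, 60.0, 10_000)    # toy photon E_T (GeV), one bin
etcone_mc  = rng.exponential(1.2, 10_000)       # toy MC EtCone20 (GeV)
etcone_dat = rng.exponential(1.8, 10_000)       # toy data EtCone20 (GeV)

# Per-bin scale factor from the ratio of the means (1.25-1.79 in the note)
sf = etcone_dat.mean() / etcone_mc.mean()
print(f"per-bin scale factor: {sf:.2f}")

# Signal isolation efficiency for the global scale factor and its variations
for scale in (1.3, 1.5, 1.7):                   # 1.5 +/- 0.2
    eff = np.mean(scale * etcone_mc / et_mc < 0.1)
    print(f"scale {scale:.1f}: isolation efficiency {eff:.3f}")
```

The spread of the efficiency across the 1.3/1.5/1.7 variations is what would feed the isolation systematic.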
6
Combined (Fudge Factor and Isolation) Efficiency Changes
Dominated by the fudge factor bug fix. Relative differences are new-corrected divided by old-buggy (columns: 150, 200, 300, 400, 500, 600, 700, 800):
400: 0.985, 1.031, 1.058, 1.123
500: 0.988, 1.033, 1.082, 1.133, 1.172
600: 1.006, 1.116, 1.171
650: 1.016, 1.109, 1.179
700: 1.005, 1.136, 1.213
800: 1.052, 1.125, 1.273
For systematics, see the note. We know of no other systematics that we or others have suggested looking into.
7
2) Prior Limits for a Bino-like NLSP
In our two-photon + MET search, we (and CMS) are setting limits on the mass of a Bino-like χ̃₁⁰ NLSP.
In the context of minimal GMSB, D0 has set a limit of M(χ̃₁⁰) > ~175 GeV [Phys. Rev. Lett. 105 (2010) 221802].
Since this arises from an assumption of direct production of the Bino-like neutralino (see next slide), we have taken it to be a universal limit on M(χ̃₁⁰) that applies to GGM as well. Thus, we do not explore the portion of GGM parameter space for which M(χ̃₁⁰) < 150 GeV.
Concentrating on M(χ̃₁⁰) > 150 GeV gives us a more optimal search for the unexplored regions of GGM parameter space. In particular, compare our MET cut (125 GeV, optimized for M(χ̃₁⁰) > 150 GeV) to that of CMS (50 GeV).
8
Example: strong vs. EW production cross section vs. E_cm for minimal GMSB with M(χ̃₁⁰) = 150 GeV (M_gluino = 910 GeV). The EW cross section is ~1 fb at E_cm = 2 TeV and is all χ̃₁⁰χ̃₁⁰.
9
3) Comparison of ATLAS and CMS Limits (with OLD efficiency/acceptance!)
For M(χ̃₁⁰) = 150 GeV and M_squark = 1500 GeV, CMS excludes M_gluino < 570 GeV at 95% CL. Our limit for these parameters is 650 GeV. Why?
[Plot: ATLAS and CMS exclusion contours, with the CMS limit and our limit indicated]
10
First, note that the observed and expected limits are just about the same for both experiments; the difference is to be understood in terms of aspects of the analyses, not statistical vagaries. To begin with, our results appear to be more optimal (focus on higher neutralino masses?).
[Plot: photon E_T spectrum for M_gluino = 600 GeV, with M(χ̃₁⁰) = 150 and 580 GeV overlaid; we would want the MET spectrum, but it should have similar behavior]
Focusing on M(χ̃₁⁰) > 150 GeV allows for a stiffer MET cut:
11
[Plot: expected significance vs. MET cut, with the CMS and ATLAS cut values indicated]
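For intuition about what a scan like the one in this plot involves, here is a toy sketch, assuming stand-in ETmiss spectra and the simple s/√b significance (no systematics); none of the numbers below are analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
met_sig = rng.normal(220.0, 70.0, 50_000).clip(min=0)   # toy signal ETmiss (GeV)
met_bkg = rng.exponential(35.0, 500_000)                # toy background ETmiss (GeV)
n_sig_total, n_bkg_total = 8.0, 400.0                   # toy expected yields

for cut in (50, 75, 100, 125, 150):
    s = n_sig_total * np.mean(met_sig > cut)            # signal surviving the cut
    b = n_bkg_total * np.mean(met_bkg > cut)            # background surviving the cut
    print(f"ETmiss > {cut:3d} GeV: s = {s:5.2f}, b = {b:7.2f}, "
          f"s/sqrt(b) = {s / np.sqrt(b):5.2f}")
```

With a hard signal spectrum and a steeply falling background, the toy significance peaks at the stiffer cuts, which is the qualitative point of the slide.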
12
Some Numbers from George (cribbed from email):
"I have a simple Feldman-Cousins limit calculator. F-C doesn't take systematics into account. I get the following:
CMS: nobs = 1, nbkg = 1.2
ATL: nobs = 0, nbkg = 0.15
95% CL upper limits are: 3.96 (CMS) and 2.94 (ATL).
Taking the cross section as proportional to M^-4, I get M(ATL) = 1.08 * M(CMS).
Taking M(CMS) = 570 from Fig. 4 of the preprint, I get M(ATL) = 615 GeV.
OK, it's rough, but it gets you in the ballpark..."
Seems to explain roughly half the difference. Anything else?
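George's rescaling can be reproduced in a few lines. The upper limits and the M^-4 scaling are from his email; the arithmetic below just makes the fourth-root step explicit.

```python
# With sigma ~ M^-4, equal sensitivity means
# sigma(M_ATL) / sigma(M_CMS) = UL_ATL / UL_CMS, i.e.
# M_ATL = M_CMS * (UL_CMS / UL_ATL) ** (1/4).

ul_cms, ul_atl = 3.96, 2.94   # Feldman-Cousins 95% CL upper limits (events)
m_cms = 570.0                 # CMS gluino-mass limit in GeV (their Fig. 4)

f_stat = (ul_cms / ul_atl) ** 0.25
print(f"scale factor = {f_stat:.3f}")           # ~1.08
print(f"M(ATL) ~ {m_cms * f_stat:.0f} GeV")     # ~614 GeV
```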
13
Compare efficiencies for, e.g., M_gluino = 600, M(χ̃₁⁰) = 150, M_squark = 1500: ε_ATLAS = 18.1%, ε_CMS = 13.8%, so (ε_ATLAS / ε_CMS)^(1/4) = 1.07.
Incorporating this factor, the expected limit relative to CMS is (570) × (1.08) × (1.07) ≈ 660 GeV, close to our observed limit of 670 GeV. Note that CMS requires |η| < 1.4.
Also, from Jovan, further supporting an efficiency/acceptance difference:
"Hi Bruce, For the comparison with CMS, I made the following counts for our 600/300 point. Out of 1594 events that pass our selection before the MET cut:
705 have two unconverted photons
716 have one converted photon
172 have two converted photons
CMS doesn't do photon recovery, but we shouldn't overstate our case. They only veto if there's a track in the pixels. Tracks after the pixels are allowed. I didn't put in a radius-of-conversion requirement on my count above."
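The efficiency correction enters the same way as the statistical factor, since both rescale the effective cross-section reach that falls like M^-4; a quick check of the arithmetic on this slide:

```python
# Efficiencies at the 600/150/1500 point, from this slide
eff_atl, eff_cms = 0.181, 0.138

f_stat = (3.96 / 2.94) ** 0.25        # ~1.08, from George's F-C upper limits
f_eff  = (eff_atl / eff_cms) ** 0.25  # ~1.07, efficiency/acceptance factor

print(f"f_eff = {f_eff:.3f}")
# ~657 GeV as computed; ~660 GeV with the rounded factors quoted on the slide
print(f"expected limit ~ {570.0 * f_stat * f_eff:.0f} GeV")
```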
14
4) Status of 3D Grid Development
We have consulted with CMS to ensure we follow the same conventions:
Scripts from Shih et al., implemented in ISAJET
Universal scale for all squark soft masses (small splittings introduced by running)
We have obtained template ('SLHA') files from CMS as a basis.
We propose to start with a private production today to check things, then submit the official production next week. See the potential grid on the next page (not yet confirmed); a sketch of how the grid might be enumerated follows below.
Note that it is likely we will have little sensitivity at a 50 GeV bino mass, due to the high MET cut.
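A sketch of how the proposed 3D grid could be enumerated before feeding points to the ISAJET/SLHA production machinery. The mass lists follow the draft grid on the next slide; the card-writing step is schematic (the real parameter files come from the CMS SLHA templates mentioned above), and the helper name write_slha_card is hypothetical.

```python
m_bino_list   = [150.0, 500.0]                                   # and possibly 50 GeV
m_gluino_list = [500.0, 600.0, 700.0, 800.0, 1000.0, 1500.0, 2000.0]
m_squark_list = [400.0, 500.0, 600.0, 700.0, 800.0, 1000.0, 1500.0, 2000.0]

# Keep only points where the bino NLSP is lighter than the gluino
points = [(mb, mg, ms)
          for mb in m_bino_list
          for mg in m_gluino_list
          for ms in m_squark_list
          if mb < mg]
print(f"{len(points)} grid points to produce")

for m_bino, m_gluino, m_squark in points[:3]:                    # first few, as a check
    name = f"GGM_bino{m_bino:.0f}_gl{m_gluino:.0f}_sq{m_squark:.0f}"
    # write_slha_card(name, m1=m_bino, m3=m_gluino, msq=m_squark)  # schematic step
    print(name)
```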
15
[Table: proposed grid points in the (M_gluino, M_squark) plane, for M_bino = 150 and 500 (and 50?) GeV. M_squark rows: 400, 500, 600, 700, 800, 1000, 1500, 2000 GeV; M_gluino columns: 500, 600, 700, 800, 1000, 1500, 2000 GeV.]
16
5) What Should We Present as Our Result?
Clearly, we want to present limit contour plots.
In addition, CMS has presented a range of 95% upper limits on observable cross sections, under the assumption of always producing a neutralino NLSP that decays to photon + gravitino: 0.3 pb < σ_max < 1.1 pb, where the range arises from the different acceptance/efficiency across the GGM grid.
There is no reason why we can't do this; we would just have two ranges: one for GGM and one for UED (awkward in the abstract?).
Alternatively, we could quote a universal (GGM, UED, …) upper limit on the number of observed events, plus information (on the web, not in the paper) that would allow the reader to extract the σ_max range.
Here's what we might do if we wanted to go this route:
17
"Universal Limit" Presentation
In the paper, we quote the (Frequentist [?]) limit on the number of events arising from the following statistics:
N_obs = 0, N_bkgrnd = 0.089 ± 0.091,
leading to a Frequentist upper limit N_sig < [T.B.D. – about 2.3].
In order for someone to convert this to a (grid-point-dependent) limit on σ, they would need tables of:
the efficiency/acceptance as a function of grid point, and
the systematic error on efficiency/acceptance/luminosity as a function of grid point.
To make use of the latter, they would need to employ a Frequentist limit algorithm, and in fact make use of N_obs and N_bkgrnd. Thinking it through, then, it's not clear how much value the N_sig value would have.
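For concreteness, the central-value conversion a reader would perform with those tables is just σ_UL = N_sig / (ε·A·L); the sketch below ignores the systematics caveat above, and the efficiency and luminosity values are illustrative placeholders, not analysis numbers.

```python
n_sig_ul = 2.3          # frequentist 95% CL upper limit on signal events (T.B.D.)
eff_acc  = 0.18         # efficiency x acceptance at one grid point (placeholder)
lumi_pb  = 36.0         # integrated luminosity in pb^-1 (placeholder)

sigma_ul_pb = n_sig_ul / (eff_acc * lumi_pb)
print(f"sigma_UL ~ {sigma_ul_pb:.2f} pb")   # ~0.36 pb with these inputs
```

A proper limit with systematics folded in would instead rerun the Frequentist machinery with N_obs, N_bkgrnd, and the per-grid-point uncertainties, which is exactly the complication noted above.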
18
Summary of Ed Board Requests, Page 1/2
*) Comparison between the expected MC efficiency versus E_T for photons from the SUSY signal, for a benchmark choice of parameters, and for prompt photons from gamma-jets [see next page for a late-breaking update from Jovan]
*) The overall table of systematics needed for the paper is still missing from the SUSY note [next page for a comparison between the UED and GGM errors]
*) Combine the UED and main documents
*) Rundown of the differences in systematics between the current analysis and that of the published UED analysis
*) Put into the new support note the 1/R versus Lambda table (Table 9 from the old support note) and broaden the limit plot by adding these variations to the results
*) Explain clearly the features of the GGM model with respect to general, "perhaps academic", GMSB
20
Summary of Errors (UED / GGM):
Luminosity: 3.4%
Trigger: 0.7%
Shower Shapes: 0.9-3.1% (update?) / new FF uncertainties?
Isolation: 0.6% (update?) / 0.8-5.2%
E_T scale: negligible
OTX: 1.4%
Shift modeling & bkgd: 0.5% / 1.4%
Material: 2.1% / 2.3%
Bad Conversions: 1.7%
MET modeling: 0.9-9.1% / 0.8-10.9%
Pileup: 3.5% / 1.3%
PDFs: 8% / 12-21%
MC Stats: 1.1-2.8% / 2.4-3.6%
21
Summary of Ed Board Requests, Page 2/2
*) Explain clearly the hidden assumptions which make it somewhat more complicated to produce a cross-section limit than anticipated
*) Explain why SPS8 is no longer relevant (or why the LHC is not yet competitive), with some plots and numbers [late breaking – see the next slide for SPS8 limits]
*) Discuss the strategy for publishing the tables of acceptances used to derive possible cross-section limits. What do these limits end up being, and how much do they vary depending on the model details? This should include the UED cross-section limits as well.
*) Explain why m_Bino has to be above 150 GeV from the Tevatron EW Bino production limits. If this is indeed a strong limit in the context of GMSB models, then it should be stated in the support note and in the paper, with a reference.
*) Produce the ATLAS limit for |η| < 1.4, in addition to the 50 GeV versus 125 GeV ETmiss cut
22
SPS8 Limits
Wolfgang writes:
"Hi Bruce, We now have numbers for SPS8 in the note. Most of the information is in the limit section in the appendix. We need 210 pb^-1 to reach the D0 sensitivity. The current sensitivity is Lambda > 90 TeV, or M(χ̃₁⁰) > 124 GeV. This may clarify one of Daniel's comments. If you want, you can mention/discuss it in the meeting today. All is in SVN."
23
[Plot: production cross-section at 7 TeV]
Wino-like neutralino: |M2| << |μ| and |M2| < |M1|. Natural for the photon + lepton channel.
Not shown: Higgsino, which has no photonic decay.
TRIGGERS?