
Level-1 Trigger
Pam Klabbers, University of Wisconsin - Madison

2016/17 CMS Level-1 Trigger
[System diagram. * = install in 2017. Not shown: fiber-optic splitting and patch panels.]

Operations 2016
- Started the year with most of the legacy trigger in for one MWGR, then commissioned a completely new trigger system
  - All µTCA format, mostly optical interconnections
  - In expert mode for most of the first half of 2016
- Tools for diagnosis evolved through the year
  - Shifters' and L1 DOCs' tools changed
- Data Quality Monitoring
  - Online DQM all new for the 2016 subsystems
  - Similar plots, but not always easy to find
  - Deployed a selection to the main shifter panel
- Hardware enhancements (highlights)
  - µGT: now operating in multi-board mode, up to 512 algorithms possible
  - Calo Layer-1/Layer-2: redundant-node functionality, quick switchover if a link goes down
  - Muons: commissioned; superprimitives (RPC+DT); assorted improvements

Online Software 2016
- New SWATCH library providing common control software for the L1T upgrade
- New DB schema
- Issues addressed quickly: very frequent updates during initial deployment
- Additions throughout the year, e.g. rates monitoring (sketched below)
- Updated TS JavaScript library: dropped Dojo, fully moved to Polymer
- Adapted central tools (L1 Page, L1CE) to include the new sub-systems
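The rates monitoring addition is, at its core, a counter-polling loop: read cumulative per-algorithm counters, difference them, and publish rates. A minimal sketch, assuming the control software exposes cumulative counters; read_counters(), the algorithm names, and the polling cadence are placeholders, not the real SWATCH interface.

```python
import itertools
import time

# Toy stand-in for the hardware interface: in the real system the SWATCH
# cells expose cumulative per-algorithm trigger counters; here a counter
# simply grows linearly so the script runs stand-alone.
_tick = itertools.count(1)

def read_counters():
    n = next(_tick)
    return {"L1_SingleMu22": 900 * n, "L1_SingleEG40": 350 * n}

def monitor_rates(n_polls=3, poll_interval=1.0):
    """Poll cumulative counters and convert the deltas to rates in Hz.

    In the real system the natural granularity is one luminosity
    section (~23 s); 1 s here just keeps the demo short.
    """
    previous = read_counters()
    for _ in range(n_polls):
        time.sleep(poll_interval)
        current = read_counters()
        rates = {algo: (current[algo] - previous[algo]) / poll_interval
                 for algo in current}
        print(rates)  # in reality: publish to the monitoring panels / DB
        previous = current

if __name__ == "__main__":
    monitor_rates()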

Menu in 2016
- Tuned with rates taken from luminosity sections (LS) at the expected pileup, or extrapolated from fits of rate vs. pileup (see the sketch below)
- Feedback from L1 & HLT used to adjust the balance of triggers
(Plot: Z. Wu)
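For the extrapolation, a minimal sketch assuming a quadratic rate-vs-pileup dependence (roughly linear for most seeds, with a quadratic term from pileup combinatorics); the data points and the target pileup of 50 are made up for illustration.

```python
import numpy as np

pu   = np.array([20, 25, 30, 35, 40])        # mean pileup per LS
rate = np.array([4.1, 5.3, 6.6, 8.1, 9.8])   # kHz, hypothetical algo rate

coeffs = np.polyfit(pu, rate, deg=2)          # fit rate = a*PU^2 + b*PU + c
predicted = np.polyval(coeffs, 50)            # extrapolate to PU = 50
print(f"expected rate at PU=50: {predicted:.1f} kHz")
```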

Overall Official* Downtime
- pp physics: 8% of all downtime, 126 pb-1 of luminosity lost (incl. 1.5 pb-1 from HLT)
  - Compare 2012: 149 pb-1 lost due to the trigger, 14% of all downtime, and that with ~20 fb-1 delivered in 2012 vs. ~40 fb-1 in 2016!
- pPb physics: 12% of all downtime, the bulk due to reconfiguration
*Category TRIGGER in WBM

Major Downtimes
- pp physics
  - Calo Layer-2 software rollback: 81 pb-1
  - EMTF PC crash: 17.5 pb-1
- pPb physics
  - Two reconfigurations for the vdM scan: 330 mb-1
  - New L1/HLT mode for reducing output to 6.5 GB/s: 67 mb-1

TRIG_DAQ Downtimes
- But don't get too comfortable just yet: dig a little deeper and some issues are classified under another category, DAQ -> TRIG_DAQ
- Without filtering for HLT or unrelated issues, this was ~78 pb-1. The largest entries:
  - 30.6: 19.7 pb-1 – trouble configuring (µGMT, OMTF); just not a good start of fill for CMS, and other sub-detector problems may be lumped into this
  - 22.7: 19.4 pb-1 – EMTF issues (fix for high-pT inefficiency), DAQ readout problem, crate power loss
  - 24.7: 17.7 pb-1 – OMTF timeouts; had to call an expert for help

Operational Plans 2017: µGT (M. Jeitler et al.)
- Firmware
  - Invariant mass, transverse mass, overlap removal, additional objects (see the worked example below)
  - Suppress calibration triggers: delayed due to inconsistent BGo definitions between AMC13 and MP7
  - Perhaps double-b-tagging: 2 muons in a jet
  - Allow use of only the leading objects of a collection, to save resources
- Software
  - Streamline, e.g. easier use of the BX mask
  - Keep the Trigger Menu Editor up to date with firmware capabilities
- Improve monitoring: compare and publish in DQM
  - Calo Layer-2 and µGMT output vs. µGT input
  - AMC13 global FinOR "out" with TCDS FinOR "in"
- Hardware
  - 3rd MP7 installed (possibly up to 6, depending on available inputs)
  - Additional FinOR AMC502 installed
  - Additional patch panels
  - µGT test crate with some fiber inputs
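For context, the invariant mass of two trigger objects can be built entirely from quantities the trigger already has (pT, η, φ). A minimal sketch in the massless approximation; the real firmware works on integer hardware scales with look-up tables, so the floating-point function below is illustrative only.

```python
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two (massless) trigger objects:
    M^2 = 2 * pt1 * pt2 * (cosh(d_eta) - cos(d_phi))."""
    deta = eta1 - eta2
    dphi = phi1 - phi2
    m2 = 2.0 * pt1 * pt2 * (math.cosh(deta) - math.cos(dphi))
    return math.sqrt(max(m2, 0.0))

# e.g. two muons, pT = 25 and 20 GeV, roughly back-to-back in phi
print(invariant_mass(25, 0.3, 0.1, 20, -0.4, 0.1 + math.pi))
```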

Operational Plans 2017: TwinMux, BMTF, OMTF
- TwinMux
  - Integration of HO: develop the algorithm and emulator, implement firmware; HO fibers to the RPC patch panel/splitters
  - DAQ improvements: record 2 output segments, bit denoting chamber half
  - Spy buffer for the DT input
  - RPC: tune RPC hit timing (on the RPC link boards), optimize cluster size/hit efficiency
  - Achieve 100% data-to-emulator agreement (see the comparison sketch below)
- BMTF
  - New ETTF with finer η resolution, running at 160 MHz
  - BDT pT assignment at 160 MHz
  - These will reduce latency by 2 BX and give better η resolution; expect a lower µ rate at the same efficiency
- OMTF
  - DAQ already working during the last pPb runs, currently validating
  - Get to 100% (currently 99.45%) data-to-emulator agreement
  - Algorithm performance improvement
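The data-to-emulator agreement figures come from running a bitwise emulator on the same inputs as the hardware and comparing the candidates field by field. A minimal sketch with a made-up candidate format; the real data formats and unpacking are much richer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MuonCandidate:
    # Integer hardware scales, as in the firmware; field names are
    # illustrative, not the real record layout.
    pt_hw: int
    eta_hw: int
    phi_hw: int
    quality: int

def agreement(data_events, emul_events):
    """Fraction of events where the hardware and emulator candidate
    lists match exactly (bitwise agreement). Assumes the two lists
    are aligned event by event and non-empty."""
    matches = sum(d == e for d, e in zip(data_events, emul_events))
    return matches / len(data_events)

# Usage: per-event lists of candidates from the unpacker and emulator
hw   = [[MuonCandidate(40, 12, -3, 12)], [MuonCandidate(22, -5, 7, 8)]]
emul = [[MuonCandidate(40, 12, -3, 12)], [MuonCandidate(21, -5, 7, 8)]]
print(agreement(hw, emul))  # 0.5: second event disagrees in pt_hw
```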

Operational Plans 2017: EMTF, CPPF, µGMT
- EMTF
  - Training a new MVA for pT assignment: improve performance, optimize the use of RPCs
  - New firmware with RPC inclusion: new DAQ, streamline existing logic
  - Continuing CPPF development: transmission tests in 904, iron out the details
- CPPF
  - Tests ongoing in 904
  - Test RPC receiving at P5 (end of January)
  - Install in P5 in the EMTF sorter crate, RPC fibers reconnected
  - SWATCH development and integration with the central cell
- µGMT
  - Zero suppression
  - Extrapolation of φ to the IP (see the sketch below)
  - Isolation using 5x5 calo sums: studies ongoing, final solution not clear (e.g. DEMUX on the calo side)
  - Ghost-busting/double-muon performance
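The φ extrapolation corrects for the bend a muon accumulates in the magnetic field between the interaction point and the muon system, which scales roughly as charge/pT. A minimal sketch; the constant k is a placeholder rather than a real calibration, and the actual µGMT does this with integer look-up tables rather than floating-point math.

```python
import math

def phi_at_vertex(phi_station, charge, pt, k=1.0):
    """Extrapolate a muon's phi, measured at the muon stations, back
    to the interaction point. d_phi ~ k * charge / pT; the result is
    wrapped into [-pi, pi]."""
    dphi = k * charge / pt
    return math.remainder(phi_station - dphi, 2 * math.pi)

# e.g. a positive 20 GeV muon measured at phi = 1.0 rad
print(phi_at_vertex(1.0, +1, 20.0))
```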

Operational Plans 2017: Calo Layer-1 and Layer-2
- Calo Layer-1
  - Minor improvements to error handling in SWATCH
  - Updates to calibrations/scale factors
  - Work with ECAL TP experts on two issues: link errors at LHC ramp, and a single-tower error
  - HCAL fiber mapping change: change 72 fibers at the Layer-1 patch panel
- Calo Layer-2 (G. Iles et al.)
  - Firmware and configuration changes:
    - Firmware fix for HT saturation (see the sketch below)
    - Possible fix for saturation in other objects
    - Possible changes to MET pending pileup-dependence studies
    - Possible addition of fat jets
    - Possible isolation for muons
  - Operational changes:
    - Updated DQM, including the emulator
    - Improved firmware validation workflow
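The HT saturation issue is the classic fixed-width-arithmetic failure mode: without an explicit clamp, an energy sum that overflows its output field wraps around and reads out as a small value. A toy illustration; the 12-bit width is an assumption for the example, not the real Layer-2 data format.

```python
def saturating_add(a, b, width=12):
    """Clamp the sum to the maximum representable value of the field."""
    max_val = (1 << width) - 1        # e.g. 4095 for a 12-bit HT field
    return min(a + b, max_val)

def wrapping_add(a, b, width=12):
    """The buggy behavior: overflow silently wraps around."""
    return (a + b) & ((1 << width) - 1)

print(saturating_add(3000, 2000))  # 4095: reads out as "saturated"
print(wrapping_add(3000, 2000))    # 904:  silently wrong, looks like low HT
```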

DQM and Online Software 2017
- More general and system-level improvements, faster updates
- WBM: look into improvements for L1/HLT synchronicity
- Move to CC7 and XDAQ 14: start testing subsystems, TS, and SWATCH by mid-January
- New Configuration Editor and updated database schema
  - Aim for just after 2017 MWGR 1 (mid-February)
  - Tools for XML editing
  - Improve the current DB schema: tracking, duplication, deprecation
  - Restore Run 1 L1CE functionality
  - New modular architecture
- L1 Page (shifter interface) update
  - Also aiming for MWGR 1
  - Modern technologies and design, cleaner user interface
  - Better alerts, subsystem status, and shifter reminders
- Manpower is decreasing this year; entering a consolidation phase

L1CE

Summary
- Muon updates, including hardware changes
  - Install CPPF: RPC data to EMTF will improve performance
  - Second DeMux for muon isolation: fibers installed, under study
  - HO to TwinMux: fibers going in, under study
- Numerous updates to the µGT planned
- Fewer updates to Calo Layer-2 and Layer-1
  - HCAL latency increase (currently 2-3 BX)
- 2017 needs to be a consolidation year for L1
  - Need to be stricter and stick to workflows for updates and improvements (including menus)
  - DQM: get updates online more quickly
  - WBM: cannot change data formats as in the past
  - Software: additional safety checks, monitoring, alarms
- Should aim to make the systems less expert-dependent in 2017

Backup

EMTF with RPC (CPPF)

L1 Shifts and DOC Needs for 2017
- DOC 1
  - Three MWGRs: 3 short weeks (3 days, no weekends)
  - Weeks 13-50: 38 weeks
  - Full list of volunteers; allocating weeks now
- DOC 2
  - See Takashi's talk: monitor rates as a function of PU, prompt certification in 24 h using express streams
- DOC 3 (previously called trigger offline shifts)
  - Monitors detector performance, fills in the RR, release validation with RelVal
- DQM shifts
  - Full for the first half of 2017 (oversubscribed)
  - Next call ~March 2017

Lessons Learned 2016 & Wish List...
- Updates and configuration changes
  - Even "small" changes caused unexpected behavior, not always obvious at first glance
    - Test vectors/patterns should be enhanced
    - Do tests at the end of a fill before final deployment
  - Some changes were not announced
    - Experts need to stay in touch with the L1 DOCs and the Trigger Technical Coordinator
  - Coupling changes is not ideal
    - e.g. the new Layer-1 corrections improved tau and e/gamma, but caused pileup-dependent MET behavior
  - Careful with keys (L1 DOCs and experts)
    - Wrong key used for an update, typos in XML, old XML...
    - Need better ways to spot problems ("diff", non-XML view; see the sketch below); the L1 Online SW group is thinking about this
  - Be ready to roll back (any change) in case of problems
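A configuration "diff" could be as simple as flattening each XML key to a {path: value} map and reporting the differences, instead of eyeballing raw XML. A minimal sketch; real L1 configurations live in the online DB and are far larger, and the file names below are examples only.

```python
import xml.etree.ElementTree as ET

def flatten(elem, prefix=""):
    """Flatten an XML tree to {path: value}. Repeated sibling tags
    would need an index to avoid key collisions; omitted for brevity."""
    path = f"{prefix}/{elem.tag}"
    out = {}
    if elem.text and elem.text.strip():
        out[path] = elem.text.strip()
    for name, value in elem.attrib.items():
        out[f"{path}@{name}"] = value
    for child in elem:
        out.update(flatten(child, path))
    return out

def diff_configs(file_a, file_b):
    a = flatten(ET.parse(file_a).getroot())
    b = flatten(ET.parse(file_b).getroot())
    for path in sorted(a.keys() | b.keys()):
        if a.get(path) != b.get(path):
            print(f"{path}: {a.get(path, '<absent>')} -> {b.get(path, '<absent>')}")

# diff_configs("ugt_key_v1.xml", "ugt_key_v2.xml")
```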

Lessons Learned 2016 & Wish List... (continued)
- Updates and configuration changes (continued)
  - Menus, including prescale tables, algo mask, BX mask...
  - Workflow is well defined; started an L1 DOC checklist
  - Lots to update when the menu changes, which can be confusing
  - Mostly smooth, but some issues (a consistency check like the one sketched below would catch some of these):
    - A "compatible" menu had a bit missing: triggers added, no prescales
    - Menus tested without warning, causing errors in HLT, etc.
    - Communication is key!
- Shifter
  - Timing issues not noticed
    - Timing plots are now in the L1T Quick Collection (trigger shifter view)
    - Additional emphasis in the tutorial
  - Holes in detectors not noticed
    - More plots in the Quick Collection; L1T groups should use the main L1T DQM summary
    - Also more emphasis in the tutorial
  - Rates wish: µGMT input from the TFs; could use a more "generic" panel
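A minimal sketch of such a menu/prescale consistency check, assuming the menu is a set of algorithm names and the prescale table maps each algorithm to one prescale per column; the names and numbers are illustrative. It catches the "triggers added, no prescales" failure mode directly.

```python
def check_prescales(menu_algos, prescale_table, n_columns):
    """Return a list of human-readable inconsistencies between a menu
    and its prescale table."""
    problems = []
    for algo in sorted(menu_algos):
        if algo not in prescale_table:
            problems.append(f"{algo}: missing from prescale table")
        elif len(prescale_table[algo]) != n_columns:
            problems.append(f"{algo}: {len(prescale_table[algo])} columns, "
                            f"expected {n_columns}")
    for algo in sorted(prescale_table):
        if algo not in menu_algos:
            problems.append(f"{algo}: prescales for an algo not in the menu")
    return problems

menu = {"L1_SingleMu22", "L1_SingleEG40", "L1_HTT300"}
table = {"L1_SingleMu22": [1, 1, 2], "L1_SingleEG40": [1, 1]}
print(check_prescales(menu, table, n_columns=3))
```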

Lessons Learned 2016 & Wish List... (continued)
- Shifter (continued)
  - Wrong prescale column: could the µGT preserve the column between runs?
  - Expert contacts: a few subsystems have only a list of names; would a generic number be possible?
- Shifters in general
  - Selection is more stringent this year
  - The trainer is a bit burnt out
  - Maybe migrate some training to sir.cern.ch; ATLAS (right) has already done this
    - Advantage: quizzes. Disadvantage: no personal interaction
- L1 DOC
  - Very difficult to fill for the first part of the year
  - Changes are not always known to the L1 DOC
    - Every change needs to go through the DOC to the RC before action
    - If more urgent, the DOC calls the RFMs to get approval
  - Do not assume the DOC knows what tools are available!
  - Playing the "telephone" game with DOCs doesn't work: write it down!!!

L1/HLT Prescales (from HLT)
- We often had problems with the set of L1 and HLT columns
- The procedure involves 4 players: L1 DOC, HLT DOC, TSG STORM/STEAM group (offline), L1 DPG
- The regular way of proceeding:
  1. The L1 DPG proposes a set of L1 prescales and columns
  2. TSG/STEAM elaborate on those, revise and propose modifications, and compile the HLT prescales
  3. STORM implements it in ConfDB and puts it in the offline menu, i.e. ready for the next menu
  4. FOG applies it online for HLT and passes the Google Doc with prescales to the L1 DOC
- As the L1 and HLT DOCs can make changes on the fly, many problems arise when these changes are not communicated back, for example to STORM
- We need to think about possible improvements to the workflow to prevent these kinds of mistakes from happening