Throttling: Infrastructure, Dead Time, Monitoring

Presentation transcript:

Throttling: Infrastructure, Dead Time, Monitoring
Beat Jost, CERN EP

TFC Architecture

Problem Description (I)
The LHCb readout protocol is pure push-through, i.e. each data source sends its data without any knowledge of the buffer state at the destination:
- If a destination buffer runs short of space, data transfers have to be stopped.
- This is done by disabling the trigger (throttle).
Buffers exist at various levels:
- Level-0 pipeline
- Level-0 de-randomizing buffer
- Level-1 trigger buffers
- Level-1 pipeline
- Level-1 de-randomizing buffer
- FEM buffers, RUs, SFCs, farm CPUs
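The overflow risk of a pure push-through protocol can be illustrated with a toy model. The sketch below is not LHCb code: the buffer depth, thresholds and rates are made-up numbers, and the throttle simply gates the trigger while the modelled de-randomizer drains.

```python
# Minimal sketch (not LHCb code) of why push-through readout needs trigger
# throttling: the source never checks for buffer space, so the only way to
# protect a destination buffer is to stop generating triggers upstream.
from collections import deque

BUFFER_DEPTH = 16          # assumed de-randomizer depth (events)
THROTTLE_ON = 14           # assert throttle at this occupancy
THROTTLE_OFF = 8           # release throttle once drained to this occupancy

buffer = deque()
throttle = False

def on_trigger_accept(event):
    """A positive trigger decision pushes an event downstream unconditionally."""
    buffer.append(event)            # push-through: no room check at the source

def readout_one():
    """The destination drains one event per readout cycle (if any)."""
    if buffer:
        buffer.popleft()

def update_throttle():
    """A monitor converts buffer occupancy into a throttle signal."""
    global throttle
    if len(buffer) >= THROTTLE_ON:
        throttle = True             # disable the trigger upstream
    elif len(buffer) <= THROTTLE_OFF:
        throttle = False

# Toy run: triggers arrive faster than the readout drains, so the throttle
# must step in to keep the buffer from overflowing.
for bx in range(100):
    if not throttle and bx % 2 == 0:    # trigger accept every 2nd bunch crossing
        on_trigger_accept(bx)
    if bx % 3 == 0:                     # readout drains every 3rd cycle
        readout_one()
    update_throttle()
    assert len(buffer) <= BUFFER_DEPTH  # throttling prevents overflow
```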

Problem Description (II)
Central buffer control is no problem as long as all the buffers are filled and emptied synchronously (e.g. the L0 de-randomizer), or filled synchronously and emptied within a maximum latency (e.g. the L1 de-randomizer); assuming the maximum latency can, however, lead to unnecessary throttling.
De-centralized buffer control poses the problem of the sheer number of sources:
- ~1000 L1 electronics boards
- ~x00 FEM modules
- ~100 RU modules
- ~100 SFCs
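The remark about unnecessary throttling can be made concrete with a small model: a supervisor that emulates a buffer it cannot see must assume the worst-case (maximum) emptying latency, so it may assert the throttle while the real buffer still has room. All numbers below are illustrative assumptions, not LHCb parameters, and each event is simplistically assumed to occupy the buffer for a fixed time.

```python
# Sketch of pessimistic central buffer emulation versus reality.
MAX_LATENCY = 10      # assumed worst-case cycles for an event to leave the buffer
REAL_LATENCY = 6      # what the hardware actually achieves
DEPTH = 4             # assumed buffer depth in events

def occupancy(accept_times, now, latency):
    """Events still in the buffer at time `now`, given a fixed residence time."""
    return sum(1 for t in accept_times if t <= now < t + latency)

accepts = [0, 3, 5, 8]            # times of accepted triggers
for now in range(12):
    central = occupancy(accepts, now, MAX_LATENCY)   # supervisor's pessimistic model
    real = occupancy(accepts, now, REAL_LATENCY)     # actual occupancy
    # The central model throttles whenever *it* thinks the buffer is full,
    # even if the real buffer still has room: an unnecessary throttle.
    if central >= DEPTH and real < DEPTH:
        print(f"t={now}: unnecessary throttle (model={central}, real={real})")
```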

Proposal
- L0 pipeline: no problem, the L0 trigger has a fixed latency.
- L0 de-randomizers: monitored centrally by the Readout Supervisor, which throttles the L0 trigger internally.
- Level-1 buffers: handled by a timeout in the Level-1 trigger (maximum processing time).
- Level-1 de-randomizers: monitored locally, throttling the L1 trigger via a hardware signal to the RS.
- Level-1 trigger buffers: monitored locally, throttling the L0 trigger via a hardware signal to the RS.
- FEM/RU buffers and SFC (and CPU) buffers: monitored locally, throttling the L1 trigger via the controls system (software).
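For readability, the same proposal can be restated as a lookup table. The sketch below only summarizes the bullets above; the key names are shorthand and not real TFC configuration identifiers.

```python
# The throttling proposal as a plain lookup table (illustrative shorthand only).
THROTTLE_PROPOSAL = {
    "L0 pipeline":       {"monitor": "none (fixed L0 latency)",     "throttles": None, "path": None},
    "L0 de-randomizer":  {"monitor": "central (Readout Supervisor)", "throttles": "L0", "path": "internal to RS"},
    "L1 buffer":         {"monitor": "timeout in L1 trigger",        "throttles": None, "path": None},
    "L1 de-randomizer":  {"monitor": "local",                        "throttles": "L1", "path": "hardware signal to RS"},
    "L1 trigger buffer": {"monitor": "local",                        "throttles": "L0", "path": "hardware signal to RS"},
    "FEM/RU buffer":     {"monitor": "local",                        "throttles": "L1", "path": "controls system (software)"},
    "SFC/CPU buffer":    {"monitor": "local",                        "throttles": "L1", "path": "controls system (software)"},
}

# Example query: which buffers can stop the L1 trigger, and by which path?
for name, cfg in THROTTLE_PROPOSAL.items():
    if cfg["throttles"] == "L1":
        print(f"{name}: throttles L1 via {cfg['path']}")
```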

Throttling Support
Hardware throttles:
- The RS has inputs for throttle signals for the L0 and L1 triggers.
- The TFC switch has two reverse paths, for L0 and L1 throttles (don't forget partitioning!!).
- To cope with the many sources of throttle signals, a module performing basically a logical OR of the inputs will be needed (should be no problem).
Software throttles:
- The ECS interface to the RS will allow throttling of the L0 or L1 trigger (probably only throttling of the L1 trigger will be used).
Side remark: originally it was foreseen that all throttling would be done through the ECS system; its long and variable latency makes this difficult to implement (complicated algorithms).
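The throttle-OR module amounts to a wide OR with a per-port partition mask, so that a source outside the current partition cannot throttle it. The sketch below illustrates only that logic; the port count and masking scheme are assumptions, not the real module's interface.

```python
# Sketch of the throttle-OR idea: many throttle sources are reduced to one
# line per trigger level, taking only the ports of the current partition
# into account.
def throttle_or(inputs, partition_mask):
    """OR of all throttle inputs that are enabled for this partition.

    inputs         -- list of booleans, one per throttle source port
    partition_mask -- list of booleans, True if the port belongs to the partition
    """
    return any(i and m for i, m in zip(inputs, partition_mask))

# Example: a 16-port OR module where only ports 0-7 are in the running partition.
inputs = [False] * 16
inputs[12] = True                            # a source outside the partition asserts throttle
mask = [True] * 8 + [False] * 8
assert throttle_or(inputs, mask) is False    # ignored: wrong partition
inputs[3] = True                             # a source inside the partition asserts throttle
assert throttle_or(inputs, mask) is True     # the OR propagates it towards the RS
```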

Monitoring
- The RS will count the lost events (i.e. the number of events for which a positive trigger decision has been converted to a negative trigger decision) for L0 and L1 hardware and software throttles separately. In addition, the total number of events lost in the two cases (L0 and L1) will be counted.
- The RS will also count the number of BXs during L0 throttling.
- The RS will implement a programmable throttle timeout, after which an alarm is raised to the ECS.
- The TFC switch will register the time (both differential and integrated) for which the throttle is asserted, for each throttle source (history?).
- The Throttle ORs will give the same monitoring information for each port as the TFC switch.
- All this information will be available to the ECS for monitoring/alarming.
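The list above translates into a small set of counters plus one timeout. The sketch below groups them into a single structure purely as an illustration of what is counted; the timeout value is a placeholder and the ECS alarm is represented by a print statement rather than a real controls call.

```python
# Illustrative grouping of the RS throttle-monitoring quantities (not RS firmware).
from dataclasses import dataclass, field

@dataclass
class ThrottleMonitor:
    # lost events per (trigger level, throttle source): hardware or software
    lost_events: dict = field(default_factory=lambda: {
        ("L0", "hw"): 0, ("L0", "sw"): 0, ("L1", "hw"): 0, ("L1", "sw"): 0})
    bx_during_l0_throttle: int = 0
    consecutive_throttled_bx: int = 0
    throttle_timeout_bx: int = 100_000          # programmable; placeholder value

    def record_lost_event(self, level, source):
        """A positive trigger decision converted to negative by an active throttle."""
        self.lost_events[(level, source)] += 1

    def total_lost(self, level):
        """Total events lost at a given level, hardware and software throttles combined."""
        return sum(n for (lvl, _), n in self.lost_events.items() if lvl == level)

    def tick(self, l0_throttled):
        """Called every bunch crossing; raises an alarm to the ECS on timeout."""
        if l0_throttled:
            self.bx_during_l0_throttle += 1
            self.consecutive_throttled_bx += 1
            if self.consecutive_throttled_bx > self.throttle_timeout_bx:
                print("ALARM to ECS: throttle asserted longer than the programmed timeout")
        else:
            self.consecutive_throttled_bx = 0

mon = ThrottleMonitor()
mon.record_lost_event("L0", "hw")
mon.record_lost_event("L0", "sw")
print(mon.total_lost("L0"))     # -> 2
```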

Hardware Setup
[Diagram: sub-detector front-ends (SD 1, SD 2, with L1E boards) feeding Throttle OR modules and TTCtx modules, connected through the TFC Switch to the RS]
- There will be an independent "Throttle tree" for L0 and L1 per sub-detector (if needed).
- The throttle path has to follow the TTC path (partitioning).
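The constraint that the throttle path follows the TTC path can be read as: the partition's sub-detector assignment selects both where the TTC signals go and which Throttle ORs feed back to that partition's RS. The sub-detector names and mapping below are invented for the example.

```python
# Sketch of "the throttle tree follows the TTC tree" under partitioning.
TTC_TREE = {                      # sub-detector -> Throttle OR feeding the TFC switch
    "VELO": "throttle_or_velo",
    "RICH": "throttle_or_rich",
    "CALO": "throttle_or_calo",
    "MUON": "throttle_or_muon",
}

def throttle_sources_for(partition):
    """Reverse path of the TFC switch: only sub-detectors in the partition
    may throttle the RS driving that partition."""
    return [TTC_TREE[sd] for sd in partition if sd in TTC_TREE]

# A partition running only VELO and RICH sees only those two Throttle ORs.
print(throttle_sources_for(["VELO", "RICH"]))
# -> ['throttle_or_velo', 'throttle_or_rich']
```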

Issues
- Throttling philosophy agreed?
- Throttling architecture agreed?
- Sufficient monitoring?
- AOI?