DAQ upgrades at SLHC. S. Cittolin, CERN/CMS, 22/03/07. DAQ architecture, TDR-2003 DAQ evolution and upgrades.


DAQ baseline structure (SLHC: x 10)

Parameters:
- Collision rate: 40 MHz
- Level-1 maximum trigger rate: 100 kHz
- Average event size: ≈1 MByte
- Data production: ≈TByte/day

TDR design implementation:
- No. of In-Out units: 512
- Readout network bandwidth: ≈1 Terabit/s
- Event filter computing power: ≈10^6 SI95
- No. of PC motherboards: ≈thousands
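As a cross-check of these figures (not on the slide), the required readout network bandwidth is roughly the Level-1 rate times the average event size:

required bandwidth ≈ 100 kHz x 1 MByte ≈ 100 GByte/s ≈ 0.8 Tbit/s

which is consistent with the ≈1 Terabit/s quoted above; the SLHC factor of 10 would push this toward 10 Tbit/s.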

Evolution of DAQ technologies and structures (detector readout, event building, on-line processing, off-line data store):
- PS: minicomputers; custom readout design; first standard: CAMAC; kByte/s
- LEP: microprocessors; HEP standards (Fastbus); embedded CPUs, industry standards (VME); MByte/s
- LHC (200X): networks/Grids; IT commodities, PCs, clusters, Internet, Web, etc.; GByte/s

CMS/SC TriDAS Review AR/CR05. DAQ-TDR layout.

Architecture: distributed DAQ. An industry-based Readout Network with Tbit/s aggregate bandwidth between the readout and event-filter DAQ nodes (FED & FU) is achieved by two stages of switches interconnected by layers of intermediate data concentrators (FRL & RU & BU). Myrinet optical links driven by custom hardware are used as the FED Builder and to transport the data to the surface (D2S, 200 m). Gigabit Ethernet switches driven by standard PCs are used in the event-builder layer (DAQ slices and Terabit/s switches). The Filter Farm and Mass Storage are scalable and expandable clusters of services.

Equipment replacements (M+O): FU PCs every 3 years; mass storage every 4 years; DAQ PCs and networks every 5 years.
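As a conceptual illustration of this two-stage building (FED fragments merged into super-fragments at the RU stage, super-fragments assembled into complete events at a BU), here is a minimal sketch; the class and function names are illustrative assumptions, not the actual CMS event-builder software:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Illustrative two-stage event building: FED fragments are first merged
// per event into a "super-fragment" (FED Builder / RU stage), and the
// super-fragments from all RUs are then merged into a complete event
// (RU Builder / BU stage).

using Fragment = std::vector<uint8_t>;

// Stage 1: one RU concatenates the fragments of its FEDs for one event.
Fragment buildSuperFragment(const std::vector<Fragment>& fedFragments) {
    Fragment sf;
    for (const auto& f : fedFragments)
        sf.insert(sf.end(), f.begin(), f.end());
    return sf;
}

// Stage 2: a BU assembles the event once every RU has delivered its
// super-fragment for that event number.
class BuilderUnit {
public:
    explicit BuilderUnit(std::size_t nRUs) : nRUs_(nRUs) {}

    // Returns a pointer to the complete event when the last
    // super-fragment arrives, nullptr otherwise.
    std::vector<Fragment>* addSuperFragment(uint64_t eventNumber,
                                            Fragment sf) {
        auto& parts = pending_[eventNumber];
        parts.push_back(std::move(sf));
        return parts.size() == nRUs_ ? &parts : nullptr;
    }

private:
    std::size_t nRUs_;
    std::map<uint64_t, std::vector<Fragment>> pending_;
};
```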

LHC --> SLHC
- Same Level-1 trigger rate, 20 MHz crossing, higher occupancy
- A factor of 10 (or more) in readout bandwidth, processing and storage
- The factor of 10 will come with technology; the architecture has to exploit it
- New digitizers needed (and a new detector-DAQ interface)
- Need for more flexibility in handling the event data and operating the experiment

Computing and communication trends [trend figure: mass storage and bandwidth, with scale markers at 10 Mbit/in^2 and 1 Gbit/in^2, extrapolated to SLHC].

Industry Tbit/s (2004). [Photograph of the configuration used to test the performance and resiliency metrics of the Force10 TeraScale E-Series.] During the tests, the Force10 TeraScale E-Series demonstrated a throughput of 1 billion packets per second, making it the world's first Terabit switch/router, and zero-packet-loss hitless fail-over at Terabit speeds. The configuration required 96 Gigabit Ethernet and eight 10 Gigabit Ethernet ports from Ixia, in addition to cabling for 672 Gigabit Ethernet and 56 10 Gigabit Ethernet ports.

LHC-DAQ: trigger loop & data flow

Glossary: BU: Builder Unit; DCS: Detector Control System; EVB: Event Builder; EVM: Event Manager; FU: Filter Unit; FRL: Front-end Readout Link; FB: FED Builder; FES: Front End System; FEC: Front End Controller; FED: Front End Driver; GTP: Global Trigger Processor; LV1: Regional Trigger Processors; TPG: Trigger Primitive Generator; TTC: Timing, Trigger and Control; TTS: Trigger Throttle System; RCMS: Run Control and Monitor; RTP: Regional Trigger Processor; RU: Readout Unit.

TTS (N to 1): sTTS from the FEDs to the GTP, collected by an FMM tree (~700 roots); aTTS from DAQ nodes to the TTS via the service network (e.g. Ethernet, ~1000s of nodes); transitions in a few BXs.

TTC (1 to N): from the GTP to FED/FES, distributed by an optical tree (~1000 leaves); LV1 at 20 MHz repetition rate; Reset and B commands at 1 MHz repetition rate; control data at 1 Mbit/s.

FRL & DAQ networks (N x N): Data to Surface (D2S): FRL-FB, 1000 Myrinet links and switches, up to 2 Tb/s. Readout Builders (RB): RU-BU/FU, 3000-port GbEthernet switches, 1 Tb/s sustained.

LHC-DAQ: event description. Data attached to an event at each stage:
- FED-FRL-RU (counters from the TTCrx): Event No. (local), Orbit/BX (local), FED data fragment.
- Readout Unit (event fragments): Event No. (local), Orbit/BX (local), FED data fragments, plus the EVB token and BU destination received from the EVM.
- Filter Unit (built events): Event No. (local), Time (absolute), Run Number, Event Type, HLT info, full event data (from GT data).
- Storage Manager: Event No. (central), Time (absolute), Run Number, Event Type, HLT info, full event data.
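A minimal sketch of how these per-stage records might be represented; field names and widths are illustrative assumptions, not the actual CMS data formats:

```cpp
#include <cstdint>
#include <vector>

// What a FED/FRL/RU handles: a fragment identified only by local counters.
struct FedFragment {
    uint32_t localEventNumber;    // Event No. (local), from the TTCrx
    uint32_t orbit;               // Orbit (local)
    uint16_t bx;                  // Bunch crossing (local)
    std::vector<uint8_t> payload; // FED data fragment
};

// What the Storage Manager sees: a fully built, globally described event.
struct BuiltEvent {
    uint64_t centralEventNumber;  // Event No. (central)
    uint64_t absoluteTime;        // Time (absolute)
    uint32_t runNumber;
    uint32_t eventType;
    std::vector<uint8_t> hltInfo;           // HLT decision/summary
    std::vector<FedFragment> fullEventData; // all FED fragments of the event
};
```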

SLHC: event handling

TTC: the full event info (Event No. (central), Time (absolute), Run Number, Event Type, ...) is carried in each event fragment, distributed from the GTP-EVM via the TTC. Events are built on distributed clusters and computing services.

- FED-FRL interface: e.g. PCI. The FED-FRL system is able to handle high-level data communication protocols (e.g. TCP/IP).
- FRL-DAQ commercial interface: e.g. Ethernet (10 GbE or multi-GbE), to be used for data readout as well as FED configuration, control, local DAQ, etc.
- A new FRL design can be based on an embedded PC (e.g. ARM-like) or an advanced FPGA. The network interface can be used in place of the VME backplane to configure and control the new FED.
- Current FRL-RU systems can be used (with some limitations) for those detectors not upgrading their FED design.
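As a rough illustration of the kind of high-level protocol handling envisaged for such an FED-FRL system, a minimal sketch of pushing one fragment over TCP/IP with plain POSIX sockets; the header layout, host and port handling are assumptions, not part of the slide:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>
#include <vector>

// Hypothetical fragment header sent ahead of the FED payload.
struct FragmentHeader {
    uint64_t eventNumber;
    uint32_t fedId;
    uint32_t payloadBytes;
};

// Connect the FRL to its assigned readout/builder node.
int connectToBuilder(const char* ip, uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

// Push one event fragment: fixed-size header first, then the payload.
bool sendFragment(int fd, uint64_t eventNumber, uint32_t fedId,
                  const std::vector<uint8_t>& payload) {
    FragmentHeader hdr{eventNumber, fedId,
                       static_cast<uint32_t>(payload.size())};
    if (send(fd, &hdr, sizeof(hdr), 0) != static_cast<ssize_t>(sizeof(hdr)))
        return false;
    std::size_t sent = 0;
    while (sent < payload.size()) {
        ssize_t n = send(fd, payload.data() + sent, payload.size() - sent, 0);
        if (n <= 0) return false;
        sent += static_cast<std::size_t>(n);
    }
    return true;
}
```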

SLHC DAQ upgrade (architecture evolution)

Another step toward a uniform network architecture: all DAQ sub-systems (FED-FRL included) are interfaced to a single multi-Terabit/s network, and the same network technology (e.g. Ethernet) is used to read out and control all DAQ nodes (FED, FRL, RU, BU, FU, eMS, DQM, ...).

In addition to the Level-1 Accept and the other synchronous commands, the Trigger & Event Manager (TTC) transmits to the FEDs the complete event description: the event number, event type, orbit, BX and the event destination address, i.e. the processing system (CPU, cluster, Tier, ...) where the event has to be built and analyzed. The event fragment delivery, and therefore the event building, will be guaranteed by the network protocols and the (commercial) network's internal resources (buffers, multi-path, network processors, etc.). FED-FU push/pull protocols can be employed (a minimal routing sketch follows below). Real-time buffers of PBytes of temporary storage disk will cover a real-time interval of days, allowing the event-selection tasks to better exploit the available distributed processing power.
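A conceptual sketch of the destination-driven ("push") delivery this enables; the record layout, lookup table and names are illustrative assumptions, not CMS software:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch: the event-synchronous record delivered via TTC names the
// destination where the event must be built; each FED/FRL simply pushes
// its fragment there, with no per-event event-manager dialogue.
struct TtcEventRecord {
    uint64_t eventNumber;
    uint32_t destinationIndex;  // e.g. an IP index (main EVB, DQM, ...)
};

// Hypothetical table mapping destination indexes to network addresses.
using DestinationTable = std::map<uint32_t, std::string>;

// Push mode: resolve the destination carried by the TTC record.
std::string resolveDestination(const TtcEventRecord& rec,
                               const DestinationTable& table) {
    auto it = table.find(rec.destinationIndex);
    return it != table.end() ? it->second : std::string{};
}

// In pull mode the roles invert: the Filter Unit named in the record
// would request the fragments for rec.eventNumber from the sources.
```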

SLHC: trigger info distribution

1. Trigger-synchronous data (20 MHz, ~100 bits/LV1):
- Clock distribution & Level-1 Accept
- Fast commands (resets, orbit 0, re-sync, switch context, etc.)
- Fast parameters (e.g. trigger/readout type, mask, etc.)

2. Event-synchronous data packet (~1 Gb/s, a few thousand bits/LV1):
- Event Number: 64 bits (as before, local counters will be used to synchronize and check)
- Orbit/time and BX No.: 64 bits
- Run Number: 64 bits
- Event type: 256 bits
- Event destination(s): 256 bits (e.g. a list of IP indexes: main EVB + DQMs, ...)
- Event configuration: 256 bits (e.g. information for the FU event builder)
Each event fragment will contain the full event description; events can be built in push or pull mode. Items 1 and 2 will allow many TTC operations to be done independently per TTC partition (e.g. local re-sync, local reset, partial readout, etc.).

3. Asynchronous control information:
- Configuration parameters (delays, registers, contexts, etc.)
- Maintenance commands, auto-test

4. Additional features? A readout control signal collector (a la TTS), replacing the sTTS:
- Collect FED status (at LV1 rate, from >1000 sources?)
- Collect local information (e.g. statistics) on demand?
- Maintenance commands, auto-test, statistics, etc.
- Configuration parameters (delays, registers, contexts, etc.)
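A minimal sketch of how the event-synchronous packet above could be laid out if the listed field widths were taken literally; the struct name, field order and use of 64-bit words are assumptions:

```cpp
#include <array>
#include <cstdint>

// Literal layout of the event-synchronous TTC packet listed above.
// Purely illustrative; the real encoding is not specified on the slide.
struct EventSynchronousPacket {
    uint64_t eventNumber;                  // 64 bits
    uint64_t orbitTimeAndBx;               // 64 bits
    uint64_t runNumber;                    // 64 bits
    std::array<uint64_t, 4> eventType;     // 256 bits
    std::array<uint64_t, 4> destinations;  // 256 bits, e.g. list of IP indexes
    std::array<uint64_t, 4> eventConfig;   // 256 bits, e.g. FU event-builder info
};

// The listed fields alone amount to 3*64 + 3*256 = 960 bits per Level-1
// accept, within the "few thousand bits/LV1" budget quoted on the slide.
static_assert(sizeof(EventSynchronousPacket) == 120, "960 bits = 120 bytes");
```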

Summary

DAQ design:
- Architecture: will be upgraded, enhancing scalability and flexibility and exploiting to the maximum (via M+O) the commercial network and computing technologies.
- Software: configuring, controlling and monitoring a large set of heterogeneous and distributed computing systems will continue to be a major issue.

Hardware developments. Increased performance and additional functionality are required for the event data handling (use of standard protocols, selective actions, etc.):
- A new timing, trigger and event-info distribution system
- A general front-end DAQ interface (for the new detector readout electronics) handling high-level network protocols via commercial network cards

The best DAQ R&D programme for the next years is the implementation of the 2003 DAQ-TDR. Also required is permanent contact between SLHC developers and LHC-DAQ implementors, to understand and define the scope, requirements and specifications of the new systems.

DAQ data flow and computing model [diagram: Level-1 event rate, FRL direct network access, multi-Terabit/s event builder network, Filter Farms & Analysis, DQM & storage servers, HLT output].