Network Processor based RU: Implementation, Applicability, Summary
Readout Unit Review, 24 July 2001
Beat Jost, Niko Neufeld, CERN / EP

Beat Jost, Cern 2 Outline
- Board-Level Integration of the NP
- Applicability in LHCb
  - Data Acquisition
    - Example: Small-scale Lab Setup
  - Level-1 Trigger
- Hardware Design, Production and Cost
- Estimated Scale of the Systems
- Summary of Features of a Software-Driven RU
- Summaries
- Conclusions

Beat Jost, Cern 3 Board-Level Integration
- 9U x 400 mm single-width VME-like board (compatible with LHCb standard boards)
- 1 or 2 mezzanine cards, each containing
  - 1 Network Processor
  - All memory needed for the NP
  - Connections to the external world
    - PCI bus
    - DASL (switch bus)
    - Connections to the physical network layer
    - JTAG, power and clock
- PHY connectors
- Trigger-throttle output
- Power and clock generation
- LHCb standard ECS interface (CC-PC) with separate Ethernet connection
(Slide figure: board architecture)

Beat Jost, Cern 4 Mezzanine Cards
Benefits:
- Most complex parts are confined to the mezzanine card
- Far fewer I/O pins (~300, compared to >1000 on the NP)
- Modularity of the overall board
- Board layout deeply inspired by the design of the IBM reference kit
Characteristics:
- ~14-layer board
- Constraints concerning impedances and trace lengths have to be met

Beat Jost, Cern 5 Features of the NP-based Module
- The module outlined is completely generic, i.e. there is no a-priori bias towards an application.
- The software running on the NP determines the function performed.
- Architecturally it consists simply of 8 fully connected Gb Ethernet ports.
- Using Gb Ethernet implies
  - A bias towards Gb Ethernet in the readout network
  - Consequently, a Gb Ethernet-based S-Link interface is needed for the L1 electronics (being worked on in ATLAS)
  - No need for NICs in the Readout Unit (availability/form factor)
- Gb Ethernet makes it possible to attach a few PCs with GbE interfaces at any point in the data flow for debugging/testing.

Beat Jost, Cern 6 Applicability in LHCb
Applications in LHCb can be
- DAQ
  - Front-End Multiplexing (FEM)
  - Readout Unit
  - Building block for the switching network
  - Final event-building element before the SFC
- Level-1 Trigger
  - Readout Unit
  - Final event-building stage for the Level-1 trigger
  - SFC functionality for Level-1
  - Building block for the event-building network (see later)

Beat Jost, Cern 7 DAQ - FEM/RU Application
- FEM and RU applications are equivalent.
- The NP module allows any multiplexing N:M with N + M ≤ 8 (no de-multiplexing!), e.g.
  - N:1 data merging
  - Two times 3:1 if rates/data volumes increase or to save modules (subject to partitioning, of course)
- Performance is good enough for the envisaged trigger rates (≤ 100 kHz) and any multiplexing configuration (see Niko's presentation).
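To make the N:M port constraint and the data-merging function concrete, here is a minimal Python sketch. It is purely illustrative and is not the NP picocode: the fragment representation (event number, payload) and the function names are assumptions.

# Minimal sketch of the N:1 data-merging idea behind the FEM/RU application.
# Illustrative Python, not the NP picocode; the fragment format
# (event_number, payload) and the names are assumptions.

from collections import defaultdict

NUM_PORTS = 8  # ports on one NP-based module

def valid_config(n_in, n_out):
    """An N:M multiplexing configuration is allowed if N + M <= 8."""
    return n_in >= 1 and n_out >= 1 and n_in + n_out <= NUM_PORTS

def merge_fragments(input_streams):
    """Group fragments from all input ports by event number and emit one
    merged event once a fragment from every port has arrived."""
    pending = defaultdict(dict)          # event_number -> {port: payload}
    n_in = len(input_streams)
    merged = []
    for port, stream in enumerate(input_streams):
        for event_number, payload in stream:
            pending[event_number][port] = payload
            if len(pending[event_number]) == n_in:      # event complete
                fragments = pending.pop(event_number)
                merged.append((event_number,
                               b"".join(fragments[p] for p in range(n_in))))
    return merged

if __name__ == "__main__":
    assert valid_config(3, 1) and valid_config(4, 4) and not valid_config(7, 2)
    streams = [[(1, b"a0"), (2, b"a1")],     # port 0
               [(1, b"b0"), (2, b"b1")],     # port 1
               [(1, b"c0"), (2, b"c1")]]     # port 2  -> 3:1 merging
    for evt, data in merge_fragments(streams):
        print(evt, data)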

Beat Jost, Cern 8 DAQ - Event-Building Network
- The NP module is intrinsically an 8-port switch.
- A network of any size can be built from 8-port switching elements, e.g.
  - A brute-force Banyan topology, e.g. a 128x128 switching network built from 8-port modules
  - A more elaborate topology taking the special (~unidirectional) traffic pattern into account, e.g. a 112x128 port topology using 96 8-port modules
- Benefits:
  - Full control over, and knowledge of, the switching process (Jumbo frames)
  - Full control over flow control
  - Full monitoring capabilities (CC-PC/ECS)
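As a rough illustration of the scale of a brute-force multi-stage topology built from 8-port elements, the standard counting argument is sketched below in a few lines of Python; it is only an estimate, and the elaborated 112x128 topology quoted above is a traffic-pattern-specific design that this simple formula does not reproduce.

# Rough module-count estimate for a brute-force multi-stage (Banyan-like)
# network built from k-port switching elements; standard textbook counting,
# given only to illustrate the scale involved.

import math

def banyan_module_count(n_ports, k=8):
    stages = math.ceil(math.log(n_ports, k))   # switching stages needed
    per_stage = math.ceil(n_ports / k)         # k-port elements per stage
    return stages * per_stage

if __name__ == "__main__":
    # 128x128 network from 8-port NP modules: 3 stages of 16 modules each.
    print(banyan_module_count(128, 8))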

Beat Jost, Cern 9 Event-Building Network - Basic Structure
(Slide figure: network built from 8-port modules)

Beat Jost, Cern 10 DAQ - Final Event-Building Stage (I)
- Up to now the baseline is to use "smart NICs" inside the SFCs to do the final event-building.
  - Off-loads the SFC CPUs from handling individual fragments
  - No fundamental problem (performance is sufficient)
  - The question is future direction and availability:
    - The market is moving more towards ASICs implementing TCP/IP directly in hardware.
    - Freely programmable devices are geared more towards TCP/IP (small buffers).
- An NP-based module could be a replacement:
  - 4:4 multiplexer/data merger
  - Only a question of the software loaded; in fact the software written so far does not even know about the ports of the module.

Beat Jost, Cern 11 Final Event-Building Stage (II)
- Same generic hardware module
- ~Same software if implemented as a separate layer in the dataflow
- SFCs act 'only' as big buffers and provide elaborate load balancing among the CPUs of a sub-farm
(Slide figure: Readout Network -> NP-based event-builder -> SFCs with 'normal' Gb Ethernet NICs -> CPU (sub-)farm(s))
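The "elaborate load balancing among the CPUs of a sub-farm" mentioned above could, for example, be a least-loaded assignment. The Python sketch below shows the idea only; the policy, class and node names are illustrative assumptions, not the actual SFC software.

# One possible load-balancing policy an SFC could apply: send each complete
# event to the sub-farm node with the fewest events currently queued.
# Illustrative sketch; not the actual SFC implementation.

import heapq

class SubFarmBalancer:
    def __init__(self, node_names):
        # heap of (queued_events, node_name)
        self.load = [(0, name) for name in node_names]
        heapq.heapify(self.load)

    def assign(self, event_number):
        queued, name = heapq.heappop(self.load)   # least-loaded node
        heapq.heappush(self.load, (queued + 1, name))
        return name

    def done(self, name):
        # called when a node reports an event as processed
        self.load = [(q - 1 if n == name else q, n) for q, n in self.load]
        heapq.heapify(self.load)

if __name__ == "__main__":
    balancer = SubFarmBalancer(["node01", "node02", "node03"])
    for evt in range(6):
        print(evt, "->", balancer.assign(evt))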

Beat Jost, Cern 12 Example of small-scale Lab Setup
Centrally provided:
- Code running on the NP to do event-building
- Basic framework for filter nodes
- Basic tools for recording
- Configuration/control/monitoring through ECS
(Slide figure: subdetector L1 electronics boards feeding an NP-based RU, connected via GbE interfaces to standard PCs for filtering and recording)

Beat Jost, Cern 13 Level-1 Trigger Application (Proposal)
Basically exactly the same as for the DAQ:
- The problem is structurally the same, but the environment is different (1.1 MHz trigger rate and small fragments)
- Same basic architecture
- NP-RU module run in 2x3:1 mode
- NP-RU module for the final event-building (as in the DAQ), also implementing the SFC functionality (load balancing, buffering)
Performance is sufficient! (see Niko's presentation)

Beat Jost, Cern 14 Design and Production
- Design
  - In principle a 'reference design' should be available from IBM.
  - Based on this, the mezzanine cards could be designed.
  - The mother-board would be a separate effort.
  - Design effort will need to be found
    - inside CERN (nominally "cheap")
    - commercially (less cheap)
  - Before prototypes are made, a design review with IBM engineers and extensive simulation will be performed.
- Production
  - Mass production clearly commercial (external to CERN)
  - Basic tests (visual inspection, short/connection tests) by the manufacturer
  - Functional testing by the manufacturer with tools provided by CERN (LHCb)
  - Acceptance tests by LHCb

Beat Jost, Cern 15 Cost (very much estimated)
- Mezzanine board
  - Tentative offer of 3 k$/card (100 cards), probably lower for more cards -> 6 k$/RU
  - Cost basically driven by the cost of the NP (goes down as the NP price goes down)
    - ~1400 $ today, single quantities
    - ~1000 $ in 2002 for larger quantities
    - ~500 $ in 2002 for still larger quantities
    - 2003: ????
- Carrier board
  - CC-PC: ~150 $
  - Power/clock generation: ??? (but cannot be very expensive?)
  - Network PHYs (GbE optical small form-factor): 8 x 90 $
  - Overall: ~2000 $?
- Total: <~8000 $ (100 modules, very much depending on volume)
- ATLAS has shown some interest in using the NP4GS3 and also in our board architecture, in particular the mezzanine card (volume!)
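For convenience, the per-RU arithmetic behind the totals quoted here (and in Summary IV) can be written out explicitly; all figures are taken from the slides, and the unpriced carrier items (power/clock etc.) are simply left out.

# Arithmetic check of the per-RU cost figures quoted on this slide
# (all numbers in USD, 100-module quantities, 2001 prices).

MEZZANINE = 3000          # tentative offer per mezzanine card
CC_PC     = 150           # LHCb standard ECS interface
PHYS      = 8 * 90        # GbE optical small-form-factor PHYs
CARRIER   = 2000          # rough estimate for the fully equipped carrier card

two_mezzanine_ru = 2 * MEZZANINE + CARRIER    # ~8000 $/RU
one_mezzanine_ru = 1 * MEZZANINE + CARRIER    # ~5000 $/RU (cf. Summary IV)

print("carrier, priced parts:", CC_PC + PHYS, "$ (power/clock etc. not priced)")
print("RU with 2 mezzanines :", two_mezzanine_ru, "$")
print("RU with 1 mezzanine  :", one_mezzanine_ru, "$")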

Beat Jost, Cern 16 Number of NP-based Modules
Notes:
- For FEM and RU purposes it is more cost-effective to use the NP-based RU module in a 3:1 multiplexing mode. This reduces the number of physical boards to roughly one third.
- For Level-1 the number is determined by the speed of the output link. A reduction of the fragment header can lead to a substantial saving. Details are still to be studied.
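To illustrate why the Level-1 module count is driven by the output-link speed, and why a smaller fragment header helps, here is a back-of-the-envelope occupancy estimate for one 3:1 merging group. The 1.1 MHz rate comes from the Level-1 slide; the payload and header sizes are hypothetical placeholders, not LHCb numbers.

# Back-of-the-envelope output-link occupancy for one 3:1 Level-1 merging
# group. The 1.1 MHz rate is from the Level-1 slide; the payload and
# header sizes below are hypothetical placeholders.

TRIGGER_RATE_HZ  = 1.1e6      # Level-1 trigger rate
PAYLOAD_BYTES    = 16         # hypothetical payload per input fragment
INPUTS_PER_GROUP = 3          # 3:1 merging mode
GBE_BIT_RATE     = 1e9        # Gigabit Ethernet line rate

def occupancy(header_bytes):
    bytes_per_event = INPUTS_PER_GROUP * (PAYLOAD_BYTES + header_bytes)
    return TRIGGER_RATE_HZ * 8 * bytes_per_event / GBE_BIT_RATE

for header in (16, 8, 4):
    print(f"header {header:2d} B -> output link occupancy {occupancy(header):.0%}")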

Beat Jost, Cern 17 Summary of Features of a Software-Driven RU
- The main positive feature is the flexibility offered in new situations:
  - Changes in running conditions
  - Traffic-shaping strategies
  - Changes in destination-assignment strategies
  - Etc.
- But also elaborate possibilities for diagnostics and debugging:
  - Debug code can be added to catch intermittent problems
  - Debug information can be sent via the embedded PPC to the ECS
  - The code, or malfunctioning partners, can be debugged in situ

Beat Jost, Cern 18 Summary (I) - General
- The NP-based RU fulfils the requirements on speed and functionality.
- There is not yet a detailed design of the final hardware; however, a functionally equivalent reference kit from IBM has been used to prove the functionality and performance.

Beat Jost, Cern 19 Summary (II) - Features
- Simulations show that the performance is largely sufficient for all applications.
- Measurements confirm the accuracy of the simulation results.
- Supported features:
  - Any network-based (Ethernet) readout protocol is supported (just software!)
  - For all practical purposes, wire-speed event-building rates can be achieved.
  - 64 MB of output buffer are available to cope with network congestion.
  - Error detection and reporting, flow control:
    - 32-bit CRC per frame
    - Hardware support for a CRC over any area of a frame (e.g. over the transport header), software defined
    - Embedded PPC + CC-PC allow efficient monitoring and exception handling/recovery/diagnostics
    - Breakpoints and single stepping via the CC-PC for remote in-situ debugging of problems
  - Standard PCs can be attached at any point in the dataflow for diagnostic purposes.
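On the NP the "CRC over any area of a frame" is done by hardware assist; the minimal Python sketch below only illustrates what a software-defined CRC region means, using the standard CRC-32 from zlib. The 16-byte "transport header" region is an illustrative assumption, not the LHCb frame layout.

# Illustration of "CRC over any area of a frame, software defined":
# standard CRC-32 (zlib) computed over a chosen byte range of a dummy frame.

import zlib

def region_crc32(frame: bytes, start: int, length: int) -> int:
    """CRC-32 over frame[start:start+length]."""
    return zlib.crc32(frame[start:start + length]) & 0xFFFFFFFF

if __name__ == "__main__":
    frame = bytes(range(64))                       # dummy 64-byte frame
    whole_frame_crc = region_crc32(frame, 0, len(frame))
    header_only_crc = region_crc32(frame, 0, 16)   # e.g. transport header only
    print(f"whole frame : 0x{whole_frame_crc:08x}")
    print(f"header only : 0x{header_only_crc:08x}")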

Beat Jost, Cern 20 Summary (III) - Planning
- Potential future work programme:
  - Hardware: it's-a-depends-a... (external design: ~300 k$ for design + production tools)
  - ~1 man-year of effort for infrastructure software on the CC-PC etc. (test/diagnostic software, configuration, monitoring, etc.)
  - The online team will be responsible for deployment, commissioning and operation, including the picocode on the NP.
- Planning for module production, testing and commissioning (depends on the LHC schedule)

Beat Jost, Cern 21 Summary (IV) - Environment and Cost
- Board: aim for a single-width 9U x 400 mm VME board; power requirement ~60 W, forced cooling required.
- Production cost:
  - Strongly dependent on component cost (later purchase means lower price)
  - In today's prices (100 modules):
    - Mezzanine card: 3000 $/card (NB: the NP enters with 1400 $)
    - Carrier card: ~2000 $ (fully equipped with PHYs, perhaps pluggable?)
    - Total: ~8000 $/RU (~5000 $ if only one mezzanine card is mounted)

Beat Jost, Cern 22 Conclusion
- NPs are a very promising technology, even for our applications.
- The performance is sufficient for all applications, and the software flexibility allows for new applications, e.g. implementing the readout network and the final event-building stage.
- The cost is currently high, but not prohibitive, and is expected to drop significantly as new generations of NPs (supporting 10 Gb Ethernet) enter the scene.
- Strong points are the (software) flexibility, the extensive support for diagnostics and the wide range of possible applications
  => One and only one module type for all applications in LHCb

Beat Jost, Cern 23
(Slide figure: LHCb Trigger/DAQ architecture. The LHCb detector (VELO, tracking, ECAL, HCAL, MUON, RICH) feeds the Level-0/Level-1 front-end electronics; the Level-0 trigger reduces 40 MHz to 1 MHz with a fixed latency of 4.0 us, the Level-1 trigger works with variable latency < 1 ms. Data (40 TB/s at the detector, 1 TB/s on the front-end links) flow through Front-End Multiplexers (FEM) and Readout Units (RU) into the Readout Network (6-15 GB/s), then to Sub-Farm Controllers (SFC) and the Level-2/3 event-filter CPU farm (variable latency, L2 ~10 ms, L3 ~200 ms), and finally to storage at 50 MB/s. Timing & Fast Control, the throttle and a controls & monitoring LAN complete the picture.)

Beat Jost, Cern 24

Beat Jost, Cern 25
(Slide figure: Readout Network -> NP-based event-builder -> SFCs with 'normal' Gb Ethernet NICs -> CPU (sub-)farm(s))

Beat Jost, Cern 26
(Slide figure: event-builder input/output block diagrams for the RU/FEM application and for the EB application)