Department of Particle Physics & Astrophysics ATLAS TDAQ upgrade proposal TDAQ week at CERN Michael Huffer, November 19, 2008.


Department of Particle Physics & Astrophysics 2 Outline
DAQ support for next generation HEP experiments…
 – "survey the requirements and capture their commonality"
One size does not fit all…
 – generic building blocks: the (Reconfigurable) Cluster Element (RCE), the Cluster Interconnect (CI)
 – industry standard packaging: ATCA
Packaged solutions
 – the RCE board
 – the CI board
Applicability to ATLAS (the proposal):
 – motivation
 – scope
 – details: ROM (Read-Out-Module), CIM (Cluster-Interconnect-Module), ROC (Read-Out-Crate)
 – physical footprint, scaling & performance
Summary

Department of Particle Physics & Astrophysics 3 Three building block concepts
Computational elements
 – must be low-cost ($$$, footprint, power)
 – must support a variety of computational models
 – must have I/O that is both flexible and performant
Mechanism to connect these elements together
 – must be low-cost
 – must provide low-latency/high-bandwidth I/O
 – must be based on a commodity (industry) protocol
 – must support a variety of interconnect topologies: hierarchical, peer-to-peer, fan-in & fan-out
Packaging solution for both element & interconnect
 – must provide High Availability
 – must allow scaling
 – must support different physical I/O interfaces
 – preferably based on a commercial standard
The corresponding answers:
 – the Reconfigurable Cluster Element (RCE): employs System-On-Chip (SOC) technology
 – the Cluster Interconnect (CI): based on 10-GE (10 Gigabit Ethernet) switching
 – ATCA (Advanced Telecommunications Computing Architecture): crate-based, serial backplane

Department of Particle Physics & Astrophysics 4 (Reconfigurable) Cluster Element (RCE)
[Block diagram: core with MGTs, DSP tiles & combinatoric logic resources; 450 MHz PPC processor; RLD-II memory subsystem; 128 MByte flash for configuration data & boot options; Data Exchange Interface (DEI); reset & bootstrap options]
Bundled software:
 – bootstrap loader
 – Open Source kernel (RTEMS): POSIX-compliant interfaces, standard IP network stack
 – exception handling support
 – GNU cross-development environment (C & C++)
 – remote (network) GDB debugger
 – network console
Class libraries (C++) provide:
 – DEI support
 – configuration interface

Department of Particle Physics & Astrophysics 5 Resources
Multi-Gigabit Transceivers (MGTs)
 – up to 24 channels, each providing: SER/DES, input/output buffering, clock recovery, 8b/10b encoder/decoder, 64b/66b encoder/decoder
 – each channel can operate at up to 6.5 Gb/s
 – channels may be bonded together for greater aggregate speed
Combinatoric logic
 – gates
 – flip-flops (block RAM)
 – I/O pins
DSP support
 – contains up to 192 Multiply-Accumulate (MAC) units

Department of Particle Physics & Astrophysics 6 Derived configuration - Cluster Element (CE)
[Block diagram: the core plus combinatoric logic, with one bank of MGTs driving a PGP interface (1.0/2/5/10.0 Gb/s) and another driving an Ethernet MAC (E0/E1)]

Department of Particle Physics & Astrophysics 7 The Cluster Interconnect (CI)
Based on two Fulcrum FM224s:
 – 24-port 10-GE switch
 – an ASIC (packaged in a 1433-ball BGA)
 – XAUI interfaces (supporting multiple speeds, including 100Base-T, 1-GE & 2.5 Gb/s)
 – less than 24 watts at full capacity
 – cut-through architecture (packet ingress-to-egress < 200 ns)
 – full Layer-2 functionality (VLANs, multiple spanning trees, etc.)
 – can operate managed or unmanaged
[Block diagram: management bus, RCE, 10-GE L2 switch, port quadrants Q0-Q3]

Department of Particle Physics & Astrophysics 8 A cluster of 12 elements
[Block diagram: twelve elements attached to the Cluster Interconnect's switching fabric, receiving from front-end systems and sending to back-end systems]

Department of Particle Physics & Astrophysics 9 Why ATCA as a packaging standard?
An emerging telecom standard. Its attractive features:
 – backplane & packaging available as a commercial solution
 – generous form factor (8U x 1.2" pitch)
 – hot-swap capability
 – well-defined environmental monitoring & control
 – emphasis on High Availability
 – external power input is low-voltage DC, allowing rack-level aggregation of power
Its very attractive features:
 – the concept of a Rear Transition Module (RTM): allows all cabling to be on the rear (module removal without interruption of the cable plant) and separates the data interface from the mechanism used to process that data
 – high-speed serial backplane: protocol agnostic, with provision for different interconnect topologies

Department of Particle Physics & Astrophysics 10 RCE board + RTM (Block diagram)
[Block diagram: eight RCE slices (0-7) with flash memory and MFDs on the payload board; fiber-optic transceivers on the RTM; P2 (payload) and P3 (RTM) connectors; E0/E1 base & fabric Ethernet]

Department of Particle Physics & Astrophysics 11 RCE board + RTM
[Photograph: media carrier with flash, media slice controller, RCE, Zone 1 (power), Zone 2, Zone 3, transceivers, RTM]

Department of Particle Physics & Astrophysics 12 Cluster Interconnect board + RTM (Block diagram)
[Block diagram: CI quadrants Q0-Q3 with MFD; 1-GE and 10-GE ports; XFP optics on the RTM; P2 (payload) and P3 (RTM) connectors carrying the base & fabric interfaces]

Department of Particle Physics & Astrophysics 13 Cluster Interconnect board + RTM
[Photograph: CI board with 10-GE switch, RCE, 1-GE Ethernet, XFP cages, Zone 1 and Zone 3 connectors, RTM]

Department of Particle Physics & Astrophysics 14 Typical (5 slot) ATCA crate
[Photograph: front and back views showing fans, power supplies, shelf manager, the RCE board and CI board, and their RTMs]

Department of Particle Physics & Astrophysics 15 Motivation
Start with the premise that ROD replacement is inevitable:
 – detector volume will scale upwards with luminosity
 – modularity of the Front-End Electronics will change
Replacement is an opportunity to address additional concerns:
 – longevity of the existing RODs (long-term maintenance & support)
 – many different ROD flavors, making it difficult to capture commonality & reduce duplicated effort
 – ROS ingress/egress imbalance: capable of almost 2 Gbytes/s input, but less than 400/800 Mbytes/s output
 – scalability: each added ROD requires (on average) adding one ROS/PC, since one ROD (on average) drives two ROLs and one ROS/PC can process (roughly) 2 ROLs worth of data
 – physical separation adds mechanical & operational constraints

Department of Particle Physics & Astrophysics 16 Scope & the Integrated Read-Out System (IROS)
The IROS comprises:
 – ROD crates: RODs, crate controller, L1 distribution & control, "back-plane" boards
 – ROS/PC racks: ROS/PCs, ROBins
 – ROLs (between ROS & ROD)
 – the "wires" connecting these components
The proposal calls for the replacement of the IROS:
 – the intrinsic modularity of the scheme allows replacing a subset
 – upstream (Detector Front-End Electronics) & downstream (Level-2 (L2) trigger, Event Builder (EB)) systems would remain the same
The proposal is constructed out of three elements:
 – ROM (Read-Out-Module): combines the functionality of ROD + ROS/PC
 – CIM (Cluster-Interconnect-Module)
 – ROC (Read-Out-Crate)

Department of Particle Physics & Astrophysics 17 Read-Out-Module (ROM)
[Block diagram: Cluster Elements on the ROM, fed from the detector FEE through the Rear Transition Module (P3); an on-board 10-GE switch connects via P2 onto the ROC backplane toward the CIM, with L1 fanout & switch management. Link rates shown: x4 at 3.2 Gb/s, 10 Gb/s, x2 at 10 Gb/s, x2 at 2.5 Gb/s]

Department of Particle Physics & Astrophysics 18 Read-Out-Crate (ROC)
[Block diagram: twelve ROMs and two CIMs on the P3 backplane, plus shelf management; each CIM carries a 10-GE switch, switch management and L1 fanout, with Rear Transition Modules connecting from L1, to monitoring & control, and to L2 & Event Building. Link rates shown: x12 at 10 Gb/s, x2 at 10 Gb/s, x2 at 2.5 Gb/s]

Department of Particle Physics & Astrophysics 19 IROS
[Block diagram: detector front-ends in UX15 (Calorimeter: LAr, TileCal; Inner Tracker: Pixel, SCT, TRT; Muon: MDT, CSC; L1: Cal, RPC, TGC, MUCTPI, CTP) feeding the IROS in USA15 over x384 links at 3.2 Gb/s; the IROS feeds the switching fabric, L2 farm and Event Builder farm in SDX1 over x24 links at 10 Gb/s; L1 trigger]

Department of Particle Physics & Astrophysics 20 IROS plant footprint
[Table: per detector system (Calorimeter: LAr, TileCal; Inner Detector: Pixel, SCT, TRT; Muon: MDT, CSC; L1 Calorimeter, Muon RPC, Muon TGC, MUCTPI, CTP), the current plant (total RODs, total ROLs, ROD crates, ROS ROBins & PCs) versus the proposed plant (total ROMs, total crates, total CIMs), with current and proposed totals; the numeric values did not survive transcription]

Department of Particle Physics & Astrophysics 21 Scaling & performance
[Table: per luminosity upgrade phase, input event rates (kHz) for IROS, L2 & EB; data sizes (Mbytes) for event & ROI; output rates (Gbytes/s) for L2 & EB; the numeric values did not survive transcription]
The proposal scales linearly with the number of ROMs:
 – 2 Gbytes/s/ROM for the L2 network
 – 0.5 Gbytes/s/ROM for the Event Building network
For the plug-replacement example this implies an output capacity of:
 – 270 Gbytes/s for L2
 – 118 Gbytes/s for Event Building
As a comparison, the current system has a total output capacity of 116 Gbytes/s (8 NIC channels).
Performance requirements as a function of luminosity upgrade phase:
 – numbers derived from Andy's upgrade talk (TDAQ week, May 2008)
 – both ROI size & number change as a function of luminosity

Department of Particle Physics & Astrophysics 22 Summary
SLAC is positioning itself for a new generation of DAQ:
 – the strategy is based on modular building blocks: inexpensive computational elements (the RCE), an interconnect mechanism (the CI), and industry standard packaging (ATCA)
 – the architecture is now relatively mature: both demo boards (& corresponding RTMs) are functional, RTEMS is ported & operating, and the network stack is fully tested and functional
 – performance and scaling meet expectations
 – costs have been established (engineering scales): ~$1K/RCE (goal is less than $750), ~$1K/CI (goal is less than $750)
This is an outside view looking in (presumptuous, but sometimes useful):
 – initiate discussion on the bigger picture
 – separate the proposal's abstraction from its implementation: a common-substrate ROD; integration of ROD + ROL + ROBin functionality
 – the inherent modularity of this scheme allows piecemeal (adiabatic) replacement: it can co-exist with the current system
 – leverage recent industry innovation: System-On-Chip (SOC), high-speed serial transmission, low-cost/small-footprint/high-speed switching (10-GE), packaging standardization (serial backplanes and RTMs)