Department of Particle Physics & Astrophysics

A new proposal for the construction of high-speed, massively parallel, ATCA-based Data Acquisition Systems
Michael Huffer, Stanford Linear Accelerator Center
CERN/ACES Workshop, March 3-4, 2009

Representing: Mark Freytag, Gunther Haller, Ryan Herbst, Chris O'Grady, Amedeo Perazzo, Leonid Sapozhnikov, Eric Siskind, Matt Weaver

Outline

DAQ/trigger "technology" for next generation HEP experiments…
– result of an ongoing SLAC R&D project: "survey the requirements and capture their commonality"
– intended to leverage recent industry innovation
– technology: "one size does not fit all" (ubiquitous building blocks)
  - the (Reconfigurable) Cluster Element (RCE)
  - the Cluster Interconnect (CI)
  - industry standard packaging (ATCA)
– technology evaluation & demonstration "hardware": the RCE & CI boards

Use this technology to explore alternate (ATLAS) TDAQ architectures. For example (one such case study)…
– common (subsystem independent) ROD platform
– combines the abstract functionality of both ROD + ROS
– provides both horizontal & vertical connectivity (peer-to-peer)
– the hardware architecture for such a scheme has three elements:
  - ROM (Read-Out-Module)
  - CIM (Cluster-Interconnect-Module)
  - ROC (Read-Out-Crate)

Three building block concepts

Computational elements:
– must be low-cost ($$$, footprint, power)
– must support a variety of computational models
– must have I/O that is both flexible and performant
→ the Reconfigurable Cluster Element (RCE), based on System-On-Chip (SOC) technology (Virtex-4 & -5)

Mechanism to connect these elements together:
– must be low-cost
– must provide low-latency/high-bandwidth I/O
– must be based on a commodity (industry) protocol
– must support a variety of interconnect topologies: hierarchical, peer-to-peer, fan-in & fan-out
→ the Cluster Interconnect (CI), based on 10-GE Ethernet switching

Packaging solution for both element & interconnect:
– must provide High Availability
– must allow scaling
– must support different physical I/O interfaces
– preferably based on a commercial standard
→ ATCA (Advanced Telecommunications Computing Architecture): crate-based, serial backplane

(Reconfigurable) Cluster Element (RCE)

[Block diagram: a 450 MHz PPC-405 processor and cross-bar at the core, connected to the memory subsystem (128 MByte configuration flash, 512 MByte RLDRAM-II), core resources (DSP tiles, combinatoric logic), DX ports, MGTs, and reset & bootstrap (boot) options]

Software & development

Cross-development…
– GNU cross-development environment (C & C++)
– remote (network) GDB debugger
– network console

Operating system support…
– bootstrap loader
– Open Source Real-Time kernel (RTEMS): POSIX-compliant interfaces, standard IP network stack
– exception handling support

Object-Oriented emphasis:
– class libraries (C++): DEI support, configuration interface

Resources

Multi-Gigabit Transceivers (MGTs):
– up to 12 channels, each providing SER/DES, input/output buffering, clock recovery, and 8b/10b & 64b/66b encoding/decoding
– each channel can operate at up to 6.5 Gb/s
– channels may be bonded together for greater aggregate speed

Combinatoric logic: gates, flip-flops (block RAM), I/O pins

DSP support:
– contains up to 192 Multiply-Accumulate (MAC) units
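As a back-of-envelope check on the MGT numbers above (the 6.5 Gb/s line rate and channel bonding come from the slide; the coding efficiencies are standard properties of the 8b/10b and 64b/66b line codes; the function name is mine):

```python
# Payload bandwidth left after line-coding overhead, for bonded MGT channels.
def payload_gbps(line_rate_gbps, data_bits, coded_bits, channels=1):
    """Usable payload rate (Gb/s) for `channels` bonded lanes."""
    return line_rate_gbps * data_bits / coded_bits * channels

# One channel at the 6.5 Gb/s maximum with 8b/10b coding:
single = payload_gbps(6.5, 8, 10)                 # 5.2 Gb/s of payload
# Four channels bonded (a XAUI-style 4-lane link):
bonded = payload_gbps(6.5, 8, 10, channels=4)     # 20.8 Gb/s aggregate
# The same lane using the lower-overhead 64b/66b code:
low_overhead = payload_gbps(6.5, 64, 66)          # ~6.3 Gb/s
```

This makes concrete why the slide lists both encoders: 8b/10b costs 20% of the line rate, 64b/66b only about 3%.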

DX "Plug-Ins"

[Block diagram: the same RCE core (450 MHz PPC-405 processor, cross-bar, 128 MByte configuration flash, 512 MByte RLDRAM-II, boot options), with its DX ports populated by "plug-ins": a PGP plug-in (3.125 Gb/s), an Ethernet MAC plug-in, and XAUI (4 x 3.125 Gb/s)]

The Cluster Interconnect (CI)

Based on two Fulcrum FM224s:
– a 24-port 10-GE switch ASIC (packaged in a 1433-ball BGA)
– 10-GE XAUI interface; however, it supports multiple speeds: 100Base-T, 1-GE & 2.5 Gb/s
– less than 24 watts at full capacity
– cut-through architecture (packet ingress/egress < 200 ns)
– full Layer-2 functionality (VLANs, multiple spanning trees, etc.)
– configuration can be managed or unmanaged

[Block diagram: a management-bus RCE and two 10-GE L2 switches (even side & odd side), serving quadrants Q0/Q1 and Q2/Q3]

A cluster of 12 elements

[Block diagram: the Cluster Interconnect's switching fabric (quadrants Q0-Q3) connecting the Cluster Elements, with links in from Front-End systems and out to back-end systems]

Two 12-node (element) networks
Each element:
– manages 4 inputs
– connects to 2 networks
Each CI network:
– switches 12 elements
– has symmetric output
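The cluster topology above reduces to simple counting (all inputs are figures from the slide; the variable names are mine):

```python
# Counting the links and channels in the 12-element cluster described above.
elements = 12            # cluster elements per CI
inputs_per_element = 4   # front-end inputs each element manages
networks = 2             # each element attaches to both 12-node networks

front_end_channels = elements * inputs_per_element   # 48 channels served
ports_per_network = elements                         # 12 switch ports each
element_network_links = elements * networks          # 24 element uplinks
```

The 48-channel figure is the same one that reappears later in the deck for the "tight"-coupling ROM.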

Why ATCA as a packaging standard?

An emerging telecom standard…
– see the previous talk given at ATLAS Upgrade week

Its attractive features:
– backplane & packaging available as a commercial solution
– (relatively) generous form factor: 8U x 1.2" pitch
– emphasis on High Availability: lots of redundancy, hot-swap capability, well-defined environmental monitoring & control (IPM)
– pervasive industry use
– external power input is low-voltage DC, which allows for rack aggregation of power

Its very attractive features (as a substrate for the RCE & CI):
– the concept of a Rear Transition Module (RTM): allows all cabling on the rear (module removal without interruption of the cable plant), and separates the data interface from the mechanism used to process that data
– high-speed serial backplane: protocol agnostic, with provision for different interconnect topologies

Typical (5-slot) shelf

[Photo: front and back views of a 5-slot shelf, showing the front boards, RTMs, shelf manager, power supplies, and fans]

RCE board + RTM (block diagram)

[Block diagram: eight RCE slices (0-7) with flash memory on the payload board; MFDs route the slices through P3 to fiber-optic transceivers on the RTM, while P2 carries the base & fabric Ethernet channels (E0, E1)]

RCE board + RTM

[Photo: the payload board with its media carriers (with flash), media slice controllers, and RCEs, plus the Zone 1 (power), Zone 2, and Zone 3 connectors; transceivers on the RTM]

Cluster Interconnect board + RTM (block diagram)

[Block diagram: the CI quadrants Q0-Q3 on the payload board, with 1-GE (base) and 10-GE (fabric) channels routed through P2 and P3 to XFP optics on the RTM; an MFD provides management]

Cluster Interconnect board + RTM

[Photo: the payload board with the 10-GE switch, management RCE, and 1-G Ethernet, plus the Zone 1 and Zone 3 connectors; XFP optics on the RTM]

"Tight" coupling: (48-channel) ROM (Read-Out-Module)

[Block diagram: 16 Cluster Elements, each carrying a GBT "plug-in" and an Ethernet MAC "plug-in", receive data from the detector FEE through SNAP-12 receivers/transmitters (x12) on the Rear Transition Module; a 10-GE switch, switch management, and TTC fanout connect the module through P2/P3 to the ROC backplane and the CIM over multiple 10-GE links]

"Loose" coupling: (24-channel) ROM

[Block diagram: as in the tight-coupling ROM, 16 Cluster Elements sit behind a 10-GE switch, switch management, and TTC fanout connected through P2/P3 to the ROC backplane and the CIM; here elements are paired through a private-interconnect "plug-in"]

"Tight" versus "Loose" coupling

Commonalities:
– two physically disjoint networks (1 for the subsystem, 1 for TDAQ)
– board output is 4 GBytes/sec (2 for the subsystem, 2 for TDAQ); (at some cost) it could be doubled with a 10 Gb/s backplane
– same GBT & Ethernet MAC "plug-ins"
– the ROD function manages 3 channels/element

Differences:
– "Tight": one element shares both ROD & ROS functionality; coupling is implemented through a software interface; one element connects to both (subsystem & TDAQ) networks
– "Loose": one element for ROD functions & one element for ROS functions; coupling is implemented through a hardware interface; one element connects to one (subsystem or TDAQ) network

While attractive, "loose" costs twice as much as "tight" in $$$, footprint, and power
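The channel counts in the two slide titles (48 vs 24) follow directly from the parameters above; a quick sketch of the arithmetic (figures from the slides, names mine):

```python
# Why the tight-coupling ROM serves 48 channels and the loose one only 24.
elements_per_board = 16        # Cluster Elements on either ROM variant
channels_per_rod_element = 3   # ROD function manages 3 channels/element

# "Tight": every element performs both ROD and ROS functions.
tight_channels = elements_per_board * channels_per_rod_element   # 48

# "Loose": elements are split into ROD/ROS pairs, so only half of
# them face the detector -- hence half the channels per board.
loose_rod_elements = elements_per_board // 2                     # 8
loose_channels = loose_rod_elements * channels_per_rod_element   # 24
```

This is the quantitative content of "loose costs twice as much as tight": the same board resources serve half the channels.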

Read-Out-Crate (ROC)

[Block diagram: the crate's ROMs and two CIMs on the P3 backplane; each CIM's 10-GE switch, switch management, and L1 fanout connect through its Rear Transition Module to L1 (in), to L2 & Event Building, and to monitoring & control, over multiple 10-GE links; the shelf management is also shown]

Summary

SLAC is positioning itself for a new generation of DAQ…
– the strategy is based on the idea of modular building blocks: inexpensive computational elements (the RCE), an interconnect mechanism (the CI), and industry standard packaging (ATCA)
– the architecture is now relatively mature: both demo boards (& corresponding RTMs) are functional, RTEMS is ported & operating, and the network stack is fully tested and functional
– performance and scaling meet expectations
– costs have been established (engineering scales): ~$1K/RCE (goal is less than $750) and ~$1K/CI (goal is less than $750)
– documentation is a "work-in-progress"
– public release is pending Stanford University "licensing issues"

This technology strongly leverages recent industry innovation, including:
– System-On-Chip (SOC)
– high-speed serial transmission
– low-cost, small-footprint, high-speed switching (10-GE)
– packaging standardization (serial backplanes and RTMs)

The experience gained with these innovations will itself be valuable. This technology offers a ready-today vehicle to explore both alternate architectures & different performance regimes.

Thanks for the many valuable discussions:
– Benedetto Gorini, Andreas Kugel, Louis Tremblet, Jos Vermeulen, Mimmo della Volpe (ROS)
– Fred Wickens (RAL)
– Bob Blair, Jinlong Zhang (Argonne)
– Rainer Bartoldus & Su Dong (SLAC)