Department of Particle Physics & Astrophysics CSC ROD Replacement Conceptual Design Review New ROD Complex (NRC) Overview & Physical Design Michael Huffer,


CSC ROD Replacement Conceptual Design Review
New ROD Complex (NRC) Overview & Physical Design
Michael Huffer, SLAC National Accelerator Laboratory
CERN, October 8, 2012 (V3)
Representing: Rainer Bartoldus (SLAC), Richard Claus (SLAC), Anthony DiFranzo (UCI), Raul Murillo Garcia (UCI), Nicoletta Garelli (SLAC), Ryan T. Herbst (SLAC), Andrew J. Lankford (UCI), Andrew Nelson (UCI), James Russell (SLAC), Michael Schernau (UCI), Su Dong (SLAC)

Slide 2: Outline
The NRC (New ROD Complex) overview
– The NRC as a physical object & its interfaces
The NRC's ATCA (Advanced Telecommunications Computing Architecture) shelf
– Choice + power + management
The NRC's ATCA Front-Board (the COB, Cluster-On-Board)
– The COB's RCEs (Reconfigurable-Cluster-Elements)
– Concepts and implementation (the mezzanine board)
The RCE's Protocol-Plug-Ins
The RCE's CE (Cluster-Element)
The CE's software services
The NRC's ATCA RTMs (Rear-Transition Modules)
– One RTM for the on-detector electronics (the CSC RTM)
– One RTM for the ROS (the SFP RTM)
Development & installation
Long-term maintenance
Summary

Slide 3: The NRC (physical) block diagram & interfaces
ATCA shelf:
– Full-mesh fabric (10GE)
– Dual-star base (TTC & busy)
– 5 Front-Boards (COBs), differentiated only by firmware/software:
  – 4 FEX boards (receive input & perform feature extraction)
  – 1 Formatter board (formats events, sends them to the ROS)
– 5 RTMs:
  – 4 CSC (8 receive + 4 transmit SNAP-12)
  – 1 SFP (16 transceivers)
Shelf power supply:
– +48 VDC
Control Processor:
– A commodity, rack-mounted PC
– Hosts the TDAQ interfaces

Slide 4: NRC ATCA shelf
The boards used by the NRC are PICMG 3.0 compliant…
– …and will (co)operate in any ATCA shelf
We are mostly, if not entirely, agnostic on shelf choice, including:
– Orientation (horizontal or vertical)
– Airflow direction (up/down, front/back, or left/right)
– Power supplied either internally or externally
Minimum shelf requirements:
– Five (5) or more slots
– Power and cooling for up to 1.5 kW
– Full-mesh backplane
– Access (through Ethernet) to its Shelf Manager
In all other shelf requirements we will be guided by ATLAS.
The baseline assumes (but does not require) external power:
– We believe that maps best to the current DCS model; however…
…in terms of shelf monitoring/control we will be guided by DCS.

Slide 5: The COB (Cluster-On-Board)
The COB is an ATCA Front-Board which is both carrier and interconnect:
– Bays hold mezzanine boards which contain RCEs
– Two types of bays (Data Transport Module / Data Processing Module):
  – One DTM, containing the IPM Controller and one RCE
    – The IPMC interacts with the Shelf Manager (purpose-built IPMC)
    – The DTM's RCE manages the two interconnects described below
  – Four DPMs, each containing 2 RCEs, each managing data from an RTM
    – Those RCEs provide the FEX and Formatter functions
– Interconnects enable communication between RCEs (both inter- and intra-board)
– Two independent interconnects, fabric and base:
  – The fabric switches 10G Ethernet to & from bays, other boards (P2), and the Front-Panel SFP+ (shelf-external)
  – The base fans out TTC and fans in BUSY to & from bays, other boards (P2), and the FTM (shelf-external)
    – NRC-specific; a trivial implementation using commodity clock fan-out buffers

Slide 6: Block diagram of the COB (Cluster-On-Board)
[Figure: COB block diagram; labels: RTM connection (30 differential pairs/bay), command & control paths]

Slide 7: Preproduction COB
[Figure: preproduction COB photograph; labels: FTM + Base-Board, IPM Controller (IPMC), DTM bay, Zone 1 (power), Zone 2, Zone 3 (PICMG 3.8), 8 SFP+, RTM, DPM bay (1 of 4)]
Differences between the preproduction and production COB:
– Move the SFP+ from the RTM to the Front-Board
– Move the IPMC onto the DTM
– Break up the FTM into two different boards
– Move the resulting base-board next to P2

Slide 8: The RCE (Reconfigurable-Cluster-Element)
The RCE employs System-on-Chip (SoC) technology (Xilinx Virtex-5 FX):
– Neither an FPGA, a processor, nor a DSP (it's all three)
– Contains both soft (programmable) and hard (resource) silicon
The silicon is partitioned between:
– System logic, or CE (Cluster-Element):
  – Reserves the processor resource
  – Interfaces with DDR3 RAM and micro-SD flash
– User logic: application-specific Protocol-Plug-Ins
Two RCEs are realized as one mezzanine board, which plugs into a COB's bays.
[Figure: RCE mezzanine board; labels: SO-DIMM (DDR3), micro-SD, SoC]

Slide 9: The Protocol-Plug-In (PPI)
A PPI is application-specific firmware which communicates with the CE:
– May use either or both soft and hard silicon
– May interact externally or internally to the SoC
The interface model is the plug & socket:
– A protocol is arbitrary, application-specific logic
– A protocol is wrapped with predefined system logic (the Plug)
– The combination of a protocol and its Plug is a PPI
– The CE contains a set of predefined Sockets
– PPIs plug into Sockets to enable communication
The implementation of the NRC is almost entirely within its PPIs:
– The Input PPI receives data from the CSC's ASM-IIs
– The TTC (Rx) PPI decodes TTC and produces clock and L1A
– The FEX PPI produces hit channels from a chamber's data
– The ROL PPI transmits data to the ROS

Slide 10: The CE (Cluster-Element)
For data sent and received by its PPIs, the CE serves as:
– A nexus for that data
– A platform for the software which supervises its flow
It's the platform executing the NRC's application-specific software:
– Feature extraction on one COB & formatting on another
It's a CPU:
– 4 GB of DDR3 at 5 GB/s
– 475 MHz PowerPC 440:
  – Separate 32 KB instruction and data caches
  – APU (128-bit) interface to the PPI sockets (instruction extension)

Slide 11: CE Software Services
– Generic bootstrap loader (for the NRC, boots RTEMS)
– O/S (RTEMS):
  – Multi-tasking real-time kernel
  – POSIX-compliant interfaces
  – Open source
– Persistency & file systems (SD-flash code & data, FAT-16)
– Networking:
  – Full TCP/IP stack (DNS, NTP, DHCP, NFS, etc.)
  – POSIX-compliant interfaces
– PPI support (for the NRC's Input, FEX, and ROL drivers)
– Debugging:
  – GNU based
  – Both local and remote (network)
– Diagnostics (built-in self-tests and diagnostics)
– Development (GNU cross-development environment)

Slide 12: The CSC RTM (fiber connections to on-detector electronics)
Contains 8 SNAP-12 receivers + 4 SNAP-12 transmitters:
– Enough connections for up to 8 chambers. Each chamber:
  – Uses 10 of 12 channels of one receiver
  – Uses 5 of 12 channels of one transmitter
  – Uses 1 of 12 channels of a transmitter for pulser calibration
– All 5 transmit channels are driven in common using a clock fan-out
Paired with one of the four FEX COBs:
– PICMG 3.8 connection between RTM and COB
– One RCE services one chamber
It's the only significant piece of hardware to be designed:
– But an existing RTM serves as prototype

Slide 13: The SFP RTM (fiber connections to the ROS)
Contains four (4) SFP cages, each with four (4) SFP transceivers:
– Enough connections for up to 16 ROLs (Read-Out-Links)
Paired with the one Formatter COB:
– PICMG 3.8 connection between RTM and COB
– Two chambers of data are carried on one (1) ROL
– One RCE services two (2) ROLs
Purchased from Detector R&D; a prototype exists (see below).

Slide 14: Development & installation strategy
Decouple (as much as practical) from P1 needs. Plan to:
– Develop a (32-chamber) ASM-II emulator:
  – Four (4) COBs (reclaimed from the spare pool)
  – Fabricate a complement of CSC RTMs (8 transmit, 4 receive)
– Develop a trigger simulator (a 9th RCE on any COB)
  – See the TTC (Tx) Plug-In
– Reuse existing LTPs and data simulators
– Maintain parallel test stands (SLAC, UCI & CERN Building 188)
Disturb the current complex as little as possible:
– Necessary to (re)commission the small wheel
– Allows side-by-side comparisons
Use an adjoining rack? Stephanie Zimmermann:
– "Rack space needs [for the NRC] to be found for this and details discussed with OPM"
[Figure: rack layout; labels: CSC rack, Muon "upgrade" rack]

Slide 15: Long-term Maintenance
Markus asks: "How does one maintain a new platform standard?"
1. SLAC's interest in ATCA is deep and abiding:
   – SLAC now has more than five years of experience with ATCA
   – ATCA/COB/RCE is a long-term strategic investment for SLAC
     – It is and will be deployed in LCLS, LSST, LBNE, HPS, SiD
2. Transfer experience and knowledge:
   – One reason for UCI's involvement
3. Align our (NRC) ATCA usage with ATLAS:
   – Leverage a large knowledge pool
4. Provide for spares and include redundancy:
   – The NRC has generous (~100%) spares and includes a redundant Shelf Manager & power supply
5. "Maintenance is less onerous than one might expect":
   – Field personnel are not expected to understand ATCA
     – Complexity is in the FRUs (Field-Replaceable-Units), not their shelf
     – FRUs are not diagnosed; they are either failed-over or replaced
   – Reliability was one of ATCA's key requirements

Slide 16: Summary
The NRC fits into a single (5-slot) ATCA shelf.
Despite the change from VME to ATCA, the NRC satisfies all its interfaces:
– CSC on-detector electronics & ROS fibers "plug in"
– LTP and Busy module connections "plug in"
– Partitioning & "stopless-removal" granularity remain the same
The development strategy is to depend as little as possible on the P1 schedule:
– Before shipment to CERN, the NRC is tested as a complete unit
– Emulate the entire on-detector electronics (with 4 COBs)
The installation strategy is to disturb the current complex as little as possible:
– Simplifies (re)commissioning of the small wheel
– Preserves the current complex as a reference platform
Reuse of tools developed under the Detector R&D program transforms the design from hardware-centric to software & firmware-centric:
– Shelf, power, and Control Processor are commodity items
– COB & SFP RTM "purchased" from the Detector R&D program