MicroTCA in CMS
Not official! Just my opinions...
Greg Iles, 6 July 2010
My background: the Calorimeter Trigger
[Diagram: Hadron Calorimeter and Electromagnetic Calorimeter feeding the Regional Calorimeter Trigger (RCT) and Global Calorimeter Trigger (GCT); data rates of 3 Tb/s and 0.3 Tb/s]
How can we improve the trigger?
Requirements for a trigger...
Must process Tb/s. Not a problem, just make it parallel, but...
– Need to build physics objects, which don't observe detector granularity!
– Data sharing and data duplication
– Need to sort physics objects
– Avoid a multi-stage sort to minimise latency
– Fan-in constraints restrict the number of "physics builders"
– Only have approx. 1 µs
– Each serialisation costs 100 to 200 ns (see the sketch below)
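A back-of-envelope illustration of why the latency budget bites (a sketch in Python; the 1 µs and 100-200 ns figures are from this slide, everything else is purely illustrative):

    # How many serialisation hops fit in the trigger latency budget?
    TOTAL_BUDGET_NS = 1000          # approx. 1 us total trigger latency
    SERIALISATION_NS = (100, 200)   # cost per serialisation, best/worst case

    for cost in SERIALISATION_NS:
        hops = TOTAL_BUDGET_NS // cost
        print(f"{cost} ns per hop: at most {hops} hops, "
              f"ignoring all processing time")

At 200 ns per serialisation only five link hops fit in the entire budget before any processing happens at all, which is why a multi-stage sort is so expensive.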
High Speed Serial Link Technology
Pros
– Significantly higher data rate than standard I/O
– Easily connected to optics
Serial backplanes available
– e.g. MicroTCA: based on the Advanced Mezzanine Card (AMC) developed for ATCA
– Also ATCA, CompactPCI Serial, VPX
Serial cross-points available
– Wire-speed duplication of data
– 144x144 at 10 Gb/s
[Photo: Matrix card, part of the GCT project; design by Matt Stettler (LANL)]
MicroTCA (MTCA.0 R1.0, July 2006) is built around the mezzanine card (AMC) designed for ATCA (December 2002).
[Figure: ATCA card, from the PICMG Short Form Spec]
AMC Card
Originally intended as a hot-swappable mezzanine standard for ATCA, but soon used as the basis for the MicroTCA standard
– 20 bidirectional differential pairs at up to 12.5 Gb/s (not yet demonstrated); normally operates at lower Gb/s rates
– 5 clocks
– Protocol agnostic: PCIe, SRIO, GbE
– 6 form factors: 74 or 149 mm wide; 13, 18 or 28 mm high; 180 mm deep
– Power supply: 80 W (max) on +12 V
– Connector: 85-pin (single sided) or 170-pin (double sided) edge connector
μTCA
Best thing about μTCA: very flexible
– Also possibly the worst thing...
Built around a MicroTCA Carrier Hub (MCH)
– System management via IPMI (Intelligent Platform Management Interface), which runs over I2C
– GbE
– SATA/SAS
– Clock distribution to/from slots
– Fat pipe (x4 lanes)
Redundant system possible
Up to 12+1 AMC cards
Matt Stettler (LANL) pioneered μTCA in CMS
– 2U / 19" chassis
– Slots for up to 12 AMCs
– Cooling for 40 W per slot
– 6 mid-size (single or double width) AMCs
– AC or DC PSU
– Single-star backplane
MCH
– Fat-pipe mezzanines for PCIe, 10Gb Ethernet, Serial RapidIO
– Clocks
Vadatech VT chassis [photo]
– Full-width AMC slots
– MCH1 provides GbE and standard functionality
– MCH2: LHC-CLK, TTC & TTS and DAQ concentrator
– Dual star, telecom clocks
[Diagram: AMC backplane port usage, dual star]
– GbE
– SATA or SAS
– Fat pipe (x4 lanes), e.g. PCIe or SRIO
– Clk Out / Clk In
– Fast control (not SerDes): TTC-Out, TTS-In
– DAQ-In or switch
– LHC 40 MHz Clk
– Reserved: alternative DAQ? alternative comms?
Not shown: 8 spare ports, 2 spare clocks
Towards a CMS system
DTC by Eric Hazen, Boston University
Purpose
– Distribute clock
– Distribute fast control
– Receive fast feedback
– Optionally act as DAQ concentrator (trigger cards send only 1% of their data to DAQ)
Fixed latency, NOT SerDes, 800 Mb/s (see the sanity check below)
Prototype built on an MCH from NAT, but that is not required: vendor independent
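A quick sanity check on the 800 Mb/s figure, under my assumption (not stated on the slide) that the fast-control stream is locked to the LHC 40 MHz bunch clock:

    # Bits available per LHC bunch crossing on the DTC fast-control link,
    # assuming the 800 Mb/s stream is locked to the 40 MHz LHC clock
    # (an assumption for illustration; the slide only quotes 800 Mb/s).
    LINK_RATE_BPS = 800e6
    LHC_CLOCK_HZ = 40e6

    print(f"{LINK_RATE_BPS / LHC_CLOCK_HZ:.0f} bits per bunch crossing")  # -> 20

That would give 20 bits per crossing for clock, fast-control commands and feedback.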
MicroTCA Disadvantages
Board thickness = 1.6 mm (limited by the edge connector)
– New Harting connector allows 2.0 mm
Limited number of backplane I/O
– 8 bidirectional I/O spare
– Depending on the application, may be able to increase to 16
– Not suitable for a full-mesh backplane
PCIe vs telecom clocks
– The PCIe system stole AMC-Clk3, which is used in redundant telecom systems
– Required because PCIe usually uses a spread-spectrum clock distributed to all cards
– PCIe can optionally operate without a "fabric" clock
No rear transition module
– Not convinced this is an issue for us
Communication
Protocol format for register read/write capability over a large-latency communication medium, i.e. GbE in this case
– Single data packet, multiple transactions (sketched below)
UDP/IP can be implemented in VHDL
– Two versions already exist
TCP/IP usually implemented with a processor
– PowerPC hard core or MicroBlaze soft core
– If hardware accelerated: > 500 Mb/s
[Block diagram: PHY, EMAC, UDP or TCP core and transaction engine, alongside I2C, GTX and DAQ cores]
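To make "single data packet, multiple transactions" concrete, here is a minimal host-side sketch in Python. The wire format (a word count followed by opcode/address/value triplets), the port number and the card address are invented for illustration; the real protocol format was still being specified.

    import socket
    import struct

    # Hypothetical single-packet, multi-transaction register access.
    # Invented wire format: a 32-bit transaction count, then each
    # transaction as three big-endian 32-bit words (opcode, addr, value).
    READ, WRITE = 0, 1

    def build_packet(transactions):
        """Pack several register transactions into one UDP payload."""
        payload = struct.pack(">I", len(transactions))
        for opcode, addr, value in transactions:
            payload += struct.pack(">III", opcode, addr, value)
        return payload

    # One datagram carries several reads/writes, so a single network
    # round trip amortises the GbE latency over many register accesses.
    pkt = build_packet([
        (WRITE, 0x00000004, 0xDEADBEEF),
        (READ,  0x00000008, 0),
        (READ,  0x0000000C, 0),
    ])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(pkt, ("192.168.0.10", 50001))  # hypothetical card address
    reply, _ = sock.recvfrom(1500)             # one reply datagram back

Batching transactions like this is what makes a high-latency medium such as GbE usable for register access.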
Software: Architecture
A hardware-controller PC separates the control LAN and the user code from the hardware LAN and the devices
– Unlike the current TS architecture, all network traffic is hidden from the end user
– Made possible by a common interface layer within the firmware, mirrored within the software
[Diagram: a single multicore host; user code and the network interface on the control-LAN fabric; kernel async I/O services, a transport adapter and a multiplexer layer in front of the hardware-LAN fabric]
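A sketch of the multiplexer idea in Python (all addresses, ports and framing are hypothetical; the point is only that user requests are funnelled through one process, so devices never see end-user traffic directly):

    import asyncio

    HW_ADDR = ("192.168.1.20", 50001)  # hypothetical device on the hardware LAN

    async def hw_worker(queue):
        """The only code that ever talks to the hardware LAN."""
        while True:
            request, reply_future = await queue.get()
            # One connection per request keeps the sketch simple; a real
            # controller would pool connections and pipeline requests.
            reader, writer = await asyncio.open_connection(*HW_ADDR)
            writer.write(request)
            await writer.drain()
            reply_future.set_result(await reader.read(1500))
            writer.close()

    async def main():
        queue = asyncio.Queue()
        asyncio.create_task(hw_worker(queue))

        async def handle_user(reader, writer):
            # User code on the control LAN never sees hardware traffic:
            # every request goes through the single worker queue.
            request = await reader.read(1500)
            reply_future = asyncio.get_running_loop().create_future()
            await queue.put((request, reply_future))
            writer.write(await reply_future)
            await writer.drain()
            writer.close()

        server = await asyncio.start_server(handle_user, "0.0.0.0", 60001)
        async with server:
            await server.serve_forever()

    asyncio.run(main())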
Gaining momentum...
Jeremy Manns & Erich Frahm, University of Minnesota
– Specified a protocol for a large-latency communication bus (i.e. Ethernet in this case): single data packet, multiple transactions
– Provided a UDP Verilog core
Rob Frazier & Dave Newbold, University of Bristol
– Provide a HAL to access cards
– Online software control
– Trek (?) hardware-accelerated processor solution for TCP/IP
Wim Beaumont, Universiteit Antwerpen
– TCP/IP, IPMI, card infrastructure
Wesley Smith & Tom Gorski
– TCP/IP with Xilinx MicroBlaze
Outstanding Issues
Crate cooling
– Front-to-back, top-to-bottom or flexible
Rack cooling
– Vertical air flow with heat exchangers inside the rack
– Heat exchanger inside the rear door
Some subtleties about telecom/PCIe clock distribution
Rear transition modules?
Rear Transition Module
Physics xTCA working group: interested parties seem to be DESY and Schroff
[Figure: Physics Profile, shown for comparison]
Cooling
[Photos: in-row and in-rack cooling units]
– More efficient to cool a small amount of hot air close to the heat source than a large volume of lukewarm air
– Use hot/cold aisle containment to improve air flow and efficiency
– As rack power dissipation has gone up, cooling has moved closer to the rack
We have this in CMS! But industry seems keen to separate cooling and server racks, e.g. "in-row" and "in-rack" cooling
Questions?
DRAFT document on MicroTCA in physics:
Hardware: MINI-T5
– Xilinx Virtex-5 XC5VTX150T/240T
– SNAP12 / PPOD: 120/60 Gb/s primary input/output
– QSFPs: 40 Gb/s bidirectional
– 2x40 LVDS at 800 Mb/s
– Microcontroller