Trigger Upgrade Planning


Trigger Upgrade Planning Greg Iles, Imperial College

We’ve experimented with new technologies... Matrix (Matt Stettler, John Jones, Magnus Hansen, Greg Iles); Mini CTR2 (Jeremy Mans); OGTI (Greg Iles); Aux Card (Tom Gorski); Aux Pic; Schroff!

...and discussed ideas... “I’m right” — “No, I’m right” — “No, I’m right” — “No, I’m right” — “Where is the coffee?” I’ll need a quadruple espresso after this...

Time for a plan... Why is it urgent? Subsystems are already designing new trigger electronics: in particular HCAL, but others also have upgrade plans. If a comprehensive plan is not adopted now, we will squander the opportunity to build a next-generation trigger system. The interface between the front-end, regional and global trigger is critical, and the requirements are driven by the Regional Trigger (RT). => It would be best to define the new Trigger now, before the HCAL plans are fixed.

Regional Trigger Part I: how best to process ~5 Tb/s? The initial idea was to have a single processing node for all physics objects (red boundary defines a single FPGA). Single processing node: share data in both dimensions to provide complete data for objects that cross boundaries. Very flexible, but jets can be large (~15% of the image in both η and φ), so very large data sharing is required => very inefficient => a big system.

Regional Trigger Part II: alternative solutions. Either split processing into 2 stages (fine processing for electrons / tau jets, coarse processing for jets), or pre-cluster physics objects: build possible physics objects from the data available, then send to neighbours for completion (more complex algorithms, less flexible). Both are currently applied in CMS, but we believe we have a better solution: the Time Multiplexed Trigger (TMT).

Time Multiplexed Trigger — [Diagram: data for successive bunch crossings is dealt in turn to Trigger Partitions 0 through 8, so each partition receives one complete crossing and has 9 bx to process it; red boundary = FPGA, green boundary = crate.]
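The round-robin scheduling at the heart of the TMT can be sketched in a few lines of Python. This is an illustrative model only (the real system is FPGA firmware); the 9-partition count is taken from the slide.

```python
# Illustrative model of TMT round-robin scheduling (not CMS firmware).
# With 9 trigger partitions, all data for bunch crossing n goes to
# partition n % 9, giving each partition 9 bx to process one full event.

N_PARTITIONS = 9

def partition_for_bx(bx):
    """Return the trigger partition that receives bunch crossing bx."""
    return bx % N_PARTITIONS

schedule = [partition_for_bx(bx) for bx in range(10)]
print(schedule)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]
```

Because every partition sees a whole crossing, no data sharing across η/φ boundaries is needed within an event — which is the motivation given on the previous slide.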

The Trigger Partition Crates — [Diagram: nine identical crates, one per time slot (bx = n+0 through n+8), each populated with PWR1/PWR2 supplies, MCH1, MCH2, a CMS AUX/DAQ card, MATRIX cards and a MINI-T card (slot counts 12 and 8); Regional Trigger Cards feed a Global Trigger Card; services (CLK, TTC, TTS & DAQ) shown with an alternative location.]

Regional Trigger Requirements: use the Time Multiplexed Trigger (TMT) as the baseline design. Flexible: all HCAL, ECAL, TK & MU data in a single FPGA; minimal boundary constraints; can be split into separate partitions for testing (e.g. ECAL, HCAL, MU, TK can each have a partition at the same time). Simple to understand. Redundant: loss of a partition results in loss of at most 10% of the trigger rate, and one can easily switch to a backup partition (i.e. a software switch). But what constraints does this impose on the TPGs...
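The redundancy claim — losing one partition costs roughly 1/N of the trigger rate until a software switch reroutes its slot — can be illustrated with a toy routing function. All names here are invented for illustration; none appear in the actual design.

```python
# Hypothetical illustration of the "software switch" redundancy:
# with N round-robin partitions, a dead partition loses 1/N of the
# crossings until its slot is remapped to a backup partition.

N = 9

def route_bx(bx, dead=None, spare=None):
    """Route bunch crossing bx, rerouting a dead partition's slot."""
    p = bx % N
    if p == dead:
        return spare  # software switch: send this slot to the backup
    return p

# Before the switch: partition 3's crossings (1/9 of the total) are lost.
lost = sum(1 for bx in range(900) if route_bx(bx, dead=3) is None)
print(lost, lost / 900)  # 100 crossings, ~11% of the rate

# After the switch: slot 3 goes to a spare partition, nothing is lost.
assert all(route_bx(bx, dead=3, spare=9) is not None for bx in range(900))
```

With the 9-to-18 partition range discussed later, the worst-case loss is 1/9 ≈ 11%, consistent with the "max 10%" figure quoted for a ten-or-more-partition system.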

Services: MCH1 provides GbE and standard functionality; MCH2 provides services — DAQ (data concentration), LHC clock, TTC/TTS. See also: DAQ/Timing/Control Card (DTC), Eric Hazen; uTCA Aux Card Status - TTC+S-LINK64, Tom Gorski.

VadaTech CEO / CTO at CERN Nov 19th. Tongue 1: AMC port 1 — DAQ GbE. Tongue 2: AMC port 3 — TTC / TTS. Tongue 2: AMC Clk3 — LHC clock.

CMS MCH: Service card — needs to be designed. MCH2-T1: V6 XC6VLX240T (24 links); DAQ from AMC Port 1; SFP+ (10GbE) local DAQ; SFP+ (10GbE) global DAQ; 80-way header. MCH2-T2: TTC MagJack; CLK40 to AMC CLK3; CLK2 (unused); TTC to AMC Port 3 Rx; TTS from AMC Port 3 Tx; 80-way header. Advantages: all services on just one card; high-speed serial on Tongue 1 only; dual DAQ outputs; Tongues 3/4 (fat, ext-fat pipes) spare for other apps.

Installation scenario: ideally we wish to install, commission and test the new trigger system in parallel with the existing trigger. The following is an example of how this might happen: HCAL upgrade + test of the Time Multiplexed Trigger; ECAL upgrade; full Time Multiplexed Trigger system; incorporate the TK + MU trigger.

Current System — [Diagram: ECAL and HCAL feed the ECAL TPG and HCAL TPG, which drive the RCT over 4x 1.2 Gb/s copper links; RCT output goes to the GCT and GT.]

Stage 1: HCAL Upgrade — [Diagram: the HCAL TPG gains an optical interface to the RCT (many 4.8 Gb/s fibres, alongside the existing 4x 1.2 Gb/s links); a new RT+GT receives time-multiplexed data for 1/10 to 1/20 of events and sends a Technical Trigger to the existing GT.]

Stage 2: ECAL Upgrade — [Diagram: ECAL TPG outputs (4.8 Gb/s, duplicated with the other ECAL TPGs) pass through an ECAL Time Multiplexer rack (ECAL TM) to the new RT+GT, joining the many 4.8 Gb/s HCAL TPG links; time-multiplexed data for 1/10 to 1/20 of events; Technical Trigger to the existing GT.]

Stage 3: Upgrade to New Trigger — [Diagram: the ECAL TPGs (via the ECAL TM, duplicated with the other ECAL TPGs) and HCAL TPGs now feed the new RT+GT directly.]

Stage 4: Add TK + MU Trigger — [Diagram: as Stage 3, with the MU TPG and TK TPG inputs added to the new RT+GT.]

Technology choice: use V6, not V5.

                          XC5VTX150T    XC5VTX240T    XC6VLX550T
Links                     40 @ 5.0Gb/s  48 @ 5.0Gb/s  36 @ 6.5Gb/s
Slices (k)                23            37            86
Logic Cells (k)           148           240           550
CLB Flip-Flops (k)        92            150           687
Distributed RAM (kbits)   1,500         2,400         6,200
BRAM (36 kbit)            228           324           632
Cost                      N/A           4.7k          4.5k

A single Virtex-6 CLB comprises two slices, each containing four 6-input LUTs and eight flip-flops (twice the flip-flop count of a Virtex-5 slice), for a total of eight 6-LUTs and 16 flip-flops per CLB. A single Virtex-5 CLB comprises two slices, each containing four 6-input LUTs and four flip-flops, for a total of eight 6-LUTs and eight flip-flops per CLB.

Plan for the next 5 years:
● Technical Proposal; review of the proposal; resources required (financial / manpower / time)
● Hardware development: prototypes — processing card, service card, optical SLBs, custom crate; cooling (uTCA crates have front-to-back airflow)
● Firmware: serial protocol, slow control, algorithms
● Software: low-level drivers, communication, interfaces to legacy systems
● System

Road Map (no Gantt chart yet... but you wouldn’t be able to read it anyway):
● Technical Trigger Proposal, then hardware development
● 4/2010: delivery of the uTCA Regional Trigger Crate and HCAL HW; demo system with HCAL HW & Matrix I cards; firmware & software development begins
● 4/2011: delivery of Matrix II and OSLBs; 904 system with Matrix II cards
● 4/2012: integration of OSLBs in USC55; USC55 trigger partition

End of Part I: => Trigger Upgrade Proposal by April 2010. The next bit is confusing, so before I lose you...

Implications for TPGs: can we put time multiplexing inside the TPG FPGA? The maximum number of trigger partitions is limited by latency; assume between 9 and 18 partitions. Hence each TPG requires a minimum of 9 outputs, each fibre carrying 8 towers/bx x 9 bx = 72 towers, i.e. transmit 4 towers in η over ¼ of φ. Time multiplexing in φ is assumed, but η is also possible. But the HCAL TPG is designed to generate 36-40 towers: very impressive, but we still need a factor of 2. Revisit the TPG idea...
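The bandwidth arithmetic on this slide can be checked directly. This is a sketch using only the slide's numbers, not firmware or a real interface:

```python
# Numbers from the slide: with 9 time-multiplexed partitions, each TPG
# output fibre carries 8 towers per bx, so over the 9 bx allotted to
# one event it delivers 8 * 9 = 72 towers.
towers_per_bx = 8
n_partitions = 9
towers_per_fibre = towers_per_bx * n_partitions
print(towers_per_fibre)  # 72

# The existing HCAL TPG generates 36-40 towers: roughly a factor of 2
# short of 72, hence the "need factor of 2" / "x2 BW at TPG" remarks.
print(towers_per_fibre / 36)  # 2.0
```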

TPG design: the TPG is assumed to be a single FPGA (why: simplicity, board space, power). [Diagram: ~3000 links into all TPGs; each TPG with ~72 (or 36) inputs and 18 (or 9) outputs; 792 output links in total.] Too much input data? No — too many serial links, but only just: close to the FPGA data-bandwidth limit, assuming the input is 3.36 Gb/s effective GBT. With the trigger partitions reduced to 9, 5.0 Gb/s links are required — not possible with the GBT protocol (insufficient FPGA resources). Approx 4:1 input-to-output ratio.

TPGs with the GBT protocol? [Diagram: three variants, each a TPG with 36 inputs and 9 outputs — via parallel GBT chips, via 180 LVDS pairs @ 320 Mb/s, or via 180 LVDS pairs @ 640 Mb/s.] Possible, but not with the GBT protocol: 36x GBTs means ~36 W to 72 W of power, 92 cm2 of board real estate and 4320 SFr in cost. One can select the output clock, i.e. it doesn’t have to be the LHC clock. Will future FPGAs have enough standard I/O? XC6VLX240T I/O = 720 (24,0).

The alternative: a Time Multiplexing Rack. Use an FPGA to decode the GBT data; only use ½ of the available links to decode GBT, otherwise no logic is left for other processing. Use different GTX blocks for GBT-Rx and RT-Tx inside the TPG: this allows the trigger to decouple from the LHC clock if required. CMS is arranged in blocks of constant-φ strips; this would allow mapping to constant-η strips. Consequence: at least x3 to x5 more HCAL hardware, a time multiplex rack, and larger latency. [Diagram: 12 inputs/outputs per TPG, x12 TPGs; time multiplex options 2x18, 3x12 or 4x9.]

Time multiplex rack (not needed for HCAL if we get x2 BW at the TPG). Assumes tower resolution from HF: 18x22 regions, 16 towers per region, 792 fibres, 12-bit data, 8B/10B, 5 Gb/s. A time multiplex card may have 36 in/out (2 crates x 11 cards) or 18 in/out (4 crates x 11 cards). [Diagram: fibre patch panels feeding uTCA crates of 11 TMs each in a 36U or 48U rack; a 12-way ribbon = ¼ η, 1 region in φ; 9 ribbons can cover ½ φ, hence one patch panel = ¼ η, ½ φ.]
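The two crate options quoted above are consistent with the fibre count, as a quick check shows. This sketch uses only the slide's numbers:

```python
# Consistency check of the slide's numbers: 792 trigger fibres can be
# terminated either on 36-in/out cards (2 crates x 11 cards) or on
# 18-in/out cards (4 crates x 11 cards).
FIBRES = 792
for io_per_card, n_crates in [(36, 2), (18, 4)]:
    n_cards = n_crates * 11  # 11 TM cards per uTCA crate
    assert io_per_card * n_cards == FIBRES
    print(f"{io_per_card} in/out: {n_crates} crates, {n_cards} cards")
```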

Are there alternatives to time multiplex racks if the HCAL TPG stays the same? Yes: e.g. return to the concept of fine/coarse processing for electrons/jets, i.e. jets at 2x2-tower resolution. This requires less sharing (e.g. just a 2-tower rather than a 4-tower overlap for fine processing) and allows a more parallel architecture; the TPG would have to provide 2 towers in η and ¼ of φ. But is it wise to separate fine & coarse processing? Still requires some thought & discussion with TPG experts.

End... [Photo: VadaTech VT891 crate.]

Extra

Mux: put all data from 1 bx into a single FIFO. Data from the cavern into the TPG covers ½ region in η and all φ regions, OR 1 region in η and ½ of all φ regions. [Diagram: crossings n+0 through n+8 each multiplexed into a FIFO and sent on a single fibre to its processing blade.]
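The mux step described above can be modelled in miniature. All names here are hypothetical; the real mux is firmware feeding a serial link:

```python
# Miniature model of the mux step (names hypothetical): collect the
# per-region fragments belonging to one bunch crossing into a single
# FIFO, to be streamed out on one fibre to one processing blade.
from collections import deque

def mux_event(fragments_by_region):
    """Pack all per-region fragments for one bx into a single FIFO."""
    fifo = deque()
    for region in sorted(fragments_by_region):  # fixed readout order
        fifo.extend(fragments_by_region[region])
    return fifo

event = {"eta0_phi0": [1, 2], "eta0_phi1": [3, 4]}
print(list(mux_event(event)))  # [1, 2, 3, 4]
```

A fixed readout order matters: the downstream blade reconstructs the geometry purely from position in the stream, since no region addresses are sent.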

Decode GBT: only instrument 20/24 links of an XC5VFX200T — 96% logic cell usage (F. Marin, TWEPP-09; Altera results slightly better). We cannot use GBT links to drive data into the processing FPGA. Options: (a) Use powerful FPGAs simply to decode the GBT protocol — seems wasteful, and the SerDes Tx/Rx may have to use the same clock. (b) Use multiple GBTs: 200 differential pairs @ 320 Mb/s lanes — OK if the diff pairs remain on the FPGAs! Power = 20 W (assume less power because less I/O); components = 20 (16 mm x 16 mm, 0.8 mm pitch); cost = 120 x 20 = 2400 SFr! (c) Use Xilinx SIRF both on and off detector — if they let us...