Upgrading the ATLAS Level-1 Calorimeter Trigger using Topological Information
Yuri ERMOLINE, Michigan State University
TWEPP 2010, Aachen, Germany, 20-24 September 2010
On behalf of the ATLAS TDAQ Collaboration
ATLAS TDAQ Collaboration, The ATLAS Trigger/DAQ Authorlist, version 4.0, ATL-DAQ-PUB-2010-002, CERN, Geneva, 14 May 2010, http://cdsweb.cern.ch/record/1265604
1/16 Outline
- Current L1 Calorimeter trigger system, limitations
- New hardware for topology functionality (CMM++)
  - Functionalities
  - Algorithms
  - Development scenario
- Recent R&D
  - Backplane data transfer rates
  - Latency survey
  - Optical link studies
  - Technology demonstrator
  - Firmware studies
- Summary/conclusions
2/16 Current L1 Calorimeter trigger system
- Counts of identified objects meeting specified thresholds.
- Region of Interest (RoI) information read out only on L1Accept.
[Diagram: system overview; labels "2 crates", "4 crates"]
3/16 Limitations of current system
- Based only on counts of objects
- Jets and em/tau clusters identified in different subsystems
  - But clusters can also look like jets
  - No way to distinguish when this happens
- Possible solution
  - Include jet/cluster positions in the real-time data path
  - Use this information to add topology-based algorithms at Level 1
4/16 Ideas on limited short-term topological upgrade
- The current L1Calo will remain mainly unchanged for the next few years
  - Must remain compatible with the rest of the running system
  - Hardware components are already 5-7 years old, leaving little freedom for modifications
- MC simulations indicate some performance degradation of multiplicity-only algorithms at 1-2E34
- A promising solution: drive the backplane at higher speed to add RoI positions to the real-time data path, enabling new algorithms based on event topology
5/16 Modifications required for limited upgrade
[Diagram: digitized ET calorimeter signals (~7200) via Receivers/analogue sums into PreProcessor (124 modules); Cluster Processor (56 modules) and Jet/Energy (32 modules) over the backplane into Merging modules (8 + 4); Readout Drivers (14 + 6 modules) for readout and region-of-interest data; new Topology Processor to CTP, with L1 Muon]
- Modify firmware in processor modules
- Increase data transfer rate over crate backplane (40 Mbit/s -> 160 Mbit/s)
- Replace merging modules with upgraded hardware
- Eventual Topology Processor (TP)
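The gain from the backplane speed-up can be sanity-checked with a short calculation (an illustrative sketch; the rates come from the slide, the function and constant names are my own):

```python
# At the LHC bunch-crossing period of 25 ns, raising the per-line backplane
# rate from 40 Mbit/s to 160 Mbit/s quadruples the bits available per
# crossing on each line, making room for RoI position information.
BC_PERIOD_NS = 25.0  # LHC bunch-crossing period in nanoseconds

def bits_per_crossing(rate_mbps: float) -> float:
    """Bits transferred per bunch crossing on one backplane line."""
    return rate_mbps * 1e6 * BC_PERIOD_NS * 1e-9

old_bits = bits_per_crossing(40.0)   # 1 bit per line per crossing
new_bits = bits_per_crossing(160.0)  # 4 bits per line per crossing
```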
6/16 Current CMM (Common Merger Module)
- "Crate" FPGA on each CMM receives backplane data, produces crate-wide sums of identified features
- "System" FPGA collects crate results over LVDS cables, sends trigger output to CTP
- On L1A, data and RoI readout via G-Links
- All CMMs identical
  - Several different firmware versions
  - Xilinx SystemACE
7/16 CMM++ functionality
- Complete backward compatibility
- Features collected from upgraded CPM/JEM modules and transmitted to the topology processor
- Supports multiple optical Tx and/or Rx modules
  - Tx: duplicate outputs to multiple processor nodes
  - Rx: data merging for eventual internal topology processing without a TP
8/16 Topology algorithms study
- Examples using local topology information (single calo quadrant)
  - Identify spatial overlap between e/tau clusters and jets
  - Use local jet ET sum to estimate energy of overlapping e/tau object
  - Requires jet energies to be added to real-time data path
- Examples using global topology
  - Non back-to-back jets
  - Rapidity gaps in eta and/or phi
  - Invariant transverse mass calculations
  - Jet sphericity
- Initial studies look promising
- N.B. a CMM++-only option may limit global topology capabilities
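The global-topology examples above reduce to simple geometry on RoI coordinates. A minimal Python sketch of two of them (illustrative only; the tolerance value, function names, and the massless-object approximation for the transverse mass are my assumptions, not L1Calo firmware):

```python
import math

def delta_phi(phi1: float, phi2: float) -> float:
    """Wrap the azimuthal difference between two RoIs into [0, pi]."""
    d = abs(phi1 - phi2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def is_back_to_back(phi1: float, phi2: float, tolerance: float = 0.4) -> bool:
    """Two jets are back-to-back if their azimuthal separation is near pi."""
    return abs(delta_phi(phi1, phi2) - math.pi) < tolerance

def transverse_mass(et1: float, phi1: float, et2: float, phi2: float) -> float:
    """m_T = sqrt(2 * ET1 * ET2 * (1 - cos(dphi))) for massless objects."""
    return math.sqrt(2 * et1 * et2 * (1 - math.cos(delta_phi(phi1, phi2))))
```

In real firmware such quantities would be computed with look-up tables and fixed-point arithmetic rather than floating point, but the geometry is the same.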
9/16 CMM++ development scenario
- Hardware design with all present interfaces plus optical links to the new topological processor, one large FPGA
- Adapt current CMM firmware to the new hardware for initial use, incrementally add new functionality
- Upgrade CPM/JEM and CMM++ module firmware for the new data format and 160 Mbit/s data transfer
- Two topology processing scenarios:
  - Separate TP processor crate
  - One CMM++ in the system provides limited topological functionality if the final TP is not yet available
10/16 JEM-CMM backplane data transfer
- Each CPM/JEM sources 50 lines into two mergers at 40 Mbit/s
- Scope shots of JEM signals over the longest backplane track
- Backplane tester (BLT) module was built:
  - Data from all CPM/JEM modules
  - 400 lines, individually deskewed
  - Forwarded clock, sink terminated
  - Eye > 3.7 ns @ 160 Mbit/s
  - Bit error rate tests on backplane data
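The quoted eye opening can be put in context with a short timing-budget calculation (illustrative arithmetic only; the 3.7 ns and 160 Mbit/s figures are from the slide, the derived margin is my own back-of-envelope number):

```python
# At 160 Mbit/s the bit period (unit interval) is 6.25 ns, so a measured
# eye opening of more than 3.7 ns leaves a sampling window of roughly 59%
# of the unit interval for the deskewed, sink-terminated backplane lines.
def bit_period_ns(rate_mbps: float) -> float:
    """Unit interval in nanoseconds for a given line rate in Mbit/s."""
    return 1e3 / rate_mbps

def eye_fraction(eye_ns: float, rate_mbps: float) -> float:
    """Fraction of the unit interval covered by the measured eye opening."""
    return eye_ns / bit_period_ns(rate_mbps)

period = bit_period_ns(160.0)      # 6.25 ns
margin = eye_fraction(3.7, 160.0)  # 0.592 of the unit interval
```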
11/16 CPM-CMM backplane data transfer
- CPM tests demonstrate 160 Mbit/s with reasonable signal quality
  - A test board allows probing of backplane signals in the CMM slot
  - Termination on the CMM side improves signal quality,
    but increases dissipated power in the CMM slot (a problem if done inside the FPGA)
  - Also tested signals with termination on the source only
12/16 L1Calo trigger latency
- Latency measurements in the complete L1Calo system
- Detailed breakdown:
[Table: per-stage latency breakdown across the chain (TC, PP, Receiver, PPM "RMB", CPM, CMM, stages A-F); total 960 ns, from which 35+25 ns are to be subtracted]
- This is only part of the total L1 latency!
13/16 10 Gb Ethernet-based optical link
- Inexpensive 30 Gbit/s link using commercial components
  - Xilinx Spartan-3 FPGA
  - 10 Gb Ethernet transceiver TLK3114SC (XAUI)
  - SNAP12 Tx/Rx pair
- Can run links synchronously with the LHC clock
  - Send alignment characters in some LHC bunch gaps for link maintenance
- Can drive multi-Gbit links with TTCrx clock outputs
  - Reduce jitter with LMK03033CISQ clock conditioner
[Photo: link board prototype]
14/16 Technology demonstrator project
- GOLD (Generic Optical Link Demonstrator)
- New, state-of-the-art FPGAs
  - Xilinx XC6VLXxxxT
  - Xilinx XC6VHX380T
    - 640 I/O, 48 GTX, 24 GTH
- Optical links (6.4-10 Gbit/s)
  - SNAP12 and Avago optolinks
  - Optical backplane connectors
- ATCA
  - Backplane/form factor
  - Power distribution and local conversion
15/16 Porting the existing CMM firmware
- Aim: port the Jet CMM firmware to a target Virtex-6 device
  - Use existing VHDL with minimal changes
  - Update architecture-specific features
  - Estimate I/O requirements
  - Realistic user constraint file (UCF) for ISE timing simulation
- Results so far
  - Existing VHDL ports easily, uses ~2% of available resources
  - By emulating G-Link in the FPGA, can keep I/O count below 600/640 pins
    - Not including dedicated multi-Gbit links
16/16 Summary/conclusions
- Want to improve the existing L1Calo system
  - Maintain trigger quality up to 1-2E34 luminosity
  - Provide the best possible algorithms for ATLAS
- Add topological trigger capability
  - Minimal changes to the existing system
  - Low impact on other ATLAS components
- R&D projects/studies well under way
  - Backplane data transfer rates
  - Technology demonstrator
  - Firmware studies
- Looks promising...