CPM FDR, Architecture and Challenges
22nd March 2005

CPM architecture and challenges
- CP system requirements
- Architecture
  - Modularity
  - Data formats
  - Data flow
- Challenges
  - High-speed data paths
  - Latency
CP system requirements
- Process -2.5 < η < 2.5 region
  - 50 x 64 trigger towers per layer
  - Two layers
  - 8-bit data (0-255 GeV)
- Relatively complex algorithm
- Output data to CTP
  - 16 x 3-bit hit counts
  - Each hit condition is a combination of four thresholds
- Output data to RODs
  - Intermediate results
  - RoI data for RoIB
- Cluster algorithms
  - 4 x 4 x 2 cell environment
  - Sliding window (see the sketch below)
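The sliding-window idea can be sketched in a few lines. This is a simplified illustration only: it assumes a tiny single-layer tower grid and a plain "local maximum of 2 x 2 window sums" criterion, whereas the real algorithm works on the 4 x 4 x 2 environment over both layers and applies the four programmable thresholds; all names below are illustrative.

```python
# Minimal sketch of a sliding-window cluster search over a small grid of
# 8-bit trigger towers (single layer, illustrative values only).

def window_sum(towers, eta0, phi0, size=2):
    """Sum a size x size block of towers with corner (eta0, phi0)."""
    return sum(towers[e][p]
               for e in range(eta0, eta0 + size)
               for p in range(phi0, phi0 + size))

def sliding_window_maxima(towers, size=2):
    """Return windows whose 2x2 sum is a local maximum (cluster candidates)."""
    n_eta, n_phi = len(towers), len(towers[0])
    sums = {(e, p): window_sum(towers, e, p, size)
            for e in range(n_eta - size + 1)
            for p in range(n_phi - size + 1)}
    return [(e, p, s) for (e, p), s in sums.items()
            if all(s >= sums.get((e + de, p + dp), 0)
                   for de in (-1, 0, 1) for dp in (-1, 0, 1))]

# Tiny 4x4 example (values in GeV; the real towers saturate at 255)
towers = [[0, 0, 0, 0],
          [0, 12, 30, 0],
          [0, 5, 8, 0],
          [0, 0, 0, 0]]
print(sliding_window_maxima(towers))   # [(1, 1, 55)] - one candidate window
```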
System design considerations
- Several major challenges to overcome
  - Large processing capacity
  - Data I/O, largely at the input
  - Latency requirements
- Processing must be split over several modules working in parallel
  - But the overlapping nature of the algorithm implies that fan-out is needed
  - Modularity is a compromise between competing requirements
  - High-connectivity back-plane required for data sharing
- Data must be 'compressed' as much as possible
  - Use data reduction whenever possible
  - Data serialisation at various speeds used to reduce I/O pin counts
System modularity
- Full system: 50 x 64 x 2 trigger towers
- Four crates, each processing one quadrant in phi: 50 x 16 x 2 core towers per crate
- Eta range split over 14 CPMs: 4 x 16 x 2 core towers per module
- Each module contains 8 CP FPGAs: 4 x 2 x 2 core towers per FPGA
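These figures can be cross-checked with a few lines of arithmetic (a sketch only; variable names are illustrative, tower counts written as eta x phi x layers):

```python
# Quick arithmetic check of the modularity figures quoted above.

full_system = 50 * 64 * 2    # 6400 trigger towers in the full CP system
per_crate   = 50 * 16 * 2    # 1600 core towers: one phi quadrant (64 phi bins / 4 crates)
per_cpm     = 4 * 16 * 2     # 128 core towers per CPM
per_fpga    = 4 * 2 * 2      # 16 core towers per CP FPGA (16 phi bins / 8 FPGAs)

assert 4 * per_crate == full_system     # four quadrants cover the system
assert 8 * per_fpga == per_cpm          # eight CP FPGAs cover one module
assert 14 * 4 >= 50                     # 14 CPMs x 4 eta bins cover the 50-bin eta range
```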
Board-level fan-out, input signals and the back-plane
- CPM has 64 core algorithm cells
  - 16 x 4 reference towers
  - Obtained from direct PPM connections (2 PPMs per CPM)
- Algorithm requires extra surrounding cells for the 'environment'
  - One extra below, two above
  - 19 x 4 x 2 towers in all
- Fan-out in phi achieved via multiple copies of the PPM output data
- Fan-out in eta achieved via the back-plane
Internal fan-out and the Cluster Processing FPGA environment
- CP FPGA processes 2 x 4 reference cells
- Algorithm requires 4 x 4 x 2 cells around each reference cell
- Convolving these gives a 5 x 7 x 2 FPGA environment (see the check below)
- Data received from 18 different serialiser FPGAs
  - 6 on-board
  - 12 through the back-plane
[Figure: CP FPGA environment - on-board 'core' cells plus data from the left, from the right, from above and from below]
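The 5 x 7 figure follows directly from the window size; a short check (the asymmetric one-below/two-above split per axis is taken from the fan-out slide):

```python
# The 4x4 window around each reference tower adds 3 extra towers per axis
# (1 on one side, 2 on the other), so a 2x4 block of reference cells needs
# a (2+3) x (4+3) environment per layer.

def environment(core_eta, core_phi, window=4):
    extra = window - 1                 # 3 extra towers along each axis
    return core_eta + extra, core_phi + extra

print(environment(2, 4))               # (5, 7) -> the 5 x 7 x 2 FPGA environment
```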
CPM data formats - tower data
- 8-bit tower data
  - PPM peak-finding algorithm guarantees any non-zero data is surrounded by zeroes
  - Allows data encoding/compression
- Two 8-bit towers converted to one 9-bit 'BC-muxed' data word (see the sketch below)
  - Add an odd-parity bit for error detection
- 160 input towers encoded in 80 x 10-bit data streams
- Same format utilised for:
  - input to the CPM
  - between serialiser FPGA and CP FPGA
[Figure: two towers x 8 bits -> 'BC-muxed' 10-bit data word = 8-bit data + BC-mux bit + parity bit]
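A minimal sketch of how one 8-bit tower value maps onto the 10-bit BC-muxed word, assuming a simple [parity | BC-mux flag | 8 data bits] layout; only the bit counts come from this slide, the bit ordering is an assumption, and the real rules for how the two towers share successive bunch crossings are not reproduced here.

```python
# Sketch of the 10-bit BC-muxed word: 8 data bits, a BC-mux flag saying
# which of the two towers the value belongs to, and an odd-parity bit.

def odd_parity(value, width):
    """Parity bit chosen so the total number of 1s (including it) is odd."""
    ones = bin(value & ((1 << width) - 1)).count("1")
    return 0 if ones % 2 == 1 else 1

def pack_bcmux_word(tower_value, which_tower):
    """Pack one 8-bit tower value plus the BC-mux flag into a 10-bit word."""
    assert 0 <= tower_value <= 0xFF and which_tower in (0, 1)
    word9 = (which_tower << 8) | tower_value        # 9-bit 'BC-muxed' word
    return (odd_parity(word9, 9) << 9) | word9      # + odd-parity bit = 10 bits

w = pack_bcmux_word(0x2A, which_tower=1)
print(f"{w:010b}")   # parity | mux flag | 8 data bits
```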
CPM data formats - hits and readout
- CPM hit results (see the sketch below):
  - 16 x 3-bit saturating sums
  - 8 sent to the left CMM, 8 sent to the right
  - 8 x 3 = 24 result bits plus 1 odd-parity bit added
- DAQ readout
  - Per L1A, 84 x 20 bits of data
  - Bulk of the data is BC-demuxed input data
    - 10 bits per tower: 8 data bits, 1 parity-error bit, 1 link-error bit
    - 160 direct inputs x 10-bit data = 80 x 20 bits
  - 48 bits of hit data, 12 bits of Bcnum, 20 bits of odd-parity check bits
- RoI readout
  - Per L1A, 22 x 16 bits of data
  - Bulk of the data is the individual CP FPGA hits and region locations
    - 16 bits + 2 bits location + 1 bit saturation + 1 bit parity error
    - 8 FPGAs each have 2 RoI locations = 8 x 2 x 20 bits
  - Rest is 12 bits of Bcnum and an odd-parity check bit
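A minimal sketch of the hit words sent to each CMM, assuming the eight 3-bit saturating counts are packed least-significant first with the parity bit on top; only the 8 x 3 + 1 layout is from the slide, the ordering is an assumption, and the names are illustrative.

```python
# Sketch of one CMM hit word: eight 3-bit saturating hit counts (capped
# at 7) packed into 24 bits, plus one odd-parity bit (25 bits total).

def saturating_3bit(count):
    """Clamp a hit count to the 3-bit range 0..7."""
    return min(count, 7)

def pack_cmm_hits(counts):
    """Pack 8 hit counts into 24 bits and prepend an odd-parity bit."""
    assert len(counts) == 8
    word = 0
    for i, c in enumerate(counts):
        word |= saturating_3bit(c) << (3 * i)
    ones = bin(word).count("1")
    parity = 0 if ones % 2 == 1 else 1      # make the total number of 1s odd
    return (parity << 24) | word

print(f"{pack_cmm_hits([0, 1, 7, 9, 0, 2, 0, 3]):025b}")   # the 9 saturates to 7
```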
CPM data flow: signal speeds
- Multiple protocols and data speeds used throughout the board
- Care needed to synchronise data at each stage
- This has proved to be the biggest challenge on the CPM
[Figure: data flow - 400 Mbit/s serial data (480 Mbit/s with protocol) -> LVDS deserialiser -> 40 MHz parallel data -> Serialiser FPGA -> 160 MHz serial data -> CP FPGA -> 40 MHz parallel data -> Hit Merger; Readout Controllers: 640 Mbit/s serial data (800 Mbit/s with protocol)]
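These rates are all consistent with a payload width multiplied by the 40 MHz bunch-crossing clock; the reading of the "with protocol" figures as wider framed words on the wire is an inference from the numbers, not a statement taken from the slide.

```python
# Line rates implied by bits-per-tick at the 40 MHz bunch-crossing clock.

BC_CLOCK_MHZ = 40

def rate_mbps(bits_per_tick):
    return bits_per_tick * BC_CLOCK_MHZ

print(rate_mbps(10))   # 400 Mbit/s : 10-bit BC-muxed tower word per tick
print(rate_mbps(12))   # 480 Mbit/s : 12 bits per tick on the framed LVDS link
print(rate_mbps(4))    # 160 Mbit/s : 4 bits per tick per 160 MHz serial line
print(rate_mbps(16))   # 640 Mbit/s : 16 payload bits per tick on the G-link
print(rate_mbps(20))   # 800 Mbit/s : 20 bits per tick with G-link framing
```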
CPM challenges: high-speed data paths
- 400 (480) Mbit/s input data
  - Needed to reduce input connectivity: 80 differential inputs plus grounds = 200 pins/CPM
  - Previous studies of the LVDS chipset established viability
    - Works very reliably with test modules (DSS/LSM)
    - Still some questions over pre-compensation and PPM inputs
- 160 MHz CP FPGA input data
  - Needed to reduce back-plane connectivity: 160 fan-in and 160 fan-out pins per CPM
  - Needed to reduce CP FPGA input pin count: 108 input streams needed per chip
  - This has been the subject of the most study in prototype testing
- 640 (800) Mbit/s G-link output data
  - G-link chipset successfully used in demonstrators
  - Needed some work to understand the interaction with the RODs
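The pin counts quoted here follow from simple arithmetic; in the sketch below the split of the ~200 pins into signal and ground pins is an inference from the quoted total, not a figure from the slide.

```python
# Pin-count arithmetic behind the connectivity figures quoted above.

lvds_streams   = 80                    # 160 towers BC-muxed into 80 serial streams
signal_pins    = 2 * lvds_streams      # differential pairs -> 160 signal pins
ground_pins    = 200 - signal_pins     # ~40 grounds to reach the quoted 200 pins/CPM
backplane_pins = 160 + 160             # fan-in plus fan-out pins per CPM at 160 MHz

print(signal_pins, ground_pins, backplane_pins)   # 160 40 320
```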
CPM challenges: latency
- CP system latency budget: ~14 ticks
  - This is a very difficult target
  - Note: the CPM is only the first stage of the CP system
  - The CMM needs about 5 ticks
- CPM latency - irreducible contributions:
  - Input cables: > 2 ticks
  - LVDS deserialisers: ~2 ticks
  - Mux/demux to 160 MHz: ~1 tick
  - BC-demuxing algorithm: 1 tick
- Remaining budget: 14 - 5 - 2 - 2 - 1 - 1 = 3 ticks!
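The remaining budget is just the sum of the listed contributions (taking the "> 2 ticks" for the cables as 2, as the slide does):

```python
# Remaining latency budget, in bunch-crossing ticks, from the figures above.

cp_budget   = 14
cmm         = 5
irreducible = {"input cables": 2, "LVDS deserialisers": 2,
               "mux/demux to 160 MHz": 1, "BC-demuxing": 1}

remaining = cp_budget - cmm - sum(irreducible.values())
print(remaining)   # 3 ticks left for everything else on the CPM
```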
Conclusions
- The CPM is a very complex module
- Difficulties include:
  - High connectivity
  - Multiple time domains
  - Tight constraints on latency
  - Large overall system size
- Extensive testing has shown that the current prototype CPM meets these demands