Online View and Planning
LHCb Trigger Panel
Beat Jost, CERN/EP
Beat Jost, CERN / Trigger Panel, 16 March

Architecture (Reminder)
Features
- Support of Level-1 (L1) and HLT on a common infrastructure
- Two separate multiplexing layers, reflecting the different data rates of L1 and HLT
- Common CPU farm for both traffic types
- Scalable, e.g. for an L1 upgrade to include the trackers
- Separate storage network to de-couple data flows
- Driven by trigger needs
Physical Implementation – Rack Layout in D1
- Room for 50 racks
- All racks identical
- 1U boxes, up to 46 boxes/rack
- Up to 150 subfarms (2–3 subfarms/rack)
Physical Implementation – Farm Rack
- Up to 46 1U servers in one 59U rack
- 2 (3) subfarms per rack
- 2 (3) data switches and one controls switch
- 1 patch panel for external connections (total of 9 links)
- Upgradeable to 3 subfarms per rack
- 2 Gb/s input bandwidth, upgradeable to 4 Gb/s
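The per-rack figures above can be turned into a back-of-envelope capacity estimate for the whole D1 farm. The sketch below is not from the slides; it simply multiplies the stated numbers (50 racks, up to 46 1U servers and 2–3 subfarms per rack, 2 Gb/s input per rack upgradeable to 4 Gb/s) and assumes every rack is fully populated, so the results are upper bounds.

```python
# Back-of-envelope D1 farm capacity from the slide figures.
# Assumption: all 50 racks fully populated (upper bound, not a design value).
RACKS = 50
SERVERS_PER_RACK = 46          # up to 46 1U servers per 59U rack
SUBFARMS_PER_RACK = (2, 3)     # baseline (upgraded)
INPUT_GBPS_PER_RACK = (2, 4)   # baseline (upgraded)

max_servers = RACKS * SERVERS_PER_RACK
subfarms = tuple(RACKS * n for n in SUBFARMS_PER_RACK)
input_bw_gbps = tuple(RACKS * g for g in INPUT_GBPS_PER_RACK)

print(max_servers)    # 2300 1U servers at most
print(subfarms)       # (100, 150) -> consistent with "up to 150 subfarms"
print(input_bw_gbps)  # (100, 200) Gb/s aggregate farm input
```

Note that the upgraded subfarm count (50 × 3 = 150) reproduces the "up to 150 subfarms" figure from the D1 rack-layout slide.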
Physical Implementation – Rack Layout in D2
- 5 SFC racks
- 1 rack for the Readout Network
- Many racks for patch panels
Physical Implementation – Farm Controls
- Building block of 9U height, consisting of:
  - 4 (upgradeable to 5) SFCs
  - 2 controls PCs
  - 1 patch panel
  - 2 spare spaces for upgrading the number of subfarms
  - One storage switch aggregating the output data from the SFCs
  - Patch panel for the storage uplink: 1 (2) Gb Ethernet links per crate
Plans
- Mid 2004
  - Install UTP links between D3 and D2 and between D2 and D1 (patch panel to patch panel), ~2500 links
  - Fiber links between the surface and underground areas
- End 2004 / beginning 2005
  - Install basic computing infrastructure
    - Servers in the computer room
    - Basic links top to bottom (initially Gb Ethernet, later probably 10 Gb Ethernet)
  - Acquisition of the first basic network equipment and a rudimentary CPU farm (basically the HLT system)
- During 2005
  - Commissioning of the DAQ (actually online) system
- Beginning 2006
  - Preparation of the acquisition of the final readout network and CPU farm
- Mid 2006
  - Installation of the final infrastructure (farm + switch)
Open Questions
- Exact composition of the L1 trigger
  - So far assumed: VeLo, TT, L0DU
  - Others?
- Is data from the Readout Supervisor ever needed in L1?
  - Information available at L0:

    Field                    Width
    L0 bunch current         8 bits
    Bunch ID (RS)            12 bits
    GPS                      40 bits
    Detector status          24 bits
    L0 Event ID              24 bits
    Trigger type             3 bits
    L0 Force bit             1 bit
    Bunch ID (L0DU)          12 bits
    BX type                  2 bits
    L0 synch error           1 bit
    L0 synch error forced    1 bit

- Need to know where (in which racks) the SD electronics will reside (needed for network connections)
- Need to know event sizes (per FE board, preferably)
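As an aside (not part of the original slide), the field widths listed in the table can be summed to check the size of the L0 information word; the sketch below just adds the stated widths and checks alignment to 32-bit words.

```python
# Bit budget of the L0 information fields listed in the table above.
L0_FIELDS = {
    "L0 bunch current": 8,
    "Bunch ID (RS)": 12,
    "GPS": 40,
    "Detector status": 24,
    "L0 Event ID": 24,
    "Trigger type": 3,
    "L0 Force bit": 1,
    "Bunch ID (L0DU)": 12,
    "BX type": 2,
    "L0 synch error": 1,
    "L0 synch error forced": 1,
}

total_bits = sum(L0_FIELDS.values())
print(total_bits)            # 128 bits in total
print(total_bits % 32 == 0)  # True -> exactly four 32-bit words
```

So the listed L0 information amounts to 128 bits (16 bytes) per event.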