
1 The DAQ and Run Control of CMS
EPS 2007, Manchester
Alexander Oh, CERN, PH-CMD
On behalf of the CMS-CMD Group

2 DAQ

3 DAQ Requirements and Architecture
Collision rate: 40 MHz
Trigger: two-level system (Level-1 and HLT)
Level-1 maximum trigger rate: 100 kHz
Average event size: 1 MByte
HLT acceptance: 1-10%
No. of in-out units: 500
Network aggregate throughput: 1 Terabit/s
Event filter computing power: 5 x 10^6 MIPS
Data production: TByte/day
No. of PCs: 3000
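A consistency check (arithmetic added for clarity, not on the original slide): 100 kHz Level-1 rate x 1 MByte average event size = 100 GByte/s, roughly 0.8 Terabit/s, which matches the quoted network aggregate throughput of 1 Terabit/s.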

4 Two-Stage Event Builder
First stage
– Combine 8 fragments into 1 super-fragment
Second stage
– Combine 72 super-fragments into 1 event
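As an illustration of the two concatenation stages (a toy Java sketch, not the CMS online software, which is implemented in C++; the fragment counts and sizes are taken from slide 5):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    // Toy two-stage event builder: stage 1 merges 8 fragments into one
    // super-fragment, stage 2 merges 72 super-fragments into one event.
    public class TwoStageBuilder {
        static byte[] merge(byte[][] parts) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (byte[] p : parts) out.write(p);  // concatenate payloads
            return out.toByteArray();
        }

        public static void main(String[] args) throws IOException {
            byte[][] superFragments = new byte[72][];
            for (int s = 0; s < 72; s++) {
                byte[][] fragments = new byte[8][];
                for (int f = 0; f < 8; f++) fragments[f] = new byte[2 * 1024]; // ~2 kB each
                superFragments[s] = merge(fragments);  // stage 1: ~16 kB
            }
            byte[] event = merge(superFragments);      // stage 2: ~1.15 MB
            System.out.println("event size = " + event.length + " bytes");
        }
    }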

5 Two-Stage Event Builder: Sizes and Multiplicities
~600 fragments, average size ~2 kB
72 super-fragments, average size ~16 kB
1 event, average size ~1 MB
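These figures are mutually consistent: 8 fragments x ~2 kB = ~16 kB per super-fragment, and 72 x ~16 kB ≈ 1.15 MB, i.e. ~1 MB per event.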

6 Two-Stage Event Builder: Super Fragment Builder
Myrinet NIC
– Common technology in high-performance computing
– 2 bi-directional optical links, 2 Gb/s
– 200 m cable from cavern to DAQ PCs
– Custom-programmed RISC processor
Cross-bar switches (32x32)
– Wormhole routing
– Low latency
– Link-level flow control with backpressure
– Loss-less transmission
– Switch not buffered, so head-of-line blocking can occur
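Head-of-line blocking is easiest to see in a toy model; the sketch below is a generic illustration of the effect, not of Myrinet's wormhole routing:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Toy model of head-of-line blocking in an unbuffered crossbar: each
    // input port has one FIFO, and only the packet at the head may be
    // switched. If the head's output is busy, packets behind it stall
    // even when their own outputs are idle.
    public class HolBlockingDemo {
        public static void main(String[] args) {
            Queue<Integer> inputFifo = new ArrayDeque<>();
            inputFifo.add(0); // head packet wants output 0
            inputFifo.add(1); // these three want outputs that are idle...
            inputFifo.add(2);
            inputFifo.add(3);
            boolean output0Busy = true; // ...but output 0 is taken by another input

            int delivered = 0;
            int head = inputFifo.peek();
            if (!(head == 0 && output0Busy)) { // only the head may proceed
                inputFifo.poll();
                delivered++;
            }
            System.out.println("delivered this cycle: " + delivered
                    + " (3 packets blocked behind the head)");
        }
    }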

7 Two-Stage Event Builder: Super Fragment Builder (diagram of the Myrinet NICs and 32x32 cross-bar switches)

8 Performance: Super Fragment Builder
Test with different fragment-size distributions (plot):
– Line speed: 500 MB/s with two rails
– Needed at the working point: 200 MB/s
– Measured: 300 MB/s at 2 kB fragments; the loss relative to line speed is due to head-of-line blocking
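Read together (my interpretation of the quoted numbers), the builder runs at roughly 60% of line speed for 2 kB fragments (300 of 500 MB/s) while still leaving a 50% margin over the required 200 MB/s.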

9 Two-Stage Event Builder: Event Builder
PCs act as "intelligent buffers"
– PCI-E Intel-based NIC with 4 GbE ports
– Myrinet input from the Super Fragment Builder
– Gb Ethernet to assemble events and output to the filter farm
– Protocol is TCP/IP
Gb Ethernet switches
– 2x Force 10 E1200 switches
– Line cards with 90 GbE ports
– 4 ports per machine
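A toy sketch of the forwarding role of these buffer PCs using plain TCP sockets (host name, port and framing are made up; the real system uses the 4 GbE ports in parallel):

    import java.io.DataOutputStream;
    import java.net.Socket;

    // Toy "intelligent buffer": forward a buffered super-fragment to a
    // builder/filter node over TCP/IP. Host and port are hypothetical.
    public class FragmentForwarder {
        public static void main(String[] args) throws Exception {
            byte[] superFragment = new byte[16 * 1024];     // ~16 kB, as per slide 5
            try (Socket s = new Socket("bu-node-01", 9000); // hypothetical peer
                 DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                out.writeInt(superFragment.length);          // simple length-prefixed framing
                out.write(superFragment);
                out.flush();
            }
        }
    }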

10 Two-Stage Event Builder: Event Builder
PC model: Dell PE2950 with 2 dual-core CPUs; ~700 PCs installed
Gb Ethernet switches: Force 10 E1200

11 Performance: Event Builder
First test with the final hardware
– 64 input PCs x 64 output PCs, 4 GbE ports per PC
– Super-fragments generated in the inputs
– Events dropped in the outputs
– Constant fragment size
Requirement well fulfilled!

12 Two-Stage Event Builder
The two-stage event builder allows a staged installation:
– At each stage the full functionality is provided.
– The performance scales with the available hardware.
The number of Event Builders can be varied from 1 to 8:
– Scalability: from 1/8 to the full final performance
– Reliability: the Event Builders are functionally redundant.
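Numerically: at the nominal 100 kHz Level-1 rate, each of the 8 slices handles 12.5 kHz, so n installed slices provide n/8 of the final capacity.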

13 DAQ Configurations
The configuration of the DAQ is adapted to the commissioning schedule:
– Now: first global runs with some sub-detectors
– Sep '07, global DAQ commissioning: full read-out, 4 Event Builders, reduced filter farm
– Nov '07 to Apr '08, global runs with technical and cosmic triggers: full read-out, 2 Event Builders, reduced filter farm
– Jul '08, physics run: full read-out, 4 Event Builders, nominal filter farm
– Later: high-luminosity runs

14 Structure
Data to Surface: COMPLETED, 100%. 676 FEDs, 512 FRLs, 1536 links, 72 FED Builders.
640 PCs (PE2950: 2x 2-core 2 GHz Xeon, 4 GB). Ultimate function: EVB-RU at 100 kHz (8 DAQ slices of 12.5 kHz each).
Installed: 100 kHz EVB, read-out only (2007).
More on commissioning: "Status and Commissioning of the CMS Experiment", Claudia Wulz, today 11h15

15 Run Control

16 Requirements & Architecture
Run Control tasks
– Configure
– Control
– Monitor
– Diagnostics
– Interactivity
The framework provides a uniform API to common tasks
– DB for configuration
– State machines for control
– Access to the monitoring system
Objects to manage: ~10000 distributed applications on ~2000 PCs

17 Architecture
The experiment is controlled by a tree of finite state machines. "Function Managers" implement the finite state machines. A set of services supports the Function Managers.
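A minimal sketch of the state-machine idea behind a Function Manager (a toy in modern Java; the state names, inputs and error handling are assumptions, not the actual CMS state model):

    // Toy Function Manager: a finite state machine driven by command inputs.
    // States and transitions are illustrative only.
    public class FunctionManager {
        enum State { INITIAL, CONFIGURED, RUNNING, ERROR }
        private State state = State.INITIAL;

        public synchronized void handle(String input) {
            switch (input) {
                case "configure": require(State.INITIAL);    state = State.CONFIGURED; break;
                case "start":     require(State.CONFIGURED); state = State.RUNNING;    break;
                case "stop":      require(State.RUNNING);    state = State.CONFIGURED; break;
                default:          state = State.ERROR;       // unknown input
            }
        }

        private void require(State expected) {
            if (state != expected) throw new IllegalStateException(
                "input not allowed in state " + state);
        }

        public State state() { return state; }

        public static void main(String[] args) {
            FunctionManager fm = new FunctionManager();
            fm.handle("configure");
            fm.handle("start");
            System.out.println("state = " + fm.state()); // RUNNING
        }
    }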

18 Architecture: Services
– SECURITY SERVICE: login and user account management
– RESOURCE SERVICE (RS): information about DAQ resources and partitions
– INFORMATION AND MONITOR SERVICE (IMS): collects messages and monitor data and distributes them to the subscribers
– JOB CONTROL: starts, monitors and stops the software elements of RCMS, including the DAQ components

19 Run Control: Implementation & Technologies
Implemented in Java as a web application
– Java 1.5.0
– Application server: Tomcat 5.5.20
– Development tool: Eclipse IDE
Database support
– Oracle 10g
– MySQL 5
Web service interfaces
– WSDL specified using Axis
– Tested clients: Java, Perl, LabVIEW

20 Run Control Framework
Resource Service
– Stores and delivers configuration information for online processes
Function Manager framework
– Finite state machine engine
– Error handlers
– Process facades
GUI
– Generic JSP-based GUI
– Basic command and state display
Log Collector
– Collects, stores and forwards log4c and log4j messages
Job Control
– Starts, monitors and stops Unix processes
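Job Control's task of starting, monitoring and stopping Unix processes maps naturally onto java.lang.ProcessBuilder; a minimal sketch, where "sleep 60" is a stand-in for a real DAQ process:

    // Toy job control: start a Unix process, monitor it, stop it.
    public class JobControlSketch {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder("sleep", "60").start();  // start
            Thread.sleep(1000);                                     // let it run briefly
            try {
                System.out.println("exited with " + p.exitValue()); // monitor
            } catch (IllegalThreadStateException stillRunning) {
                p.destroy();                                        // stop (sends SIGTERM)
                System.out.println("process stopped");
            }
        }
    }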

21 Run Control Deployment
– 10 PCs allocated to Run Control; each runs on average two instances of Tomcat. A DNS alias makes relocation transparent (load balancing).
– One common online DB (Oracle 10g) shared by all online processes, with account separation per sub-system (csc, daq, dt, ecal, es, hcal, pixel, rpc, top, tracker, trigger, dqm).
– Configuration management with Global Configuration Keys.
The final machines are installed and are being used for the July "Global Run".

22 Run Control In Use
Run Control has been used successfully for taking data with cosmic muons during the magnet test (MTCC) and during local and global runs for CMS commissioning.
More on the MTCC: "First Cosmic Data Taking with CMS", Hannes Sakulin, today 11h30

23 Run Control In Use
Typical times to start a "run" in the MTCC-II (plot)

24 Finis
DAQ
– The two-stage trigger (Level-1 and HLT) requires a 100 GB/s event builder
– Two-stage event building provides a flexible design
– The hardware has been installed and is being commissioned
Run Control
– Hierarchical state machine model
– Built around web technologies
– In production at test beams and global runs

25 EXTRA

26 Architecture
The FMs are grouped by level in the control hierarchy (diagram: Web Browser (GUI), Level 0 FM, Level 1 FM, Level 2 FM).
– Level 0 is the entry point to the experiment; the user interacts through a web browser connected to the Level 0 FM.
– Level 1 FMs are the entry points to the sub-systems and are standardized: they interface to the Level 0 FM and have to implement a standard set of inputs and states.
– Level 2 FMs are sub-system-specific custom implementations; additional levels are optional.
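The tree structure suggests a simple composite in which each FM forwards inputs to its children; a toy sketch (node names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    // Toy control tree: a node forwards each input to its children, so a
    // command entered at Level 0 propagates down to Level 1 and Level 2.
    public class FmNode {
        private final String name;
        private final List<FmNode> children = new ArrayList<>();

        FmNode(String name) { this.name = name; }
        void add(FmNode child) { children.add(child); }

        void handle(String input) {
            System.out.println(name + " handles '" + input + "'");
            for (FmNode c : children) c.handle(input); // propagate downwards
        }

        public static void main(String[] args) {
            FmNode level0 = new FmNode("Level0");
            FmNode ecal = new FmNode("Level1-ECAL");            // sub-system name illustrative
            FmNode ecalCustom = new FmNode("Level2-ECAL-custom");
            level0.add(ecal);
            ecal.add(ecalCustom);
            level0.handle("configure");                         // entered via the web GUI
        }
    }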

27 Structure
Sep '07 (global DAQ commissioning)
– 640 PCs, 2x 2-core 2 GHz Xeon
– 4 DAQ slices (72x72): 288 RUs x 288 BUFUs
– Maximum Level-1 rate: 50 kHz
– HLT PC event rate ~180 Hz (ignoring I/O on the BU/FU PCs)
– 22 TB local storage at 1 GB/s

28 Structure
Nov '07 to Apr '08 (global runs: technical, cosmic)
– 640 PCs, 2x 2-core 2 GHz Xeon
– 2 DAQ slices (72x200): 144 RUs x 400 BUFUs
– Maximum Level-1 rate: 20 kHz
– HLT PC event rate ~50 Hz (ignoring I/O on the BU/FU PCs)
– 22 TB local storage at 1 GB/s

29 Structure
July '08 (physics run)
– 4 DAQ slices (72x288)
– Maximum Level-1 rate: 50 kHz
– HLT PC event rate ~40 Hz
>2009 (high-luminosity runs)
– 8 DAQ slices
– 100 kHz

30 Architecture: Function Manager
The FM controls a set of resources
– Classic definition: a control system determines its outputs depending on its inputs.
– Resources are online processes implemented in C++.
FM components
– Input module
– Output module
– Data access module: fetches configuration data
– Processor module: processes incoming messages
– Finite state machine
Standardization
– Sub-detectors implement a defined finite state machine
– Facilitates integration

31 DAQ Event Building
Event fragments: event data fragments are stored in separate physical memory systems (the sources).
Event builder: the physical system interconnecting the data sources with the data destinations; it has to move all data fragments of each event into the same destination.
Full events: full event data are stored in one physical memory system associated with a processing unit (the destinations).
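The key invariant is that all fragments of one event end up in the same destination. A common way to guarantee this, shown here as an assumption rather than the documented CMS rule, is to derive the destination from the event number:

    // Toy event-builder routing: every source sends its fragment of event e
    // to destination e % nDestinations, so all fragments of an event meet
    // in the same memory system.
    public class EventRouting {
        public static void main(String[] args) {
            int nDestinations = 8;
            for (long eventId = 0; eventId < 4; eventId++) {
                int dest = (int) (eventId % nDestinations);
                System.out.println("event " + eventId + " -> destination " + dest);
            }
        }
    }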

32 Trigger Level-1
Level-1 trigger: reduce 40 MHz to 10^5 Hz
– After Level-1, the rate still needs to be brought down to 10^2 Hz
(Diagrams: "traditional" architecture with 3 physical levels (Lvl-1, Lvl-2, Lvl-3) vs. CMS with 2 physical levels (Lvl-1, HLT); both show detectors, front-end pipelines, readout buffers, a switching network and processor farms.)
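In terms of rejection factors: Level-1 reduces 40 MHz to 10^5 Hz, a factor of ~400, and the HLT must provide the remaining factor of ~10^3 to reach 10^2 Hz.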

