
1 Computing WG report
Paolo Valente, INFN Roma
 + discussion in Software WG: Monte Carlo production on the Grid
 + discussion in TDAQ WG: dedicated server for online services
 + experts meeting (Thursday 30 at 14:00): database(s)

2 4 disks configured in RAID-6:
  10 TB available
  Write at 600-700 MB/s (the maximum theoretical speed with a 10 Gb interface is 1.2 GB/s; see the sketch below)
A. Gianoli
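A quick cross-check of these numbers, as a minimal Python sketch (only the 600-700 MB/s write range and the 10 Gb interface figure come from the slide):

    # Bandwidth cross-check for the RAID-6 storage node.
    link_gbit = 10                    # 10 Gb network interface
    link_bytes = link_gbit * 1e9 / 8  # ~1.2e9 B/s theoretical ceiling

    for write_speed in (600e6, 700e6):   # measured write range, B/s
        fraction = write_speed / link_bytes
        print(f"write {write_speed/1e6:.0f} MB/s = {100*fraction:.0f}% of the link")
    # The disks, not the 10 Gb link, limit the write rate.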

3 Of the 7 detector switches, one has been used for the star point
 OK for the technical run: we need an additional one for the STRAW electronics barrack
 [Diagram: detector switches for CEDAR, LAV1, LAV2, LAV3, IRC]
 For the full experiment:
  12 detector switches
  28 for the LKr readout
  4 for the LKr-L0
  Fill the (12 slots of the) central router with 8x10 Gb ports (port budget sketched below)
A. Gianoli
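A sanity check on the router capacity, as a sketch (it assumes every port listed above terminates on the central router, which the slide does not state explicitly):

    # Port budget for the full-experiment central router (counts from the slide).
    detector_switches = 12
    lkr_readout = 28
    lkr_l0 = 4
    needed = detector_switches + lkr_readout + lkr_l0   # 44 uplinks

    slots = 12
    ports_per_card = 8                                  # 8x10 Gb line cards
    capacity = slots * ports_per_card                   # 96 ports

    print(f"{needed} of {capacity} 10 Gb ports used ({100*needed/capacity:.0f}%)")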

4 Network management: services and rules (A. Gianoli)

5 Routing (so far)
 [Diagram: DATA, CONFIG and DCS paths from the TEL62 boards through the detector switches to the main switch/router, with DATA-IN, DATA-OUT, DCS and IPMI connections to the Mainz PCs and the storage PC]

6 Things to do
 Issue also emerged in the TDAQ WG:
  Order a new server
  Similar to the storage PC, with less disk and maybe more RAM
  Configure virtual machines for the services
 Discussion: how to access the data?
  ssh to lxplus if outside CERN, then ssh to the farm machines (see the sketch below)
  If significant processing is foreseen, additional machines could be added by the sub-detector teams (connected to the appropriate service)
A. Gianoli
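For illustration, the two-hop login can be collapsed into one command with OpenSSH's ProxyJump (-J) option; a minimal Python wrapper, where the farm host name is a placeholder (only lxplus is from the slide):

    # Two-hop login from outside CERN: tunnel through lxplus, land on the farm.
    # "na62farm01" is a hypothetical farm machine name.
    import subprocess

    subprocess.run([
        "ssh",
        "-J", "user@lxplus.cern.ch",   # first hop: CERN login cluster
        "user@na62farm01",             # second hop: farm machine (placeholder)
    ])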

7 Experience tells us:
  We want to manage DHCP, the remote boot server, the NFS shares, etc. by ourselves
  Everything now rests on the "shoulders" of the merger PC
G. Lamanna

8 License problem with pf_ring_dna (high-speed driver for packet sending and receiving)
 "End of transmission" still to be implemented in the farm software (one possible scheme is sketched below)
J. Kunze
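The slides do not specify how "end of transmission" will be signalled; one common pattern is a sentinel datagram sent after the last fragment of a burst. A minimal sketch of that idea (the packet layout, magic word and port are invented for illustration):

    # Hypothetical end-of-transmission (EOT) sentinel for burst transfers.
    import socket
    import struct

    EOT_MAGIC = 0xE07B0357   # invented marker identifying the sentinel
    FMT = "!III"             # magic word, burst id, packets sent

    def send_eot(sock, addr, burst_id, n_sent):
        """Sender: tell the receiver the burst is complete."""
        sock.sendto(struct.pack(FMT, EOT_MAGIC, burst_id, n_sent), addr)

    def parse_eot(payload):
        """Receiver: return (burst_id, n_sent) if this datagram is the EOT."""
        if len(payload) != struct.calcsize(FMT):
            return None
        magic, burst_id, n_sent = struct.unpack(FMT, payload)
        return (burst_id, n_sent) if magic == EOT_MAGIC else None

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_eot(sock, ("127.0.0.1", 5000), burst_id=42, n_sent=12345)

On receipt of the sentinel the reader can flush its buffers and be restarted before the next burst, which matches the per-burst restart mentioned on slide 10.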

9  Software implemented (and working) both using the pf_ring_DNA driver alone and with the full libzero library
  Libzero costs more
  Stick to pf_ring_DNA
J. Kunze

10  We need one license per PC
  In principle the processes can be restarted at each burst
  For the technical run we need 2-4 licenses, so we can investigate this issue further
J. Kunze

11  Star point not counted:
  Buy one more HP 2910
 10 Gb line to CERN-IT:
  No obvious solution: the most serious problem
  Try to find a solution in the next days
 Issue of network link reliability for the connection to the central DB discussed in a dedicated meeting
G. Lamanna

12 Two-hour dedicated meeting...
  DCS and Run Control will use the existing test instance of the Oracle DB for the technical run
  In the meanwhile:
   Estimate the needs of the experiment in the full configuration
   Ask for our own Oracle instance, to be managed by IT (we pay for it...)
  Detectors should prepare to move all relevant information to Oracle by providing schemas
  Offline and analysis will use Frontier/squid for accessing the DB from outside (see the sketch below)
  In case of performance issues, additional squid servers can be installed in off-CERN data centers
G. Lamanna
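For illustration only: Frontier serves database query results over plain HTTP, which is what lets squid cache them near the clients. A minimal sketch with placeholder host names (neither URL is from the slides, and the query parameters are schematic):

    # Hypothetical conditions-DB read through a Frontier server via a squid cache.
    import requests

    FRONTIER_URL = "http://frontier.example.cern.ch:8000/na62"  # placeholder
    SQUID_PROXY = "http://squid.example.org:3128"               # placeholder

    response = requests.get(
        FRONTIER_URL,
        params={"type": "frontier_request", "encoding": "BLOB"},  # schematic
        proxies={"http": SQUID_PROXY},   # route through the local squid cache
        timeout=30,
    )
    print(response.status_code, len(response.content), "bytes")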

13  Strong message to the Collaboration: avoid developing "custom" databases for storing information that will be used by the online or offline
  Each sub-system should have a contact person (remember the discussions in the Software WG)
  One more open issue: maps (from DAQ to electronics, from electronics to detector channel)
   Should be part of the Oracle DB...
   ... but they are also needed for the configuration of the DAQ (see the sketch below)
   Will be important in the debugging phase
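To make the double role of the maps concrete, a sketch: the same mapping rows that would live in Oracle can be joined into the DAQ-to-detector lookup needed at configuration time. All table contents below are invented; only the two mapping levels are from the slide:

    # Hypothetical channel maps.
    # DAQ channel (board, input) -> electronics channel
    daq_to_elec = {
        ("tel62-01", 0): ("tdcb-05", 12),
        ("tel62-01", 1): ("tdcb-05", 13),
    }

    # electronics channel -> physical detector channel
    elec_to_det = {
        ("tdcb-05", 12): "LAV1-block-0042",
        ("tdcb-05", 13): "LAV1-block-0043",
    }

    def daq_to_detector(daq_channel):
        """Compose the two maps, as the DAQ configuration step would."""
        return elec_to_det[daq_to_elec[daq_channel]]

    assert daq_to_detector(("tel62-01", 0)) == "LAV1-block-0042"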

14 (figure only)

15 D. Protopopescu (figure only)

16  Lots of tools... (D. Protopopescu)

17 D. Protopopescu (figure only)

18 D. Protopopescu (figure only)

19 Plan the production of Monte Carlo samples:
  Requested number of events for each channel
  Foreseen CPU time and disk occupancy of the output files (estimate sketched below)
  Open issues:
   Run also the reconstruction?
   Store all output to CERN CASTOR?
 Extend the Grid support:
  Keep the same organization
  Add a contact person from each participating site
  Provide weekly feedback from the software/physics group, at least during production
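A minimal sketch of the per-channel bookkeeping the plan implies; the channels, event counts, CPU cost and event sizes are invented placeholders (only the three quantities to track come from the slide):

    # Hypothetical Monte Carlo resource estimate per channel.
    requests = {
        # channel: (events requested, CPU s/event, MB/event on disk)
        "K+ -> pi+ nu nubar": (10_000_000, 2.0, 0.05),
        "K+ -> pi+ pi0":      (5_000_000, 1.5, 0.04),
    }

    for channel, (n_events, cpu_per_evt, mb_per_evt) in requests.items():
        cpu_days = n_events * cpu_per_evt / 86_400
        disk_tb = n_events * mb_per_evt / 1e6
        print(f"{channel:20s} {cpu_days:7.0f} CPU-days  {disk_tb:5.2f} TB")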

