
1 PRAGUE site report

2 Overview
–Supported HEP experiments and staff
–Hardware on the Prague farms
–Statistics about running the LHC experiments' DC
–Experience

3 Experiments and people
Three institutions in Prague
–Academy of Sciences of the Czech Republic
–Charles University in Prague
–Czech Technical University in Prague
Collaborate on experiments
–CERN – ATLAS, ALICE, TOTEM, AUGER
–FNAL – D0
–BNL – STAR
–DESY – H1
Collaborating community of 125 persons
–60 researchers
–43 students and PhD students
–22 engineers and 21 technicians
LCG computing staff – takes care of GOLIAS (farm at the Institute of Physics AS CR) and SKURUT (farm located at CESNET)
–Jiri Kosina – LCG, experiment software support, networking
–Jiri Chudoba – ATLAS and ALICE SW and running
–Jan Svec – HW, operating system, PBSPro, networking, D0 SW support (SAM, JIM)
–Vlastimil Hynek – runs D0 simulations
–Lukas Fiala – HW, networking, web

4 Available HW in Prague
Two independent farms in Prague
–GOLIAS – Institute of Physics AS CR: LCG2 (testZone – ATLAS & ALICE production), D0 (SAM and JIM installation)
–SKURUT – CESNET, z.s.p.o.: EGEE preproduction farm, also used for ATLAS DC; separate nodes used for GILDA (a tool/interface developed at INFN to let new users easily use the grid and demonstrate its power) with GENIUS installed on top of the user interface
–Sharing of resources D0:ATLAS:ALICE = 50:40:10, dynamically changed when needed (see the sketch below)
GOLIAS: 80 nodes (2 CPUs each), 40 TB
–32 dual CPU nodes PIII 1.13 GHz, 1 GB RAM
–In July 04 bought 49 new dual CPU Xeon 3.06 GHz nodes, 2 GB RAM (WN); currently considering whether HT should be on or off (memory and scheduler problems in older(?) kernels)
–10 TB disk space; we use LVM to create 3 volumes of 3 TB each, one per experiment, NFS-mounted on the SE
–In July 04 + 30 TB disk space, now in tests (30 TB XFS NFS-exported partition; unreliable with pre-2.6.5 kernels, newer ones seem reliable so far)
–PBSPro batch system
–New server room: 18 racks, more than half still empty, 180 kW secured input electric power
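
For illustration only, the following Python sketch shows how the 50:40:10 target share translates into whole-CPU allocations on a farm of GOLIAS's size (about 160 CPUs from 80 dual-CPU nodes). The helper name is ours and has nothing to do with the actual PBSPro configuration.

# Minimal sketch: translate the D0:ATLAS:ALICE = 50:40:10 target share
# into whole-CPU allocations for a farm of a given size.
# Hypothetical helper, not part of the actual PBSPro setup.

def split_cpus(total_cpus, shares):
    """Distribute total_cpus according to integer share weights."""
    weight_sum = sum(shares.values())
    alloc = {exp: (total_cpus * w) // weight_sum for exp, w in shares.items()}
    # Hand out any CPUs lost to integer rounding to the largest shares first.
    leftover = total_cpus - sum(alloc.values())
    for exp in sorted(shares, key=shares.get, reverse=True)[:leftover]:
        alloc[exp] += 1
    return alloc

if __name__ == "__main__":
    # GOLIAS: roughly 80 dual-CPU nodes -> ~160 CPUs (number from the slide).
    print(split_cpus(160, {"D0": 50, "ATLAS": 40, "ALICE": 10}))
    # -> {'D0': 80, 'ATLAS': 64, 'ALICE': 16}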

5 Available HW in Prague
SKURUT – located at CESNET
–32 dual CPU nodes PIII 700 MHz, 1 GB RAM (16 LCG2 + 16 GILDA)
–OpenPBS batch system
–LCG2 installation: 1x CE+UI, 1x SE, WNs (count varies)
–GILDA installation: 1x CE+UI, 1x SE, 1x RB (installation in progress)
–WNs are manually moved to LCG2 or GILDA as needed
–Will be used for the EGEE tutorial

6 Network connection
General – GEANT connection
–1 Gbps backbone at GOLIAS, over the 10 Gbps Prague metropolitan backbone
–CZ – GEANT 2.5 Gbps (over 10 Gbps HW)
–USA 0.8 Gbps (Telia)
Dedicated connection – provided by CESNET
–Delivered by CESNET in collaboration with NetherLight: 1 Gbps (on a 10 Gbps line) optical connection GOLIAS–CERN
–Plan to provide the connection to other institutions in Prague; connections to FERMILAB, RAL or Taipei are under consideration
–Independent optical connection between the collaborating institutes in Prague will be finished by the end of 2004

7 Data Challenges

8 ATLAS – July 1 – September 21

GOLIAS              jobs   CPU (days)   Elapsed (days)
all                 4811   1653         1992
long (cpu > 100 s)  2377   1653         1881
short               2434   0.4          111

SKURUT              jobs   CPU (days)   Elapsed (days)
all                 1446   1507         1591
long (cpu > 100 s)  870    1507         1554
short               576    0.2          37

Number of jobs in DQ: 1349 done + 1231 failed = 2580 jobs, 52%
Number of jobs in DQ: 362 done + 572 failed = 934 jobs, 38%
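
As a quick cross-check of the DQ (DonQuijote) job counts above, here is a small illustrative Python snippet (not part of any production tooling) that recomputes the totals and the done-job percentages, truncated to whole percent as on the slide:

# Illustrative cross-check of the DQ job counts quoted on the slide.
# Percentages are truncated to whole numbers, matching the slide's figures.
dq_counts = [(1349, 1231), (362, 572)]  # (done, failed) pairs from the slide

for done, failed in dq_counts:
    total = done + failed
    percent_done = int(100 * done / total)
    print(f"{done} done + {failed} failed = {total} jobs, {percent_done}% done")
# -> 1349 done + 1231 failed = 2580 jobs, 52% done
# -> 362 done + 572 failed = 934 jobs, 38% done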

9 Local job distribution
GOLIAS – not enough ATLAS jobs
[Chart: local job distribution on GOLIAS for ALICE, D0 and ATLAS, 2 Aug – 23 Aug]

10 Local job distribution
SKURUT – ATLAS jobs – usage much better

11 ATLAS – CPU time
[Charts: CPU time in hours on PIII 1.13 GHz, Xeon 3.06 GHz and PIII 700 MHz nodes; queue limit of 48 hours, later changed to 72 hours]

12 Statistics for 1.7.–6.10.2004: ATLAS – jobs distribution

13 ATLAS – real and CPU time
Very long tail for real time – some jobs were hanging during I/O operations

14 ATLAS total statistics
Total time used:
–1593 days of CPU time
–1829 days of real time

15 ATLAS memory usage
Some jobs required > 1 GB RAM (no pileup events yet!)

16 ALICE jobs 1.7.–6.10.2004

17 ALICE

18 ALICE

19 ALICE total statistics
Total time used:
–2076 days of CPU time
–2409 days of real time
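
Combining the ATLAS and ALICE "total statistics" slides, a short illustrative Python snippet (our own reading of the numbers, not from the slides) gives the CPU-to-real-time ratio implied by the totals:

# Rough CPU/real-time ratio from the totals on the ATLAS and ALICE slides.
# "Share of real time spent on CPU" is our interpretation, not a slide term.
totals = {
    "ATLAS": (1593, 1829),   # (CPU days, real days) from slide 14
    "ALICE": (2076, 2409),   # (CPU days, real days) from slide 19
}

for experiment, (cpu_days, real_days) in totals.items():
    print(f"{experiment}: {cpu_days / real_days:.0%} of real time spent on CPU")
# -> ATLAS: 87% of real time spent on CPU
# -> ALICE: 86% of real time spent on CPU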

20 LCG installation
LCG installation on GOLIAS
–We use PBSPro. In cooperation with Peer Haaselmayer (FZK), a "cookbook" for LCG2 + PBSPro was created (some patching is needed)
–Worker nodes – the first node is installed using LCFGng, then it is immediately switched off
–From then on everything is done manually – we find it much more convenient and transparent, and the manual installation guide helps
–Currently installed LCG2 version 2_2_0
LCG installation on SKURUT
–Almost default LCG2 installation, only with some tweaking of PBS queue properties
–We recently found that the OpenPBS shipped with LCG2 already contains the required_property patch, which is very convenient for better resource management; currently trying to integrate this feature into PBSPro as well (see the sketch below)
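
As a purely illustrative aside, the following Python sketch shows the idea behind the required_property mechanism mentioned above: a job is only matched to nodes that advertise every property it requires. The node names and properties are made up for the example; this is neither the OpenPBS patch nor PBSPro code.

# Toy illustration of the idea behind the OpenPBS "required_property" patch:
# a job is only eligible for nodes that advertise all properties it requires.
# NOT the actual patch or any PBSPro code, just a scheduling sketch.

def eligible_nodes(job_required, nodes):
    """Return the names of nodes offering all properties the job requires."""
    return [name for name, props in nodes.items() if job_required <= props]

nodes = {
    "golias01": {"lcg2", "atlas"},   # hypothetical node names and properties
    "golias02": {"lcg2", "alice"},
    "skurut05": {"gilda"},
}

print(eligible_nodes({"lcg2", "atlas"}, nodes))   # -> ['golias01']
print(eligible_nodes({"lcg2"}, nodes))            # -> ['golias01', 'golias02']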

