1 PRAGUE site report

2 Overview
Supported HEP experiments and staff
Hardware on Prague farms
Statistics from the running LHC experiments' Data Challenges (DC)
Experience

3 Experiments and people
Three institutions in Prague
– Academy of Sciences of the Czech Republic
– Charles University in Prague
– Czech Technical University in Prague
Collaborate on experiments
– CERN – ATLAS, ALICE, TOTEM, AUGER
– FNAL – D0
– BNL – STAR
– DESY – H1
Collaborating community of 125 persons
– 60 researchers
– 43 students and PhD students
– 22 engineers and 21 technicians
LCG computing staff – takes care of GOLIAS (farm at IOP AS CR) and SKURUT (farm located at CESNET)
– Jiri Kosina – LCG, experiment software support, networking
– Jiri Chudoba – ATLAS and ALICE software and operations
– Jan Svec – hardware, operating system, PBSPro, networking, D0 software support (SAM, JIM)
– Vlastimil Hynek – runs D0 simulations
– Lukas Fiala – hardware, networking, web

4 Available HW in Prague
Two independent farms in Prague
– GOLIAS – Institute of Physics AS CR: LCG2 (testZone – ATLAS & ALICE production), D0 (SAM and JIM installation)
– SKURUT – CESNET, z.s.p.o.: EGEE preproduction farm, also used for ATLAS DC; separate nodes used for GILDA (a tool/interface developed at INFN to let new users easily use the grid and demonstrate its power), with GENIUS installed on top of the user interface
– Sharing of resources D0:ATLAS:ALICE = 50:40:10 (changed dynamically when needed; see the sketch below)
GOLIAS:
– 80 nodes (2 CPUs each), 40 TB
– 32 dual-CPU nodes PIII 1.13 GHz, 1 GB RAM
– In July 04 bought 49 new dual-CPU Xeon 3.06 GHz worker nodes, 2 GB RAM; currently considering whether HT should be on or off (memory and scheduler problems in older(?) kernels)
– 10 TB disk space; we use LVM to create 3 volumes of 3 TB each, one per experiment, NFS-mounted on the SE
– In July also bought new disk space, now in tests (30 TB XFS NFS-exported partition; unreliable with earlier kernels, newer ones seem reliable so far)
– PBSPro batch system
– New server room: 18 racks, more than half still empty, 180 kW of secured input electric power
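For illustration only, the following Python sketch (an assumption, not the site's actual scheduler configuration) shows the arithmetic behind the 50:40:10 share: splitting a pool of CPUs among D0, ATLAS and ALICE with largest-remainder rounding. The 160-CPU example corresponds to the 80 dual-CPU GOLIAS nodes quoted above.

```python
# Hypothetical sketch: split available CPUs according to the D0:ATLAS:ALICE = 50:40:10 share.
# Not the actual PBSPro setup, just the arithmetic behind the quoted ratio.

def split_cpus(total_cpus, shares):
    """Largest-remainder split so the per-experiment counts sum to total_cpus."""
    weight_sum = sum(shares.values())
    exact = {exp: total_cpus * w / weight_sum for exp, w in shares.items()}
    alloc = {exp: int(v) for exp, v in exact.items()}
    # hand out any remaining CPUs to the largest fractional remainders
    leftover = total_cpus - sum(alloc.values())
    for exp in sorted(exact, key=lambda e: exact[e] - alloc[e], reverse=True)[:leftover]:
        alloc[exp] += 1
    return alloc

if __name__ == "__main__":
    shares = {"D0": 50, "ATLAS": 40, "ALICE": 10}
    print(split_cpus(160, shares))  # 80 dual-CPU nodes -> {'D0': 80, 'ATLAS': 64, 'ALICE': 16}
```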

5 Available HW in Prague
SKURUT – located at CESNET
– 32 dual-CPU nodes PIII 700 MHz, 1 GB RAM (16 LCG / GILDA)
– OpenPBS batch system
– LCG2 installation: 1x CE+UI, 1x SE, WNs (count varies)
– GILDA installation: 1x CE+UI, 1x SE, 1x RB (installation in progress); WNs are manually moved between LCG2 and GILDA as needed
– Will be used for the EGEE tutorial

6 Network connection
General – GEANT connection
– 1 Gbps backbone at GOLIAS, over the 10 Gbps Prague metropolitan backbone
– CZ – GEANT 2.5 Gbps (over 10 Gbps HW)
– USA 0.8 Gbps (Telia)
Dedicated connection – provided by CESNET
– Delivered by CESNET in collaboration with NetherLight
– 1 Gbps (on a 10 Gbps line) optical connection GOLIAS–CERN
– Plan to provide the connection to other institutions in Prague
– Connections to FERMILAB, RAL or Taipei under consideration
– Independent optical connection between the collaborating institutes in Prague will be finished by the end of 2004

7 Data Challenges

8 ATLAS – July 1 – September 21
[tables: jobs, CPU (days) and elapsed (days) on GOLIAS and SKURUT, broken down into all / long (cpu > 100 s) / short jobs]
Number of jobs in DQ (GOLIAS): 1349 done + 1231 failed = 2580 jobs, 52% done
Number of jobs in DQ (SKURUT): 362 done + 572 failed = 934 jobs, 38% done
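As a quick arithmetic check (a sketch, not part of the original slides), the quoted percentages are done / (done + failed), truncated to a whole percent:

```python
# Check of the DQ success rates quoted on the slide: done / (done + failed).
for site, done, failed in [("GOLIAS", 1349, 1231), ("SKURUT", 362, 572)]:
    total = done + failed
    pct = 100 * done / total
    print(f"{site}: {done} done + {failed} failed = {total} jobs, {int(pct)}% done")
# GOLIAS: 1349 + 1231 = 2580 jobs, ~52% done
# SKURUT:  362 +  572 =  934 jobs, ~38% done
```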

9 Local job distribution
GOLIAS – not enough ATLAS jobs
[plot: job distribution by experiment (ALICE, D0, ATLAS), 2 Aug – 23 Aug]

10 Local job distribution
SKURUT – ATLAS jobs – usage much better

11 ATLAS – CPU Time
[plots: CPU time in hours on PIII 1.13 GHz / Xeon 3.06 GHz nodes and on PIII 700 MHz nodes]
Queue limit: 48 hours, later changed to 72 hours

12 Statistics for ATLAS - Jobs distribution

13 ATLAS – Real and CPU Time
Very long tail for real time – some jobs were hanging during I/O operations

14 ATLAS Total statistics
Total time used:
– 1593 days of CPU time
– 1829 days of real time

15 ATLAS Memory usage
Some jobs required > 1 GB RAM (no pileup events yet!)

16 ALICE jobs

17 ALICE

18 ALICE

19 ALICE Total statistics
Total time used:
– 2076 days of CPU time
– 2409 days of real time

20 LCG installation
LCG installation on GOLIAS
– We use PBSPro; in cooperation with Peer Haaselmayer (FZK), a “cookbook” for LCG2 + PBSPro was created (some patching is needed)
– Worker nodes – the first node installation is done using LCFGng, then it is immediately switched off
– From then on everything is done manually – we find it much more convenient and transparent, and the manual installation guide helps
– Currently installed LCG2 version 2_2_0
LCG installation on SKURUT
– Almost a default LCG2 installation, only with some tweaking of PBS queue properties
– We recently found that OpenPBS in LCG2 already contains the required_property patch, which is very convenient for better resource management; we are currently trying to integrate this feature into PBSPro (see the sketch below)
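As a rough illustration of the idea behind a required node property (a hedged sketch under our own assumptions, not the actual LCG required_property patch), a submit-side wrapper could append a property to the PBS "nodes" resource request so jobs land only on worker nodes tagged with it. The property name "lcgpro" and the wrapper function are illustrative assumptions.

```python
# Hypothetical sketch of the required-node-property idea:
# append a property to the PBS "nodes" resource so a job is scheduled only
# on worker nodes carrying that property.
# The property name "lcgpro" and this wrapper are illustrative assumptions.

def add_required_property(resource_list, prop="lcgpro"):
    """Turn e.g. 'nodes=1,walltime=48:00:00' into 'nodes=1:lcgpro,walltime=48:00:00'."""
    parts = []
    for item in resource_list.split(","):
        if item.startswith("nodes=") and prop not in item.split(":"):
            item = item + ":" + prop
        parts.append(item)
    return ",".join(parts)

if __name__ == "__main__":
    print(add_required_property("nodes=1,walltime=48:00:00"))
    # -> nodes=1:lcgpro,walltime=48:00:00  (would be passed to qsub via -l)
```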