Site Report: Prague
Jiří Chudoba, Institute of Physics, Prague
WLCG GridKa+T2s Workshop, 19 September 2006

Grid services

Two independent sites:
- praguelcg2 (golias): Tier2 for ATLAS and ALICE
- prague_cesnet_lcg2 (skurut): uses old hardware (33 dual-CPU nodes) and serves mostly for testing. Although it contributed a few percent of last year's ATLAS production, it is not considered in the following slides.

Golias farm

- 250 CPUs as WNs (published):
  - mostly HP ProLiant DL140, 2x Xeon 3.06 GHz, 2-4 GB RAM, 80 GB ATA HDD
  - newer HP blade servers with dual-core Opterons, 4 GB RAM
  - older HP LP1000r, 2x PIII 1.13 GHz, 1 GB RAM, 18 GB SCSI HDD
- 40 TB (raw) disk space
- 1 Gbps optical link to CESNET
- services: CE, PBSPro, SE (classical + DPM), VO boxes (ALICE, ATLAS), LFC (ALICE), BDII; gLite installed (only the CE is LCG 2.7) (a reachability sketch follows below)
- supported experiments: LHC (ATLAS, ALICE) 100 CPUs, D0 150 CPUs, AUGER, ...
- manpower: 5 administrators (with many other duties)
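As a side note, the service stack above can be spot-checked with a trivial TCP reachability probe. The following is a minimal sketch: the hostnames are placeholders, not the real Prague node names, and the ports are the usual LCG/gLite defaults of that era.

    #!/usr/bin/env python3
    """Minimal sketch: check that the grid services listed above answer
    on their standard TCP ports. Hostnames are hypothetical placeholders;
    ports are the usual LCG/gLite defaults."""
    import socket

    SERVICES = {
        "CE (Globus gatekeeper)": ("ce.example.cz", 2119),   # GRAM default
        "classic SE (gridftp)":   ("se.example.cz", 2811),   # gridftp default
        "DPM head node (SRM)":    ("dpm.example.cz", 8443),  # SRM default
        "site BDII (LDAP)":       ("bdii.example.cz", 2170), # BDII default
    }

    def reachable(host, port, timeout=5.0):
        """True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in SERVICES.items():
        state = "up" if reachable(host, port) else "DOWN"
        print(f"{name:<24} {host}:{port}  {state}")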

Productions during summer 2006

- D0: less activity
- ATLAS: mostly users' jobs
- ALICE: managed by Dagmar Adamova; VO box, 60-120 jobs, outputs sent to CERN (xrootd support for DPM planned, current status unclear)

Performance Issues

- PBS server stability: often crashed, correlated with the number of jobs in the queues (D0 submits thousands of jobs; see the watchdog sketch after this list)
- WNs:
  - ALICE RAM requirements mean only some nodes can be used
  - one misbehaving job affects other jobs on the same node
  - ATLAS non-production grid jobs often get stuck
- DPM: only a single node; another disk server will be added
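Since the crashes correlate with queue length, one stopgap is a cron-driven watchdog over the queued-job count. A sketch, not a deployed tool: the warning threshold is an assumption, and it relies on the default qstat column layout (state in the fifth column).

    #!/usr/bin/env python3
    """Sketch of a cron-driven watchdog: count queued jobs via PBSPro's
    qstat and warn before the server reaches the load region where it
    has crashed. The threshold is illustrative, not a measured limit."""
    import subprocess
    import sys

    QUEUE_WARN_THRESHOLD = 2000  # assumed; tune to where pbs_server degrades

    def queued_job_count():
        """Parse default `qstat` output and count jobs in state Q (queued)."""
        out = subprocess.run(["qstat"], capture_output=True, text=True,
                             check=True)
        count = 0
        for line in out.stdout.splitlines():
            fields = line.split()
            # default qstat columns: Job id, Name, User, Time Use, S, Queue
            if len(fields) >= 5 and fields[4] == "Q":
                count += 1
        return count

    if __name__ == "__main__":
        n = queued_job_count()
        if n > QUEUE_WARN_THRESHOLD:
            print(f"WARNING: {n} queued jobs, pbs_server may become unstable",
                  file=sys.stderr)
            sys.exit(1)
        print(f"OK: {n} queued jobs")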

Data Transfers, SC4

- tests since April
- ATLAS Tier2s' pretest, June 6-9; some notes on the wiki
- ATLAS Tier0 test, June-July: many performance problems for transfers from CERN to Prague via FZK; succeeded for some datasets

Tier1-Tier2 Data Transfer Problems

- OK for a few small files
- does not work for many big files; no improvement since this spring
- 1 file, 1 GB: done in 65 s
- 50 files (10 in parallel), 1 GB each (throughput arithmetic sketched below)
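For scale: 1 GB in 65 s is only about 15 MB/s, roughly 12% of the 1 Gbps link, yet ten such streams in parallel would already exceed the link, so the 50-file test cannot scale linearly from the single-file rate. A back-of-envelope sketch, assuming decimal units:

    #!/usr/bin/env python3
    """Back-of-envelope check of the transfer numbers quoted above."""

    LINK_MBIT = 1000.0           # 1 Gbps optical link to CESNET
    FILE_GB, SECONDS = 1.0, 65.0 # measured: one 1 GB file in 65 s

    mbyte_per_s = FILE_GB * 1000 / SECONDS  # ~15.4 MB/s
    mbit_per_s = mbyte_per_s * 8            # ~123 Mbit/s

    print(f"single file: {mbyte_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s")
    print(f"link utilisation: {mbit_per_s / LINK_MBIT:.0%}")
    # 10 parallel streams at this per-stream rate would need ~1.23 Gbps,
    # more than the link itself -- so the 50-file test hits contention
    # at the network or at the single DPM disk node.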

FTS issues

- FZK-FZU and FZU-FZK channels replaced by STAR-FZU and STAR-FZK; Tier2s cannot change settings for uploads to their Tier1
- FTS monitoring: we need access to log files, results of other transfers, and current status (a polling sketch follows); example from SARA:
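Lacking access to the FTS logs, job state can at least be polled from the Tier2 side. A minimal sketch, assuming the gLite transfer CLI (glite-transfer-status) is installed and behaves as its documentation describes; the endpoint URL is a placeholder, not the real FZK service.

    #!/usr/bin/env python3
    """Sketch: poll an FTS job from the Tier-2 side with the gLite
    transfer CLI, as a stopgap for the missing log access. The endpoint
    URL is a placeholder."""
    import subprocess
    import time

    FTS_SERVICE = "https://fts.example-tier1.de:8443/..."  # placeholder

    TERMINAL = {"Finished", "FinishedDirty", "Failed", "Canceled"}

    def job_state(job_id):
        """Return the overall state printed by glite-transfer-status."""
        out = subprocess.run(
            ["glite-transfer-status", "-s", FTS_SERVICE, job_id],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def wait_for(job_id, poll_s=60):
        """Poll until the job reaches a terminal FTS state."""
        while True:
            state = job_state(job_id)
            print(f"{job_id}: {state}")
            if state in TERMINAL:
                return state
            time.sleep(poll_s)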

Planned activities

- current ATLAS DQ2 functional test: files did not succeed in reaching FZU
- repeated ATLAS Tier0 test
- another 2 TB of disk space connected to DPM as another pool, expected soon
- local ATLAS data management: deletion of old files, replication of active files to the Tier1; no tools yet (a first sketch follows)
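For the deletion step, a first cut could look like the sketch below. The listing format, retention period and lcg-del invocation are assumptions, not an agreed ATLAS policy; it runs as a dry run by default.

    #!/usr/bin/env python3
    """Sketch of the planned age-based cleanup, since no tool exists yet:
    read '<unix_mtime> <SURL>' lines and delete entries older than a
    cutoff with lcg-del. Listing format and cutoff are assumptions."""
    import subprocess
    import sys
    import time

    MAX_AGE_DAYS = 180   # assumed retention period
    VO = "atlas"

    def delete_old(listing_path, dry_run=True):
        """Delete (or list) all SURLs older than the cutoff."""
        cutoff = time.time() - MAX_AGE_DAYS * 86400
        with open(listing_path) as listing:
            for line in listing:
                mtime, surl = line.split(None, 1)
                surl = surl.strip()
                if float(mtime) < cutoff:
                    if dry_run:
                        print(f"would delete {surl}")
                    else:
                        subprocess.run(["lcg-del", "--vo", VO, surl],
                                       check=True)

    if __name__ == "__main__":
        delete_old(sys.argv[1])  # dry run; pass dry_run=False to act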