Tier-2 cloud
Pre-GDB, Prague, Apr. 3rd 2007
Holger Marten, Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft (Holger.Marten at iwr.fzk.de), www.gridka.de


Slide 1: Tier-2 cloud
Holger Marten (Holger.Marten at iwr.fzk.de), Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft
Pre-GDB, Prague, Apr. 3rd 2007

Slide 2: GridKa-associated Tier-2 sites are spread over 3 EGEE regions (4 LHC experiments, 5 (soon: 6) countries, >20 Tier-2 sites).

Slide 3: [Chart: CPU resources in region DECH per experiment (ALICE, ATLAS, CMS, LHCb), in units of 1000 SI2k]

Slide 4: [Figure: GridKa and the four LHC experiments (ATLAS, CMS, LHCb, ALICE)]

Slide 5: Tier-2s associated with GridKa (the WLCG GridKa cloud)

Name                     | Location                            | ALICE | ATLAS | CMS | LHCb
CH / CSCS                | Manno                               |       |   X   |  X  |  X
Czech R. / FZU           | Prague                              |   X   |   X   |     |
D / DESY                 | DESY Hamburg + Zeuthen              |       |   X   |     |
D / CMS-Fed.             | DESY Hamburg + Zeuthen, RWTH Aachen |       |       |  X  |
D / GSI                  | GSI Darmstadt                       |   X   |       |     |
D / Atlas-Fed.           | Munich MPG + TU                     |       |   X   |     |
Polish Tier-2 Federation | Cracow, Poznan, Warsaw              |   X   |   X   |  X  |  X
RU / RDIG                | Federation (8+?)                    |   X   |       |     |

Candidates:
Austria                  | Innsbruck, Vienna                   |       |   X   |     |
D / U Münster            | Münster                             |   X   |       |     |
D / U Freiburg           | Freiburg                            |       |       |     |

Slide 6: Tested FTS channels GridKa ↔ Tier-0 / 1 / 2 (not sure that this is up to date)

Tier-0 ↔ FZK:
CERN - FZK

FZK ↔ Tier-1:
IN2P3 - FZK, PIC - FZK, RAL - FZK, SARA - FZK, TAIWAN - FZK, TRIUMF - FZK, BNL - FZK, FNAL - FZK, INFNT1 - FZK, NDGFT1 - FZK

FZK ↔ Tier-2:
FZK - CSCS, FZK - CYFRONET, FZK - DESY, FZK - DESYZN, FZK - FZU, FZK - GSI, FZK - ITEP, FZK - IHEP, FZK - JINR, FZK - PNPI, FZK - POZNAN, FZK - PRAGUE, FZK - RRCKI, FZK - RWTHAACHEN, FZK - SINP, FZK - SPBSU, FZK - TROITSKINR, FZK - UNIFREIBURG, FZK - UNIWUPPERTAL, FZK - WARSAW

Slide 7: Non-associated Tier-2s accessing data at GridKa (taken from the Megatable)
- 9 European sites
- 7 U.S. sites
- 5 from the Far East
- + 3 additional candidates
All CMS (see the CMS computing model). They will be served through FTS STAR channels.
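The STAR-channel fallback can be pictured with a minimal sketch of how FTS channel matching of that era conceptually worked: a dedicated site-to-site channel is preferred, and otherwise a STAR wildcard channel catches the transfer. The channel set and the wildcard matching order below are illustrative assumptions, not GridKa's actual configuration.

```python
# Illustrative sketch (not the actual FTS code): how gLite FTS channel
# matching conceptually decides which channel serves a transfer.
# A dedicated SRC-DST channel wins; otherwise a STAR wildcard channel
# is used, as for the non-associated CMS Tier-2s mentioned above.
# Channel names are excerpted from slide 6; the matching order of the
# two wildcard forms is an assumption.

CHANNELS = {
    "CERN-FZK", "IN2P3-FZK", "RAL-FZK",   # Tier-0/1 channels (excerpt)
    "FZK-CSCS", "FZK-FZU", "FZK-DESY",    # Tier-2 channels (excerpt)
    "FZK-STAR",                            # wildcard: FZK to anywhere
}

def match_channel(source: str, dest: str) -> str | None:
    """Return the most specific channel serving source -> dest."""
    for name in (f"{source}-{dest}",       # dedicated channel first
                 f"{source}-STAR",         # any destination
                 f"STAR-{dest}"):          # any source
        if name in CHANNELS:
            return name
    return None

# A dedicated channel exists for an associated Tier-2 ...
assert match_channel("FZK", "FZU") == "FZK-FZU"
# ... while a non-associated CMS Tier-2 falls back to the STAR channel.
assert match_channel("FZK", "SOME_US_T2") == "FZK-STAR"
```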

Slide 8: Transfer rates for GridKa according to the Megatable
- T0 → T1: … MB/s
- T1 → T1 in: … MB/s
- T1 → T1 out: … MB/s
- T2 → T1: 84.4 MB/s average, … MB/s peak
- T1 → T2: … MB/s average, … MB/s peak

Network links:
- 10 Gbps dedicated GridKa – CERN, plus 10 Gbps GridKa – CNAF failover
- 10 Gbps GridKa – CNAF
- 10 Gbps GridKa – SARA/NIKHEF
- 10 Gbps GridKa – IN2P3
- 10 Gbps GridKa – Internet
- 1 Gbps GridKa – Poland
- 1 Gbps GridKa – Czech R.

Disk and tape requirements for GridKa according to the Megatable are o.k. (balance slightly positive). Is that correct? D/CMS gives 8 MB/s average but 202 MB/s peak!
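To put the quoted rates against the link capacities above, a quick unit-conversion sketch (assuming decimal network units, 1 Gbps = 1000 Mbps, and 1 byte = 8 bits):

```python
# Back-of-the-envelope check of the Megatable rates against the link
# capacities listed on this slide.

def mbps_needed(rate_mb_per_s: float) -> float:
    """Convert a data rate in MB/s to the required line rate in Mbps."""
    return rate_mb_per_s * 8

# A 1 Gbps link (e.g. GridKa - Poland / Czech R.) carries at most:
print(1000 / 8)                      # 125.0 MB/s

# T2 -> T1 average from the slide: 84.4 MB/s is about 0.68 Gbps.
print(mbps_needed(84.4))             # 675.2 Mbps

# The quoted D/CMS peak of 202 MB/s (~1.6 Gbps) would saturate a
# 1 Gbps link but fits comfortably into a 10 Gbps link.
print(mbps_needed(202))              # 1616.0 Mbps
```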

Slide 9: Deployed services for Tier-2s
- the usual Tier-1 site services (CEs, SE, BDIIs, VO boxes, …)
- top-level BDII
- RB
- FTS (see the overview of tested channels)
- 3D Oracle and Squid databases deployed (a 3rd machine for ATLAS soon)
- LFC (currently MySQL, to be migrated to an Oracle DB)
But we are not always sure about the usage of the RB, top-level BDII, … by other sites.
General trends at GridKa:
- virtualize services on redundant and reliable hardware
- run DNS round-robin for load balancing
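As a rough illustration of the DNS round-robin approach: one service alias resolves to several A records, and clients taking the first answer are spread across machines as the DNS server rotates the record order. The sketch below resolves all addresses behind an alias; the hostname is hypothetical.

```python
# Minimal sketch of what DNS round-robin load balancing gives a client.
# The service alias below is hypothetical, not a real GridKa hostname.

import socket

def resolve_all(hostname: str, port: int = 8443) -> list[str]:
    """Return every IPv4 address a service alias resolves to."""
    infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, (address, port)).
    return sorted({info[4][0] for info in infos})

# A round-robin alias such as "fts.gridka.example" (hypothetical) would
# return the addresses of all machines behind the service:
# print(resolve_all("fts.gridka.example"))
```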

Slide 10: Examples from the last Service Challenges

Slide 11: Data transfers November 2006
- hourly averaged dCache I/O rates and tape transfer rates
- achieved 477 MB/s peak (1-hour average) data rate
- >440 MB/s during 8 hours (T0 → T1 plus T1 → T1)
- >200 MB/s to tape achieved with 8 LTO3 drives
- higher tape throughput already reached in October 2006
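The per-drive tape rate implied by these numbers can be made explicit; the achieved total and drive count are from the slide, while the roughly 80 MB/s nominal native rate of an LTO-3 drive is taken from the drive specification:

```python
# Per-drive tape throughput implied by the slide: >200 MB/s over
# 8 LTO3 drives. Comparing against the nominal LTO-3 native rate
# shows how far real-world (non-streaming-optimal) conditions sit
# below the spec sheet.

achieved_total = 200        # MB/s to tape, from the slide
n_drives = 8
lto3_native = 80            # MB/s, nominal LTO-3 native rate (spec)

per_drive = achieved_total / n_drives
print(f"{per_drive:.1f} MB/s per drive "
      f"({per_drive / lto3_native:.0%} of nominal)")   # 25.0 MB/s (31%)
```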

Slide 12: Gridview T0 → FZK plots for Nov. …th: high CMS transfer rates, >200 MB/s

Slide 13: Multi-VO transfers December 06
Target: ALICE 24 MB/s, ATLAS 83.3 MB/s, CMS 26.3 MB/s; sum: 134 MB/s
- CMS disk-only pools at FZK full
- LFC down → FTS failed
- RED = ATLAS
It's possible, but still needs reliability, as everywhere…

Slide 14: ATLAS DDM tests: Tier-1 + Tier-2 cloud
Participating Tier-2s: DESY-HH, DESY-ZN, Wuppertal, FZU, CSCS, Cyfronet
3-step functional tests:
1. 1 dataset subscribed to each Tier-2, plus one additional dataset to all Tier-2s: 100% of files transferred.
2. 2 datasets to each Tier-2: problem with the ATLAS VO at Wuppertal, a few replication failures.
3. 1 dataset in each Tier-2 subscribed to GridKa: 100% of files transferred.
Parallel subscription of datasets (a few 100 GB) to all Tier-2s (Dec. 06).
Throughput tests still to be done!
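As a conceptual illustration of the subscription model these tests exercise: a dataset is subscribed to a site, and an agent keeps replicating files until the site holds a complete copy. This toy sketch is not the actual ATLAS DQ2 API; all names in it are hypothetical.

```python
# Toy sketch of the subscription-driven replication behind the DDM
# functional tests above. Illustration of the concept only, not the
# ATLAS DQ2 implementation; all names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    files: set[str] = field(default_factory=set)

def subscribe(dataset: set[str], source: Site, dest: Site) -> float:
    """Replicate all missing files of a dataset; report completion."""
    for f in dataset - dest.files:
        if f in source.files:          # a real agent would retry failures
            dest.files.add(f)
    done = len(dataset & dest.files)
    return done / len(dataset)         # fraction transferred (1.0 = 100%)

gridka = Site("FZK", files={"f1", "f2", "f3"})
fzu = Site("FZU")
# Step 1 of the tests: one dataset subscribed to a Tier-2, 100% expected.
print(subscribe({"f1", "f2", "f3"}, gridka, fzu))   # 1.0
```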

Slide 15: CMS Tier-2 DESY-Aachen Federation
- significant contributions to the CMS SC4 and CSA06 challenges
- stable data transfers: 55 TB transferred to DESY/Aachen disk within 45 days, 45 TB to DESY tape
- the Aachen CMS muon and computing groups successfully demonstrated the full grid chain, from data taking at T0 to user analysis at T2, for the first time
- 14% of total CMS grid MC production
- 2007/2008: MC production / calibration in Aachen, MC production and user analysis at DESY
- significant upgrade of resources
- further improve cooperation between the German CMS centres (including Uni KA and GridKa)

Slide 16: Polish Federated Tier-2
- 3 computing centres, each mainly supporting one experiment: Kraków (ATLAS, LHCb), Warsaw (CMS, LHCb), Poznań (ALICE)
- connected via the Pionier academic network
- 1 Gb/s point-to-point network link to GridKa in place
- successful participation in the ATLAS SC4 T1 ↔ T2 tests: up to 100 MB/s transfer rates from Kraków to GridKa, 50% slower in the other direction; …% file transfer efficiency
- 1000 kSI2k CPU and 250 TB disk will be provided by the Polish Tier-2 Federation at LHC startup

Slide 17: FZU Prague
[Plots: number of ATLAS jobs submitted to Golias; CPU-equivalent usage, i.e. the average number of CPUs used continuously]
Successful participation in the ATLAS DDM tests!

Slide 18: The GridKa cloud: how do we communicate (examples)
- dedicated Tier-2 and experiment contact at GridKa (A. Heiss)
- GridKa – Tier-2 meeting in Munich in Oct.
- GridKa contribution to the Polish federation meeting in Feb.
- German Tier-2 representative in the GDB
- Tier-2 participation in the face-to-face meetings of the GridKa TAB
- several experiment-specific meetings with Tier-2 participation
- …

Slide 19: GridKa upgrades 2007 …

Slide 20: Upgrades in 2007
Install additional CPUs (April), completed on Monday, April 2nd:
- LHC experiments: 1027 kSI2k + 837 kSI2k = 1864 kSI2k
- non-LHC experiments: 1060 kSI2k + 210 kSI2k = 1270 kSI2k
Add tape capacity (April), completed but needs some hardware maintenance for the new drives:
- LHC experiments: 393 TB + 614 TB = 1007 TB
- non-LHC experiments: 545 TB + 40 TB = 585 TB
Add disk capacity (July), installation / allocation started:
- LHC experiments: 284 TB + 594 TB = 878 TB (usable)
- non-LHC experiments: 353 TB + 90 TB = 443 TB (usable)
2007: LHC experiments will have the biggest fraction of the GridKa resources!
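The increments above follow directly from the before/after totals quoted on the slide; a minimal check making that arithmetic explicit:

```python
# The capacity increments are derived from the before/after totals on
# slide 20 (kSI2k for CPU, TB for tape and disk); this just verifies
# the arithmetic.

upgrades = {
    # resource: (before, after)
    "CPU LHC (kSI2k)":     (1027, 1864),
    "CPU non-LHC (kSI2k)": (1060, 1270),
    "tape LHC (TB)":       (393, 1007),
    "tape non-LHC (TB)":   (545, 585),
    "disk LHC (TB)":       (284, 878),
    "disk non-LHC (TB)":   (353, 443),
}

for name, (before, after) in upgrades.items():
    print(f"{name}: +{after - before} (from {before} to {after})")
```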