Status of the DESY Grid Centre
Volker Guelzow for the Grid Team
DESY IT
Hamburg, October 25th, 2011

Page 2: Outline
> The 2011 DESY Grid Centre status
> Requests and pledges for WLCG
> The NAF
> PRC relevant
> The German Tier 2 situation
> Summary

Page 3: DESY Grid Centre usage by VO

Page 4: The German T2s (ATLAS, CMS, LHCb): pledges for Germany

Site                    CPU 2012 [HS06]   CPU 2012 [% Tot]   Disk 2012 [TB]   Disk 2012 [% Tot]
A DESY
C Aachen/DESY
A Goettingen
A Munich
A FR/Wupp
L DESY
Sum CMS
Sum ATLAS
Sum DESY
Sum non-DESY
Grand Total ATLAS T2
Grand Total CMS T2
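Where the slide gives both absolute pledges and percentage shares, the derivation is simple proportional arithmetic. Below is a minimal sketch, assuming the "[% Tot]" columns denote each site's share of the summed German pledges; the site figures are hypothetical placeholders, since the actual 2012 values are not reproduced in the transcript.

```python
# Hypothetical placeholder pledges, not the real 2012 German figures.
SITES = {
    "A DESY":        {"cpu_hs06": 10000, "disk_tb": 1000},
    "C Aachen/DESY": {"cpu_hs06":  8000, "disk_tb":  800},
    "A Goettingen":  {"cpu_hs06":  4000, "disk_tb":  400},
}

def add_percent_of_total(sites):
    """Append each site's percentage share of the summed CPU and disk pledges."""
    cpu_total = sum(s["cpu_hs06"] for s in sites.values())
    disk_total = sum(s["disk_tb"] for s in sites.values())
    return {
        name: {**vals,
               "cpu_pct": 100.0 * vals["cpu_hs06"] / cpu_total,
               "disk_pct": 100.0 * vals["disk_tb"] / disk_total}
        for name, vals in sites.items()
    }

if __name__ == "__main__":
    for name, vals in add_percent_of_total(SITES).items():
        print(f"{name:15s} CPU {vals['cpu_pct']:5.1f} %   Disk {vals['disk_pct']:5.1f} %")
```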

Page 5: ATLAS delivered vs. pledged CPU

Page 6: CMS delivered vs. pledged computing

Page 7: DESY T2 availability/reliability from WLCG monitoring
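As a reference for reading this slide, the sketch below works out the two figures reported by WLCG monitoring, assuming the usual definitions: availability is the fraction of the reporting period in which the site passed the monitoring tests, while reliability additionally discounts scheduled downtime. The numbers are placeholders, not DESY monitoring data.

```python
def availability(ok_hours, total_hours):
    """Fraction of the reporting period in which the site passed the WLCG tests."""
    return ok_hours / total_hours

def reliability(ok_hours, total_hours, scheduled_downtime_hours):
    """Like availability, but scheduled downtime is not counted against the site."""
    return ok_hours / (total_hours - scheduled_downtime_hours)

if __name__ == "__main__":
    total = 30 * 24     # one month of hours (placeholder)
    ok = 700            # hours with tests passing (placeholder)
    scheduled = 12      # hours of announced maintenance (placeholder)
    print(f"availability: {availability(ok, total):.1%}")
    print(f"reliability:  {reliability(ok, total, scheduled):.1%}")
```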

Page 8: ATLAS: DESY and German Cloud contributions (1)
> DESY (HH + ZN together) competing with the ATLAS T1s
> All jobs since:

Page 9: ATLAS: DESY and German Cloud contributions (2)
> Excellent reliability
> ATLAS GangaRobot hourly tests: expect 95% reliability (Aug & Sept 2011, all ATLAS T2s)

Page 10: CMS Tier-2 Contribution
> Volume of subscribed data sets at Tier-2s [TB]; subscribed PhEDEx data [TB]
> “Unregistered” data:
                  DESY [TB]   RWTH [TB]
    /store/user   290*        113
    /store/group
    * 580 TB gross
> FNALLPC = US “interactive” analysis facility
> Analysis & MC production jobs since June 2011
> Still a large failure fraction: wasting ~20% of resources, mainly bad user jobs
> Very good performance; DESY is a very important CMS Tier-2

Page 11: NAF usage
> NAF well utilized
> Operation very smooth and reliable since the last PRC
> UHH/CMS a strong user (own purchases!)
(Plots: CPU usage, Lustre usage)

Page 12: NAF CPU resources
> Total (with recent purchases): 41.3 kHS06, 3000 slots; huge contribution from UniHH/CMS
> Details of purchases in 2011:
   7.8 kHS06 / 576 cores: half replacement, half enlargement, beginning of 2011 (from the DESY budget)
   3 kHS06 / 192 cores: mostly replacement in Zeuthen, recently (from the DESY budget)
   9.2 kHS06 / 672 cores: upgrade in Hamburg, recently (from the DESY budget)
   9.2 kHS06 / 672 cores: upgrade in Hamburg, recently (from the UniHH/CMS group!)
> Plan for 2012:
   Await the technology change end of 2011 (AMD Interlagos) / Q1 2012 (Intel Sandy Bridge); planning at the end of Q1 2012
   In 2012: plan for an additional 10 kHS06 of NAF purchases, distributed to the experiments according to the key 4 ATLAS / 2 CMS / 1 ILC / 1 LHCb (see the sketch below)
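A small sketch of the arithmetic behind the 2012 sharing key quoted above (4 ATLAS / 2 CMS / 1 ILC / 1 LHCb); the helper below is illustrative only, not DESY tooling.

```python
# Sharing key from the slide: 4 ATLAS / 2 CMS / 1 ILC / 1 LHCb.
KEY = {"ATLAS": 4, "CMS": 2, "ILC": 1, "LHCb": 1}

def split_purchase(total_khs06, key):
    """Divide a purchase (in kHS06) among experiments proportionally to the key."""
    weight_sum = sum(key.values())
    return {exp: total_khs06 * weight / weight_sum for exp, weight in key.items()}

if __name__ == "__main__":
    for exp, share in split_purchase(10.0, KEY).items():
        print(f"{exp:6s} {share:.2f} kHS06")
    # -> ATLAS 5.00, CMS 2.50, ILC 1.25, LHCb 1.25
```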

Page 13: Data transfer from the Storage Element (1-week sample)

Page 14: Future plans for the NAF
> Replacement of the Lustre file system: call for tenders ongoing
> Strengthen NAF reliability, availability and performance:
   reduce the impact on the HH-ZN WAN network connection
   make the two NAF parts more independent
> New features for the NAF, requested by users: e.g. fast X-connection, collaborative tools, …
> Hardware enlargement plans: see slide “NAF CPU resources”

Page 15: PRC relevant
> Helmholtz Alliance until the end of 2012
> Tier 1 for IceCube and computing for CTA in Zeuthen
> LHCONE: initiative to connect T2s and T3s; needs more experience and monitoring for further decisions
> European Middleware Initiative (dCache) until mid-2013
> Large Scale Data Management project starting in 2012
> Active role in the HPC initiative of the HGF (may end in funding)

Page 16: The German Tier 2 situation
> Funding of the DESY Tier 2 secured through base investment
> Funding of Tier 2 secured through base & capital investment
> Funding of ½ Tier 2 secured
> Funding of the universities' Tier 2: investment only secured until mid-2012
   Different ideas are under debate; DESY may be asked to act as a coordinator

Page 17: Summary
> The DESY Grid Centre was again very reliable
> The DESY T2 centre is larger than some T1 centres
> The DESY T2 is very attractive because of the available data
> The NAF was used intensively and nationwide
> The NAF is under further development; major upgrades took place and more are planned
> dCache is still the major workhorse for LHC data management, and DESY has the lead
> DESY is very active in attracting third-party money
> There is still an open financing issue for the universities' Tier 2 after the end of 2012