Computing
Jiří Chudoba, Institute of Physics, CAS

Projects and resources
- CERN, LHC: ATLAS and ALICE
- FERMILAB: D0, NOvA
- Pierre Auger Observatory, Belle II, Cherenkov Telescope Array
- Main resources
  - INGO projects (MEYS)
  - Czech Academy of Sciences: Institute of Physics and Nuclear Physics Institute (infrastructure, operational costs)
  - CESNET (Czech NREN): networking, 1 grid site, NGI central services
  - CU and CTU: Tier3 clusters

Tier2 for LHC
- Location: IoP (Prague) and NPI (Rez)
  - NPI contributes several disk servers dedicated to ALICE
- Server room
  - 62 m², 400 kVA UPS, 350 kVA diesel generator, 108 kW air cooling capacity, 176 kW water cooling capacity
  - up to 20 racks, 8 connected to the water cooling system

Golias farm
- Several subclusters, transparent to users
- Single batch system (Torque/Maui)
- UI, CREAM CE, BDII, APEL, ...
  - virtualized infrastructure (virt01, 02, 03)
- Home directories
  - Optima disk array (9 TB total)
- Subclusters under warranty:
  - aplex01 – aplex32: Asus servers in an iDataPlex rack, 12/2014, HS06, 1072 job slots
  - malva01 – malva12: Supermicro twin servers, 12/2013, 4220 HS06, 384 job slots

Subclusters: 43 kHS06 and 77 kW in total (see the sketch below)
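As a quick sanity check of the quoted totals, a minimal sketch in Python that derives an HS06-per-job-slot ratio for the malva subcluster and an HS06-per-kW figure for the whole farm; both ratios are computed here only for illustration and do not appear on the slides.

```python
# Back-of-the-envelope cross-check of the quoted farm capacity figures.
# The input numbers are taken from the slides; the ratios are illustrative only.

MALVA_HS06 = 4220       # malva subcluster capacity (from the slide)
MALVA_SLOTS = 384       # malva job slots (from the slide)

TOTAL_HS06 = 43_000     # "43 kHS06" across all subclusters
TOTAL_POWER_KW = 77     # "77 kW" total power draw

hs06_per_slot = MALVA_HS06 / MALVA_SLOTS     # about 11 HS06 per job slot
hs06_per_kw = TOTAL_HS06 / TOTAL_POWER_KW    # about 560 HS06 per kW

print(f"malva: {hs06_per_slot:.1f} HS06 per job slot")
print(f"farm:  {hs06_per_kw:.0f} HS06 per kW")
```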

Storage
- DPM servers for ATLAS, PAO, CTA, Belle II
  - 12 servers plus a headnode
  - 2.5 PB for ATLAS, 200 TB for PAO
- xrootd servers for ALICE (see the sketch below)
  - 4 servers at IoP plus 4 servers at NPI
  - 1 PB, plus TB soon
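For context, a minimal sketch of how a file could be read back from the ALICE xrootd pool with the standard xrdcp client, called here from Python; the host name and file path are placeholders, not the actual Prague endpoints.

```python
# Minimal sketch of copying a file from an xrootd storage element.
# xrdcp ships with the XRootD client tools; "-f" overwrites the destination.
import subprocess

SOURCE = "root://xrootd.example.cz//alice/sim/2015/example.root"  # hypothetical endpoint
DEST = "/tmp/example.root"

subprocess.run(["xrdcp", "-f", SOURCE, DEST], check=True)
```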

Network: FZU – FZK link, LHCONE link (network diagram on the slide)

WLCG Pledges
- ATLAS + ALICE: HS06, 2500 TB
  - without local jobs and LOCALGROUPDISK (ATLAS)
  - fulfilled every month, from 137% to 230% (see the sketch below)
  - usage of unsupported hardware, minimal downtimes, lower usage by NOvA in February 2015
- Pledges for 2015:
  - ATLAS: HS06, 1600 TB
  - ALICE: 3500 HS06, 1030 TB
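A minimal sketch of the arithmetic behind the 137% to 230% fulfilment figures: delivered capacity divided by the pledged capacity. The 2500 TB pledge is taken from the slide; the monthly "delivered" numbers are invented for illustration and are not real accounting data.

```python
# Pledge fulfilment = delivered / pledged, expressed as a percentage.

PLEDGED_TB = 2500  # combined ATLAS + ALICE disk pledge (from the slide)

# Hypothetical delivered-capacity values, chosen only to illustrate the range.
delivered_examples = {"2014-11": 3430, "2014-12": 4120, "2015-01": 5750}

for month, delivered_tb in delivered_examples.items():
    fulfilment = 100.0 * delivered_tb / PLEDGED_TB
    print(f"{month}: {fulfilment:.0f}% of pledge")
```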

Operations
- ASAP (ATLAS Site Availability Performance), DE cloud, last 3 months (plot on the slide)

Manpower for Tier2
- 3 full-time system administrators at FZU + 0.3 FTE from NPI for the Tier2
  - 2 administrators left in 2014 and were replaced
  - strong competition for IT experts in Prague
  - other administrators run non-HEP clusters
  - supported by other personnel
- 2 physicists as experiment contacts and for management

Other resources - CESNET
- CESNET: Czech NREN, also runs the National Grid Initiative
  - resources in several towns
  - Metacentrum: cores; 400 cores in the EGI grid
- CESNET EGI cluster statistics for 2014 (usage chart for Belle, CTA and Auger on the slide)

CESNET Storage
- Metacentrum: DPM, 180 TB (Auger, Belle, CTA)
- Data Storages (regional development): 21 PB
  - dCache: 30 TB of disk space, about 100 TB of tape space
  - ATLASLOCALGROUPTAPE; we can negotiate more
  - Pilsen: 500 TiB disks, 3.6 PiB MAID, 4.8 PiB tapes
  - Jihlava: 804 TiB disks, 2.5 PiB MAID, 3.7 PiB tapes
  - Brno: 500 TiB disks, 2.1 PiB MAID, 3.5 PiB tapes
  - MAID = massive array of idle drives (per-site capacities summed in the sketch below)
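A quick cross-check of the per-site figures, mainly to make the TiB/PiB unit mix explicit; the sum comes out around 22 PiB, in the same ballpark as the quoted overall capacity.

```python
# Sum the per-site CESNET Data Storage capacities quoted on the slide.
# Disks are given in TiB, MAID and tape capacities in PiB.

TIB_PER_PIB = 1024

sites = {
    # site:    (disks_TiB, maid_PiB, tapes_PiB)
    "Pilsen":  (500, 3.6, 4.8),
    "Jihlava": (804, 2.5, 3.7),
    "Brno":    (500, 2.1, 3.5),
}

total_pib = sum(d / TIB_PER_PIB + m + t for d, m, t in sites.values())
print(f"total installed capacity: {total_pib:.1f} PiB")  # roughly 22 PiB
```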

IT4I
- National supercomputing project in Ostrava
- Anselm (small cluster): 4000 cores
- Salomon (big cluster) in May 2015, top 100
  - Intel Xeon cores
  - Intel Phi cores
  - 2 PB on disks, 3 PB on tapes
  - 10 MEuro + VAT (plus 9 MEuro for the building)
- ATLAS tested access for backfilling (see the sketch below)
  - ARC CE (European solution)
  - PanDA (US solution)
  - more manpower needed from the ATLAS side
(Image source: IT4I)
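A rough sketch of what a backfill test submission through an ARC CE might look like; the CE host name and the xRSL job description below are generic placeholders and not the actual ATLAS/PanDA configuration used at IT4I.

```python
# Submit a simple test job through an ARC CE using the ARC client tools.
# The CE endpoint is hypothetical; arcsub must be installed and a valid
# grid proxy available for this to actually run.
import subprocess
import tempfile

XRSL = """&
 (executable = "run_payload.sh")
 (jobName = "backfill-test")
 (stdout = "job.out")
 (stderr = "job.err")
 (cpuTime = "60")
"""

with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
    f.write(XRSL)
    jobdesc = f.name

subprocess.run(["arcsub", "-c", "arc-ce.example.cz", jobdesc], check=True)
```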

Outlook
- Tier2 as the main and stable resource
  - investment money promised by MEYS for 2016
- Possibly bigger Tier3 capacity at CU (Charles University)
  - new server room built
- HPC program: IT4I
- Cloud resources (CESNET involved in EU programs)
  - currently more suitable for smaller projects