Swiss GRID Activities  Christoph Grab, Lausanne, March 2009.


Swiss GRID Activities  Christoph Grab  Lausanne, March 2009

Christoph Grab, ETH 2 Global Grid Community The global GRID community …

Christoph Grab, ETH 3 Grids in Switzerland (Just Some)  Concentrate on HEP…

Christoph Grab, ETH 4 WLCG - Hierarchy Model … Swiss LHCb Tier-3, ATLAS Tier-3, CMS Tier-3

Status of the Swiss Tier-2 Regional Centre

Christoph Grab, ETH 6 Swiss Tier-2: Facts and Figures (1)
The Swiss Tier-2 is operated by a collaboration of CHIPP and CSCS (Swiss Centre of Scientific Computing of ETHZ), located in Manno (TI).
Properties: one Tier-2 for all three experiments (CMS, ATLAS + LHCb); it provides:
 Simulation for the experiments' communities (supplying the WLCG pledges)
 End-user analysis for the Swiss community
 Support (operation and data supply) for the Swiss Tier-3 centres
Standard Linux compute cluster "PHOENIX" (similar to other Tier-2s).
Hardware setup increased incrementally in phases; technology choice so far: SUN blade centres + quad-core Opterons.
Final size to be reached by ~2009/2010; NO tapes.

Christoph Grab, ETH 7 Swiss Tier-2: Cluster Evolution
Growth corresponds to the Swiss commitment in terms of compute resources supplied to the experiments, according to the signed MoU with WLCG.
 In operation now: 960 cores ≈ 1600 kSI2k; total 520 TB storage
 Last phase planned for Q4/09: ≈ 2500 kSI2k; ~1 PB storage
[Chart: cluster growth through Phase 0, Phase A, Phase B (operational) and Phase C (planned Q4/09); CG_0209]
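As a rough cross-check of the figures on this slide, the short sketch below (assuming the 960 cores and 1600 kSI2k describe the same Phase B installation) gives the implied per-core rating and, purely as a hypothetical extrapolation, the core count a 2500 kSI2k Phase C would need at that same rating; the actual procurement may of course use different hardware.

```python
# Minimal sketch: per-core rating of Phase B and a naive Phase C extrapolation.
cores_phase_b = 960          # cores in operation (from the slide)
power_phase_b = 1600.0       # total compute power in kSI2k (from the slide)
power_phase_c = 2500.0       # planned total for Q4/09 in kSI2k (from the slide)

per_core = power_phase_b / cores_phase_b
print(f"per-core rating: {per_core:.2f} kSI2k/core")                     # ~1.67 kSI2k/core

# Hypothetical: cores needed for Phase C if the per-core rating stayed the same.
print(f"Phase C at the same rating: ~{power_phase_c / per_core:.0f} cores")  # ~1500 cores
```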

Christoph Grab  Swiss LHC Tier-2 cluster "PHOENIX"
System Phase B operational since Nov 2008.
 CPUs: total of ~1600 kSI2k; SUN SB8000P blade centres; AMD Opteron 2.6 GHz CPUs (quad-core)
 Storage: 27 X systems, net capacity of 510 TB

Christoph Grab, ETH 9 Swiss Tier-2 usage (6/05-2/09)
Incremental CPU usage over the last 4 years: … hours = 512 CPU-years.
The Tier-2 is up and has been in stable operation for ~4 years, with continuous contributions of resources to the experiments.
Our Tier-2 size is in line with other Tier-2s (e.g. London T2).
[Chart: cumulative CPU usage, Phases A and B marked; VOs CMS, ATLAS, LHCb; site CSCS-LCG2]
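The number of CPU-hours quoted on this slide was lost in transcription, but the stated 512 CPU-years can be converted back; a minimal sketch, assuming 1 year = 365.25 days of 24 hours:

```python
# Convert the quoted integrated usage from CPU-years to CPU-hours.
HOURS_PER_YEAR = 365.25 * 24          # = 8766 hours per year (assumption)

cpu_years = 512                        # from the slide
cpu_hours = cpu_years * HOURS_PER_YEAR
print(f"{cpu_years} CPU-years ~= {cpu_hours / 1e6:.1f} million CPU-hours")   # ~4.5 million
```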

Christoph Grab, ETH 10 Normalised CPU time per month (3/07-2/09)
Shares between VOs vary over time (production, challenges, …).
Spare cycles are given to other VOs (e.g. H1, theory (CERN), …).
[Chart: normalised CPU time per month by VO: CMS, ATLAS, LHCb, H1, others; site CSCS-LCG2]

Christoph Grab, ETH 11 Shares of normalised CPU per VO (3/05-2/09)
Shares between VOs are overall reasonably balanced.
[Charts: CPU shares per VO (CMS, ATLAS, LHCb, H1, others) and reliability for site CSCS-LCG2]

Christoph Grab, ETH 12 Swiss Tier-2: Facts and Figures (2)
Manpower for the Tier-2:
 Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons)
 Support of experiment specifics by scientists of the experiments; one contact person per experiment → in total ~2 FTE.
Financing:
 Hardware financed mainly through SNF/FORCE (~90%), with some contributions by the Universities + ETH + PSI.
 Operations and infrastructure provided by CSCS (ETHZ); additional support physicists provided by the institutes.
Network traffic:
 Routing via SWITCH: two redundant lines to CERN and Europe.
 Transfer rates reached up to 10 TB/day from FZK (and CERN).
 FZK (Karlsruhe) is the associated Tier-1 for Switzerland.
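To put the quoted peak of 10 TB/day from FZK in the context of the network links, a small back-of-the-envelope conversion to a sustained bit rate (assuming decimal units, 1 TB = 10^12 bytes, spread evenly over 24 hours):

```python
# Convert a daily transfer volume into an average sustained rate.
tb_per_day = 10                            # peak daily volume from the slide
bits_per_day = tb_per_day * 1e12 * 8       # bits transferred per day (decimal TB)
seconds_per_day = 24 * 3600

gbps = bits_per_day / seconds_per_day / 1e9
print(f"{tb_per_day} TB/day ~= {gbps:.2f} Gbit/s sustained")    # ~0.93 Gbit/s
```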

Christoph Grab, ETH 13 Swiss Tier-2: Facts and Figures (3)
Financing: hardware and service, no manpower; R&D financed by the institutes in 2004.
Total financial contributions (…) for the incremental setup of the Tier-2 cluster, hardware only:
 by Universities + ETH + EPFL + PSI → ~200 kCHF
 by federal funding (FORCE/SNF) → 2.4 MCHF
Planned investments:
 in 2009: last phase, ~1.3 MCHF
 from 2010 onwards: rolling replacements, ~700 kCHF/year
 Total investment up to Q1/2009 of ~2.6 MCHF; annual recurring costs expected (>2009) ~700 kCHF.
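A quick consistency check of the funding figures above (the ~200 kCHF from the institutes and the ~2.4 MCHF of federal funding) against the stated ~2.6 MCHF total and the ~90% federal share mentioned on the previous slide; a minimal sketch, not an accounting statement:

```python
# Sum the two quoted contributions and compute the federal share.
universities = 200e3        # Universities + ETH + EPFL + PSI, ~200 kCHF
federal = 2.4e6             # FORCE/SNF federal funding, ~2.4 MCHF

total = universities + federal
print(f"total hardware investment up to Q1/2009: ~{total / 1e6:.1f} MCHF")   # ~2.6 MCHF
print(f"federal share: ~{federal / total:.0%}")                               # ~92%, consistent with '~90%'
```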

Christoph Grab, ETH 14 Swiss Tier-2: Facts and Figures (2)
Manpower for the Tier-2:
 Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons)
 Support of experiment specifics by scientists of the experiments; one contact person per experiment → in total ~2 FTE.
Financing (HW and service, no manpower):
 Hardware financed mainly through SNF/FORCE (~90%), with some contributions by the Universities + ETH + PSI.
 Operations and infrastructure provided by CSCS (ETHZ).
Network traffic:
 Routing via SWITCH: two redundant lines from CSCS to CERN and Europe (SWITCH = Swiss academic network provider).
 FZK (Karlsruhe) is the associated Tier-1 for Switzerland.
 Transfer rates reached up to 10 TB/day from FZK (and CERN).

Christoph Grab, ETH 15 Swiss Network Topology
[Map: SWITCHlan backbone, dark-fibre topology (as of October …); … Gbps links; T0 at CERN, T2 at CSCS, T1 at FZK]

Christoph Grab, ETH 16 Status of the Swiss Tier-3 Centres

Christoph Grab, ETH 17 Swiss Tier-3 Efforts
ATLAS: operates the Swiss ATLAS Grid, a federation of clusters:
 Bern: uses the local HEP cluster + shares university resources
 Geneva: operates a local cluster + T2
CMS: ETHZ + PSI + UZH run a combined Tier-3, located at and operated by PSI IT.
LHCb:
 EPFL: operates a new, large local cluster
 UZH: uses the local HEP cluster + shares university resources
Substantial progress seen over the last year for all three experiments.
Close national collaboration between the Tiers:
 Tier-3 contacts are ALSO the experiments' site contacts for the CH Tier-2.
 Close contacts to the Tier-1 at FZK.

Christoph Grab, ETH 18 Swiss Network and Tiers Landscape
[Map: SWITCHlan backbone, dark-fibre topology (as of October …); … Gbps links; T0 at CERN, T2 at CSCS, T1 at FZK; CMS Tier-3, ATLAS Tier-3 GE, ATLAS Tier-3 BE, LHCb Tier-3 EPFL, LHCb Tier-3 UZH]

Christoph Grab, ETH 20 Summary: Swiss Tier-3 Efforts (Q1/09)
Status per site (number of cores, CPU in kSI2k, storage in TB):
 ATLAS (BE, GE): 188 cores listed for GE; BE: GRID usage since 2005 for ATLAS production; GE: identical SW environment to CERN, direct line to CERN.
 CMS (ETHZ, PSI, UZH): 72 cores; operates a GRID storage element and user interface to enable direct GRID access.
 LHCb (EPFL, UZH): 464 cores (EPFL) + shared resources (UZH); EPFL is a DIRAC pilot site with machines identical to those in the pit; UZH: MC production on shared resources.
 Total Tier-3: cf. Tier-2: 1600 kSI2k, 520 TB.
Tier-3 capacities: similar size in CPU to the Tier-2, and ~50% of the disk.
Substantial investment of resources for MC production + local analysis. (… more details in the backup slides.)
Note: CPU numbers are estimates; upgrades in progress.

Christoph Grab, ETH 21 Swiss non-LHC GRID Efforts (1)
Physics community:
 Theory: some use idle Tier-2 / Tier-3 resources (CERN, …)
 HEP neutrino community: own clusters, or CERN lxplus …
 PSI: synergies of Tier-3 know-how for others: ESRFUP project (collaboration with ESRF, DESY and SOLEIL for …)
 Others use their own resources (smaller clusters, …)
Several other Grid projects exist in the Swiss academic sector:
 EU projects: EGEE-II, KnowARC, DILIGENT, CoreGrid, GridChem, …
 International projects: WLCG, NorduGRID, SEPAC, PRAGMA, …
 National projects: Swiss Bio Grid, AAA/SWITCH, …
 Various local Grid activities (infrastructure, development, …): Condor campus grids, local university clusters, …

Christoph Grab, ETH 22 Swiss non-LHC GRID Efforts (2)
SWING, the Swiss national GRID association: formed by ETHZ, EPFL, the cantonal universities, the universities of applied sciences, and by CSCS, SWITCH, PSI, …
 Provides a platform for interdisciplinary collaboration to leverage the Swiss Grid activities.
 Represents the interests of the national Grid community towards other national and international bodies → aims to become the NGI within EGI.
Activities are organised in working groups; HEP provides strong input, e.g.:
 the ATLAS GRID is organised in a SWING working group
 strong involvement of CSCS and SWITCH
(see …)

Christoph Grab, ETH 23 Summary – Swiss Activities
Common operation of ONE single Swiss Tier-2:
 Reliably operates and delivers the Swiss pledges to the LHC experiments in terms of computing resources since Q2/2005.
 Growth in size as planned; final size to be reached by ~end 2009.
 Compares well in size with other Tier-2s.
Tier-3 centres strongly complement the Tier-2:
 They operate in close collaboration and profit from know-how transfer.
 The overall size of all Tier-3s is about 100% of the Tier-2 in CPU and 50% in disk.
HEP is (still) the majority community in GRID activity in CH.
We are prepared for PHYSICS!

Christoph Grab, ETH 24 Thanks to Tier-2 / Tier-3 Personnel
S. Gadomski, A. Clark (UNI GE); S. Haug, H.P. Beck (UNI Bern); C. Grab (ETHZ) [chair CCB]; D. Feichtinger (PSI); Z. Chen, U. Langenegger (ETHZ); R. Bernet (UNIZH); P. Szczypka, J. Van Hunen (EPFL); P. Kunszt (CSCS); F. Georgatos, J. Temple, S. Maffioletti, R. Murri (CSCS); and many more …

Christoph Grab, ETH 25 Optional slides

Christoph Grab, ETH 26 CHIPP Computing Board
Coordinates the Tier-2 activities; representatives of all institutions and experiments:
A. Clark, S. Gadomski (UNI GE); H.P. Beck, S. Haug (UNI Bern); C. Grab (ETHZ), chair CCB; D. Feichtinger (PSI), vice-chair CCB; U. Langenegger (ETHZ); R. Bernet (UNIZH); J. Van Hunen (EPFL); P. Kunszt (CSCS).

Christoph Grab, ETH 27 The Swiss ATLAS Grid
The Swiss ATLAS GRID federation is based on CSCS (T2) together with the T3s in Bern and Geneva.
Total resources in 2009: ~800 cores and ~400 TB.

Christoph Grab, ETH 28 ATLAS Tier-3 at U. Bern
Hardware in production:
 In the local cluster: 11 servers → ~30 worker CPU cores, ~30 TB disk storage.
 In the shared university cluster: ~300 worker CPU cores (in 2009).
Upgrade plans (Q4 2009):
 ~100 worker cores in the local cluster.
 Increased share on the shared cluster.
Usage:
 Grid site since 2005 (ATLAS production).
 Local resource for LHEP's analyses and simulations.
S. Haug  ~30% / 6% (CPU / disk) of the Tier-2 size.

Christoph Grab, ETH 29 ATLAS Tier-3 at U. Geneva
Hardware in production:
 61 computers: 53 workers, 5 file servers, 3 service nodes.
 188 CPU cores in the workers.
 44 TB of disk storage.
Upgrade plans: grid Storage Element with 105 TB (Q1 2009).
Advantages of the Geneva Tier-3:
 Environment like at CERN, latest ATLAS software via AFS.
 Direct line to CERN (1 Gbps).
 Popular with ATLAS physicists (~60 users).
S. Gadomski  ~20% / 9% of the Tier-2 size.

Christoph Grab, ETH 30 CMS: common Tier-3 at PSI
Common CMS Tier-3 for the ETH, PSI and UZH groups, in operation at PSI since Q4/…; … Gbps connection PSI  ZH.
Upgrade in 2009 by a factor 2 planned (+2 more X4500).
[Table: CPU (kSI2k) and disk (TB) per year]
~15% / 20% of the Tier-2 size; ~25 users now, growing.
Operates a GRID storage element and user interface to enable users direct GRID access; local production jobs.
D. Feichtinger

Christoph Grab, ETH 31 LHCb: Tier-3 at EPFL
Hardware and software:
 Machines identical to those in the LHCb pit.
 58 worker nodes x 8 cores (~840 kSI2k).
 36 TB of storage.
 Uses SLC4 binaries of the LHCb software and DIRAC development builds.
Current status and operation:
 EPFL is one of the pilot DIRAC sites.
 Custom DIRAC interface for batch access.
 Active development to streamline GRID usage.
 Aim to run official LHCb MC production.
P. Szczypka  ~50% / 7% of the Tier-2 size.
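A small cross-check of the EPFL numbers quoted above: 58 worker nodes with 8 cores each give the 464 cores that appear in the Tier-3 summary table, and the ~840 kSI2k imply a per-core rating of roughly 1.8 kSI2k; a minimal sketch:

```python
# Cross-check the EPFL Tier-3 core count and per-core rating.
nodes = 58                  # worker nodes (from the slide)
cores_per_node = 8          # cores per node (from the slide)

cores = nodes * cores_per_node
print(f"worker cores: {cores}")                                   # 464, matching the summary table

power_ksi2k = 840.0                                               # ~840 kSI2k (from the slide)
print(f"per-core rating: ~{power_ksi2k / cores:.1f} kSI2k/core")  # ~1.8 kSI2k/core
```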

Christoph Grab, ETH 32 LHCb: Tier-3 at U. Zurich
Hardware in production:
 Zurich local HEP cluster: small Intel cluster for local LHCb jobs; CPU: 125 kSI2k, disk: ~15 TB.
 Shared Zurich "Matterhorn" cluster at the IT department of UZH: only used for Monte Carlo production (~25 kSI2k); replacement in Q3/2009 in progress.
Usage: local analysis resource; Monte Carlo production for LHCb.
~10% / 5% of the Tier-2 size. By R. Bernet.

Christoph Grab, ETH 34 non-LHC GRID Efforts
 KnowARC: "Grid-enabled Know-how Sharing Technology Based on ARC Services and Open Standards"; next-generation Grid middleware based on NorduGrid's ARC.
 CoreGRID: European Network of Excellence (NoE) aiming at strengthening and advancing scientific and technological excellence in the area of Grid and peer-to-peer technologies.
 GridChem: the "Computational Chemistry Grid" (CCG), a virtual organisation that provides access to high-performance computing resources for computational chemistry (mainly US).
 DILIGENT: Digital Library Infrastructure on Grid Enabled Technology (6th FP).
 SEPAC: Southern European Partnership for Advanced Computing grid project.
 PRAGMA: Pacific Rim Application and Grid Middleware Assembly.