A Distributed Tier-1 for WLCG
Michael Grønager, PhD, Technical Coordinator, NDGF
CHEP 2007, Victoria, September 3rd, 2007

Overview
Background
Organization / Governance
Tier-1 Services:
– Computing
– Storage
– ATLAS
– ALICE
– Accounting
– Monitoring
Operation

Background
The Nordic countries together have about 25 million people; no single country has more than 10 million
A Nordic Tier-1 therefore needs to utilize hardware at the bigger Nordic compute sites
Strong Nordic grid tradition: NorduGrid / ARC
– Deployed at all Nordic compute centers
– Used heavily also by non-HEP users (>75%)
Need for a pan-Nordic organization for the Tier-1 and possibly other large inter-Nordic e-Science projects

What is the NDGF?
A co-operative Nordic Data and Computing Grid facility
– Nordic production grid, leveraging national grid resources
– Common policy framework for the Nordic production grid
– Joint Nordic planning and coordination
– Operate Nordic storage facility for major projects
– Co-ordinate and host major e-Science projects (i.e., the Nordic WLCG Tier-1)
– Develop grid middleware and services
NDGF
– Funded (2 MEUR/year) by the national research councils of the Nordic countries (NOS-N: DK, SF, N, S)
– Builds on a history of Nordic grid collaboration

NORDUnet A/S
The Nordic Regional Research and Educational Network (RREN)
Owned by the 5 Nordic national RENs
25 years of Nordic network collaboration
Leverages national initiatives
Participates in major international efforts
Represents the Nordic NRENs internationally; gateway to the Nordic area

Organization – CERN related

NDGF Tier-1 Resource Centers
The 7 biggest Nordic compute centers, the dTier-1s, form the NDGF Tier-1
Resources (storage and computing) are scattered
Services can be centralized
Advantages in redundancy
– Especially for 24x7 data taking

NDGF Facility Q3

Tier-1 Services: Networking
Today NDGF is connected directly with a 10 Gbit GEANT fiber to CERN
Inter-Nordic shared 10 Gbit network from NORDUnet
A dedicated 10 Gbit LAN covering all dTier-1 centers next year

Tier-1 Services: Networking / OPN
[Network diagram: CERN / LHC OPN, NORDUnet, NRENs, Örestaden, HPC2N, PDC, NSC, central host(s), national IP networks, national sites, national switches, NDGF AS39590]

Tier-1 Services: Computing
NorduGrid / ARC middleware for computing
Used routinely since 2002 for e.g. ATLAS data challenges
Deployed at all the dTier-1 sites
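As a rough illustration of how work typically reaches ARC-based resources, the sketch below builds a minimal xRSL job description and hands it to the NorduGrid command-line client from Python. The executable name and job name are made-up placeholders, and the use of the legacy ngsub client (rather than the newer arcsub) is an assumption, not something stated on the slides.

import subprocess

# Minimal xRSL job description (NorduGrid's job description language).
# The executable and job name are hypothetical placeholders.
XRSL = (
    '&(executable="run_analysis.sh")'
    '(jobName="ndgf-demo-job")'
    '(stdout="stdout.txt")'
    '(stderr="stderr.txt")'
    '(cputime="120")'
)

def submit_arc_job(xrsl: str) -> str:
    """Submit an xRSL job with the NorduGrid client and return the job ID it prints.

    Assumes the legacy 'ngsub' client and a valid grid proxy are available on
    the submitting host; newer ARC releases ship 'arcsub' with a similar workflow.
    """
    result = subprocess.run(["ngsub", xrsl], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_arc_job(XRSL))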

Storage
dCache installation:
Admin and door nodes at the GEANT endpoint
Pools at the sites
Very close collaboration with DESY to ensure dCache is suited also for distributed use

NDGF Storage

Storage
Central installation:
– 7 Dell servers (dual-core 2 GHz Xeon, 4 GB RAM, 2 x 73 GB 15k SAS disks, mirrored; one server as spare)
– 2 x Dell PowerVault MD-1000 direct-attached storage enclosures with 7 x 143 GB 15k SAS in RAID-10 each
Running:
– 2 PostgreSQL instances for PNFS in HA mode (master-slave), DB on the MD-1000
– 1 PNFS manager and pool manager
– 1 SRM, location manager, statistics, billing, etc.
– 1 GridFTP and xrootd door on one machine
– 1 monitoring and intrusion detection on one machine
This is our only hardware!
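To make the role of these central head nodes concrete, here is a minimal monitoring-style sketch (not from the slides) that checks whether the conventional service ports of a dCache head node answer: 2811 for the GridFTP door and 8443 for SRM. The hostname is a placeholder for whatever the real NDGF endpoint is.

import socket

# Hypothetical head-node hostname; the real NDGF endpoints are not given here.
HEAD_NODE = "dcache-head.example.org"

# Conventional ports: 2811 = GridFTP door, 8443 = SRM.
SERVICES = {"gridftp door": 2811, "srm": 8443}

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        status = "up" if port_open(HEAD_NODE, port) else "DOWN"
        print(f"{name:>12} ({HEAD_NODE}:{port}): {status}")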

Commercial!
See the talk on dCache and GridFTP2: "A Distributed Storage System with dCache", Carson Hall C at 16:50

FTS
Running FTS 2.0
Patched version of Globus supporting GridFTP2
Located in Linköping:
– 1 server for FTS
– 1 server for the Oracle database
Channels:
– STAR-NDGF
– others...
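For orientation, the sketch below shows roughly how a single transfer would be handed to such an FTS server using the command-line clients of that era, wrapped in Python. The endpoint URL and SURLs are made-up placeholders, and the exact client options should be checked against the installed glite-transfer-* tools; this is not the production setup.

import subprocess

# Placeholder values; the real FTS endpoint and SURLs are not given on the slides.
FTS_ENDPOINT = "https://fts.example.org:8443/glite-data-transfer-fts/services/FileTransfer"
SOURCE = "srm://source-se.example.org/pnfs/example.org/data/file1"
DEST = "srm://dest-se.example.org/pnfs/example.org/data/file1"

def submit_transfer() -> str:
    """Submit one file transfer and return the FTS job ID printed by the client.

    Assumes the glite FTS command-line clients and a valid grid proxy are available.
    """
    out = subprocess.run(
        ["glite-transfer-submit", "-s", FTS_ENDPOINT, SOURCE, DEST],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def transfer_status(job_id: str) -> str:
    """Query the state (Submitted, Active, Done, Failed, ...) of an FTS job."""
    out = subprocess.run(
        ["glite-transfer-status", "-s", FTS_ENDPOINT, job_id],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    job = submit_transfer()
    print(job, transfer_status(job))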

3D
Minimal setup located in Helsinki:
– One dual-core, dual-Xeon box with 4 GB of memory: no RAC, just one server
– High-availability SAN storage
– A bit more than one TB of space allocated for data
– Upgrade to a 3-5 node RAC in 2008

ATLAS VO Services
ATLAS VOBox (ARC flavor) services fully implemented
– ARC uses Globus RLS
– US-ATLAS LRC view on the MySQL database
– Enables ATLAS subscription from outside to data stored on old SEs
– and internally through RLS
See poster #74 on Wednesday

ALICE VO Services
ALICE VOBoxes:
– Jyväskylä
– CSC
– NSC
– LUNARC
– DCSC/KU
– UiB
– using submission via ARC
– Örestaden – xrootd storage frontend
See poster #75 on Wednesday

Service Availability Monitoring
SAM sensors:
– BDII
– SE
– SRM
– FTS
– ARC-CE – this is the only sensor different from other sites
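As a rough illustration of what a custom ARC-CE sensor has to do, the sketch below probes the NorduGrid information system, which publishes cluster state over LDAP (conventionally on port 2135), and reports whether the CE answers. The hostname is a placeholder and this is not the actual SAM sensor code; a real sensor would also test job submission and the published queue attributes.

import subprocess

# Hypothetical ARC CE front-end; real NDGF hostnames are not listed here.
CE_HOST = "arc-ce.example.org"
INFOSYS_PORT = 2135  # conventional port of the ARC/MDS LDAP information system
LDAP_BASE = "mds-vo-name=local,o=grid"  # classic NorduGrid GRIS base DN

def arc_ce_responds(host: str) -> bool:
    """Return True if the CE's LDAP information system answers a simple base query.

    Uses the OpenLDAP 'ldapsearch' client via subprocess.
    """
    cmd = [
        "ldapsearch", "-x", "-LLL",
        "-h", host, "-p", str(INFOSYS_PORT),
        "-b", LDAP_BASE, "-s", "base", "objectClass",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

if __name__ == "__main__":
    print("ARC-CE", CE_HOST, "OK" if arc_ce_responds(CE_HOST) else "FAILED")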

Accounting
Sites report using SGAS (the SweGrid Accounting System)
SGAS records are translated to APEL records and injected into the APEL DB
Functional from September 2007 – some sites already accounted
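SGAS job records follow the OGF Usage Record XML format, so the translation step amounts to mapping XML elements onto the flat attributes APEL expects. The sketch below shows the general idea with a tiny example record; the APEL-side field names and the site name are illustrative assumptions, and this is not the production translator.

import xml.etree.ElementTree as ET

# OGF Usage Record namespace used by SGAS job records.
UR_NS = {"ur": "http://schema.ogf.org/urf/2003/09/urf"}

# A tiny example record; a real SGAS record carries many more elements.
SAMPLE_UR = """<ur:JobUsageRecord xmlns:ur="http://schema.ogf.org/urf/2003/09/urf">
  <ur:GlobalJobId>gsiftp://arc-ce.example.org/jobs/0001</ur:GlobalJobId>
  <ur:WallDuration>PT3600S</ur:WallDuration>
  <ur:CpuDuration>PT3400S</ur:CpuDuration>
  <ur:MachineName>arc-ce.example.org</ur:MachineName>
</ur:JobUsageRecord>"""

def seconds(iso_duration: str) -> int:
    """Very small helper for durations like 'PT3600S' (assumes seconds only)."""
    return int(iso_duration.strip("PTS"))

def to_apel_like(ur_xml: str) -> dict:
    """Map a usage record onto a flat, APEL-style dictionary (field names illustrative)."""
    root = ET.fromstring(ur_xml)
    return {
        "Site": "NDGF-T1",  # placeholder site name
        "LocalJobID": root.findtext("ur:GlobalJobId", default="", namespaces=UR_NS),
        "WallDuration": seconds(root.findtext("ur:WallDuration", default="PT0S", namespaces=UR_NS)),
        "CpuDuration": seconds(root.findtext("ur:CpuDuration", default="PT0S", namespaces=UR_NS)),
        "MachineName": root.findtext("ur:MachineName", default="", namespaces=UR_NS),
    }

if __name__ == "__main__":
    print(to_apel_like(SAMPLE_UR))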

Operation

Conclusions
We have built a distributed Tier-1
– dCache for storage
– ARC for computing
Interoperable with:
– ALICE
– ATLAS
– ARC monitoring and accounting
– LCG monitoring and accounting
It works!