NDGF: The Distributed Tier-1 Site

Overview
- Reason
- Design
- Reality
- Experience

Reason
- No Nordic country is big enough to host a Tier-1
- All the Nordic countries together are big enough to warrant a Tier-1
- 6% of ATLAS and 9% of ALICE Tier-1 resources
- Currently distributed among seven HPC sites, from Copenhagen in the south to Umeå in the north
- Plus Slovenian Tier-2 disk, installed technically as NDGF Tier-1 disk

Design
- Computing on ARC
  - It's a grid, so jobs simply run wherever there is capacity (see the job-description sketch below)
  - Mostly on shared HPC resources
  - The ARC resources form the Tier-1 and Tier-2 sites in accounting and in the BDII
- Storage in distributed dCache
  - Took a bit of development effort to make it appear as one endpoint
  - Central head nodes, pools out at the sites
- Ops/Devel staff mix
  - Fix software problems in the software projects themselves instead of creating procedures to handle them in operations
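To make the ARC model concrete: jobs are described in a small job-description file rather than submitted to a specific cluster. A minimal sketch in xRSL, with the script name, file names, and resource requests all hypothetical:

    &(executable = "run_analysis.sh")
     (jobName = "ndgf-example")
     (stdout = "stdout.txt")
     (stderr = "stderr.txt")
     (cpuTime = "120 minutes")
     (memory = "2000")
     (inputFiles = ("run_analysis.sh" ""))
     (outputFiles = ("results.tar.gz" ""))

Submitted with something like arcsub -c <some CE endpoint> job.xrsl, ARC stages the input files to the chosen site, runs the job, and stages the outputs back afterwards, which is what lets the work land on whichever shared HPC resource has capacity.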

Reality - ARC

Reality - Networking
- The clouds point to where the dCache pools are

Reality - Storage
- Pools at 7 different sites
- Namespace, admin, and doors run centrally
- Graphs show a busy ALICE day and a calm ATLAS day
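For illustration only, a rough sketch of how this split is expressed in dCache layout files; the host, domain, and pool names here are made up and the real NDGF configuration is not shown in the slides. The central head node runs the namespace, admin, and door services:

    # layout on the central head node, e.g. /etc/dcache/layouts/head.conf
    [dCacheDomain]
    [dCacheDomain/poolmanager]
    [dCacheDomain/pnfsmanager]
    [dCacheDomain/admin]
    [doorsDomain]
    [doorsDomain/webdav]
    [doorsDomain/xrootd]

while a node at a remote site runs only pools:

    # layout on a pool node at one of the sites, e.g. /etc/dcache/layouts/site-pool.conf
    [sitePoolsDomain]
    [sitePoolsDomain/pool]
    pool.name = site1_pool_01
    pool.path = /srv/dcache/pool_01

Each remote pool domain is pointed at the central head node (classically via dcache.broker.host in dcache.conf), so pools at all seven sites appear behind a single namespace and one set of doors.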

Experience
- Distributed storage is great for data taking
- Somewhat less so for data availability
- Computing on ARC with data staging makes short interventions on storage transparent
- Distributed funding is at least as challenging as distributed storage and computing
- Professional HPC sites are good at running clusters and storage; professional developers are good at fixing software issues and keeping users and operators happy

Questions?