Status of the NL-T1

BiG Grid – the Dutch e-science grid
Realising an operational ICT infrastructure at the national level for scientific research (e.g. High Energy Physics, Life Sciences and others).
The project includes hardware, operations and support.
Project time: 2007 – 2011
Project budget: 29 M€
– Hardware and operations (incl. people): 16 M€ (lion's share for HEP)
4 central facility sites (NIKHEF, SARA, Groningen, Eindhoven)
12 small clusters for Life Sciences
The wLCG Tier-1 is run as a service

BiG Grid all hands

Tier-1 by BiG Grid (history)
The Dutch Tier-1 (NL-T1) is run as a BiG Grid service by the operational partners SARA and NIKHEF.
The activity was initiated by NIKHEF (involved in EDG and EGEE) and SARA (involved in EDG and EGEE).
At that point a two-site setup was chosen:
– Nikhef: Compute and Disk
– SARA: Compute, Disk, Mass Storage, Database and LHCOPN networking
No real Tier-2 in the Netherlands and no direct support for Tier-2s

Tier-1 people
The NL-T1 operations team:
– Maurice Bouwhuis – NL-T1 manager (group leader SARA, wLCG-MB)
– Jeff Templon – NL-T1 manager-alt (group leader Nikhef, wLCG-MB, GDB)
– Ron Trompert (Grid services, Front End Storage, EGEE-ROC manager, head of ops SARA)
– Ronald Starink (head of ops Nikhef)
– Ramon Batiaans (Grid compute and services)
– Paco Bernabe Pellicer (Grid ops)
– David Groep (Grid ops, backup MB)
– Maarten van Ingen (Grid services and Grid compute)
– Hanno Pet (LHC networking)
– Jurriaan Saathof (LHC networking and Mass Storage)
– Mark van de Sanden (head Mass Storage)
– Tristan Suerink (Grid ops)
– Luuk Uljee (Grid services and Grid compute)
– Alexander Verkooijen (3DB)
– Rob van der Wal (3DB)
– Onno Zweers (Grid Front End Storage and Services)

Tier-2 support
Tier-2s connected to NL-T1, none in the Netherlands (Israel, Russia, Turkey, Ireland, Northern UK as a guest)
NL-T1 will provide FTS channels
NL-T1 tries to provide answers to their questions
NL-T1 cannot provide integrated ticket handling for these Tier-2s (a ticket assigned to NL-T1 about a problem at a Russian Tier-2 is bounced)
Hurng acts as liaison between NL-T1 and the ATLAS Tier-2s
Asked ATLAS last year: this is enough

T1 hardware resources for ATLAS (S = SARA, N = Nikhef):

ATLAS Resources      Site   December 2009    March 2010
Computing            S      14k HEPSPEC
                     N
Front End Storage    S      1200 TB          2000 TB
                     N      1000 TB
Tape Storage         S      800 TB           2100 TB (after March)
Bandwidth to tape    S      450 MB/s

ATLAS is allocated 70% of the total HEP resource (see the illustration below).
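The 70% allocation can be turned into concrete ATLAS shares directly. A minimal Python sketch, assuming the December 2009 figures in the table are the HEP totals (the resource labels are illustrative, not official pledge categories):

```python
# Illustrative only: ATLAS share = 70% of the HEP totals listed above.
ATLAS_FRACTION = 0.70

hep_totals_dec_2009 = {
    "computing (HEPSPEC)": 14_000,        # SARA compute
    "front-end disk (TB)": 1200 + 1000,   # SARA + Nikhef disk
    "tape (TB)": 800,                     # SARA tape
    "bandwidth to tape (MB/s)": 450,
}

for resource, total in hep_totals_dec_2009.items():
    print(f"ATLAS share of {resource}: {total * ATLAS_FRACTION:,.0f}")
```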

Architecture overview

Technical issues over the past year
– Lack of storage and compute resources [fixed]
– Network bandwidth between compute and storage [fixed]
– Bandwidth to the Mass Storage tape component [half fixed, ongoing]
– Monitoring infrastructure [ongoing]

Mass Storage Infrastructure
– Disk cache: 22 TB
– Tape drives:
 – 12 T10k drives
 – 8 9940B drives
– 4 Data Mover nodes
– Planned bandwidth to and from tape: 1 GB/s (see the estimate below)
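As a rough cross-check of the 1 GB/s target, the aggregate native drive rate can be estimated. The per-drive rates below are assumed nominal figures for T10000 and 9940B drives, not numbers from the slide:

```python
# Rough feasibility check for the 1 GB/s tape bandwidth target.
# Drive rates are assumed nominal native speeds (not from the slide):
# ~120 MB/s per T10000 drive, ~30 MB/s per 9940B drive.
T10K_DRIVES, T10K_RATE_MBPS = 12, 120
D9940B_DRIVES, D9940B_RATE_MBPS = 8, 30

aggregate_mbps = T10K_DRIVES * T10K_RATE_MBPS + D9940B_DRIVES * D9940B_RATE_MBPS
print(f"Aggregate peak drive rate: {aggregate_mbps} MB/s "
      f"({aggregate_mbps / 1000:.2f} GB/s)")
# -> 1680 MB/s peak, so a sustained 1 GB/s spread over 4 Data Mover nodes
#    (~250 MB/s per mover) is plausible on paper.
```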

Plans for 2010
– Mass Storage upgrade:
 – Upgrade DMF to a distributed DMF environment
  – Data Movers (CXFS clients) read/write to tape directly
  – Upgrade the DMF database and the CXFS metadata server to new hardware
 – Extend the number of tape drives
 – Extend the number of data/tape movers (if needed)
 – Configure a disk cache for small files
 – Extend the Fibre Channel bandwidth between Amsterdam and Almere for tape access
– Extend monitoring (Ganglia, Nagios); a small example of pushing a custom metric follows below
– Continuous small improvements for reliability and redundancy
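For the planned monitoring extension, custom site metrics can be fed into Ganglia with the standard gmetric tool. A minimal sketch, assuming gmetric is installed on the node; the metric name tape_queue_depth and the placeholder value are hypothetical, not an existing NL-T1 check:

```python
#!/usr/bin/env python
# Minimal sketch: publish a custom metric to Ganglia via the gmetric CLI.
import subprocess

def publish_gmetric(name, value, metric_type="uint32", units=""):
    """Push a single value into Ganglia using the standard gmetric tool."""
    subprocess.check_call([
        "gmetric",
        "--name", name,
        "--value", str(value),
        "--type", metric_type,
        "--units", units,
    ])

if __name__ == "__main__":
    # Example: report how many requests are waiting for a tape drive.
    pending_tape_requests = 7  # placeholder; a real check would query DMF
    publish_gmetric("tape_queue_depth", pending_tape_requests, units="requests")
```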

Grid funding after 2011
Grid activities are currently funded on a project basis.
An active project aims to secure structural funding for these activities (among them the T1).
The next step in this process is in 2010.

Questions?