Status of the NL-T1
BiG Grid – the Dutch e-science grid
Realising an operational ICT infrastructure at the national level for scientific research (e.g. High Energy Physics, Life Sciences and others).
Project includes: hardware, operations and support.
Project time: 2007 – 2011
Project budget: 29 M€
–Hardware and operations (incl. people): 16 M€ (the lion's share for HEP)
4 central facility sites (NIKHEF, SARA, Groningen, Eindhoven)
12 small clusters for Life Sciences
The wLCG Tier-1 is run as a service.
BiG Grid all hands
Tier-1 by BiG Grid (history)
The Dutch Tier-1 (NL-T1) is run as a BiG Grid service by the operational partners SARA and NIKHEF.
The activity was initiated by the PDP-group@NIKHEF and SARA, both involved in EDG and EGEE.
At that point a two-site setup was chosen:
–Nikhef: Compute and Disk
–SARA: Compute, Disk, Mass Storage, Database and LHCOPN networking
No real Tier-2 in the Netherlands and no direct support for Tier-2s.
Tier-1 people
The NL-T1 operations team:
–Maurice Bouwhuis – NL-T1 manager (group leader SARA, wLCG-MB)
–Jeff Templon – NL-T1 manager-alt (group leader Nikhef, wLCG-MB, GDB)
–Ron Trompert (Grid services, Front End Storage, EGEE-ROC manager, head of ops SARA)
–Ronald Starink (head of ops Nikhef)
–Ramon Batiaans (Grid compute and services)
–Paco Bernabe Pellicer (grid ops)
–David Groep (grid ops, backup MB)
–Maarten van Ingen (Grid services and Grid compute)
–Hanno Pet (LHC networking)
–Jurriaan Saathof (LHC networking and Mass Storage)
–Mark van de Sanden (head Mass Storage)
–Tristan Suerink (grid ops)
–Luuk Uljee (Grid services and Grid compute)
–Alexander Verkooijen (3DB)
–Rob van der Wal (3DB)
–Onno Zweers (Grid Front End Storage and services)
Tier-2 support
Tier-2s connected to NL-T1, none in the Netherlands (Israel, Russia, Turkey, Ireland, and Northern UK as a guest).
NL-T1 will provide FTS channels (see the sketch below).
NL-T1 tries to provide answers to their questions.
NL-T1 cannot provide integrated ticket handling for these Tier-2s (a ticket assigned to NL-T1 about a problem at a Russian Tier-2 is bounced).
Hurng acts as liaison between NL-T1 and the ATLAS Tier-2s.
ATLAS was asked last year: this level of support is enough.
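As a rough illustration of what "provide FTS channels" amounts to, the Python sketch below enumerates import and export channels per associated Tier-2. The site names and the SOURCE-DEST naming pattern are placeholders of mine, not the actual NL-T1 channel configuration.

    # Illustrative only: build the list of FTS channel names the NL-T1
    # would manage for its associated Tier-2s. The site names below are
    # hypothetical placeholders; the SOURCE-DEST pattern is an example
    # convention, not the configuration from the slide.

    T1_SITE = "SARA-MATRIX"  # placeholder name for the NL-T1 storage endpoint
    TIER2_SITES = ["T2-ISRAEL", "T2-RUSSIA", "T2-TURKEY", "T2-IRELAND", "T2-NORTHGRID"]

    channels = []
    for t2 in TIER2_SITES:
        channels.append(f"{T1_SITE}-{t2}")  # exports from the Tier-1 to this Tier-2
        channels.append(f"{t2}-{T1_SITE}")  # imports from this Tier-2 to the Tier-1

    print("\n".join(channels))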
T1 hardware resources for ATLAS

  ATLAS resources     Site   December 2009    March 2010
  Computing           S      14k HEPSPEC
                      N
  Front End Storage   S      1200 TB          2000 TB
                      N      1000 TB
  Tape Storage        S      800 TB           2100 TB (after March)
  Bandwidth to tape   S      450 MBps
  (S = SARA, N = Nikhef)

ATLAS is allocated 70% of the total resources for HEP.
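To make the 70% allocation rule concrete, here is a minimal back-of-the-envelope sketch in Python using the December 2009 figures from the table above; treating the share as uniform per resource, and combining the SARA and Nikhef disk figures, are assumptions of mine.

    # Back-of-the-envelope reading of "ATLAS gets 70% of the HEP
    # resources", using the December 2009 column of the table above.
    # The uniform-share assumption is mine; the Nikhef computing figure
    # is not listed in the table.

    ATLAS_SHARE = 0.70

    atlas_dec_2009 = {
        "computing (HEPSPEC, SARA only)": 14_000,
        "front-end disk (TB, SARA + Nikhef)": 1200 + 1000,
        "tape (TB)": 800,
    }

    for resource, atlas_value in atlas_dec_2009.items():
        implied_total_hep = atlas_value / ATLAS_SHARE
        print(f"{resource}: ATLAS {atlas_value:,} -> implied total HEP ~{implied_total_hep:,.0f}")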
Architecture overview
Technical issues over the past year
Lack of storage and Compute resources [fixed]
Network bandwidth between Compute and Storage [fixed]
Bandwidth to the Mass Storage tape component [half fixed, ongoing]
Monitoring infrastructure [ongoing]
Mass Storage Infrastructure
Disk cache: 22 TB
Tape drives:
–12 T10k drives
–8 9940B drives
4 Data Mover nodes
Planned bandwidth to and from tape: 1 GB/s
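As a sanity check of the planned 1 GB/s, a small sketch comparing it against the nominal aggregate drive bandwidth; the per-drive native rates used here (roughly 120 MB/s per T10k and 30 MB/s per 9940B) are assumptions, not figures from the slide.

    # Rough aggregate-bandwidth check for the tape setup listed above.
    # The per-drive native rates are assumptions (about 120 MB/s for a
    # T10000 and 30 MB/s for a 9940B), not figures from the slide.

    drives = {
        "T10k":  {"count": 12, "native_mb_s": 120},  # assumed native rate
        "9940B": {"count": 8,  "native_mb_s": 30},   # assumed native rate
    }

    aggregate_mb_s = sum(d["count"] * d["native_mb_s"] for d in drives.values())
    planned_mb_s = 1000  # planned 1 GB/s to and from tape (from the slide)

    print(f"Nominal aggregate drive bandwidth: {aggregate_mb_s} MB/s")
    print(f"Planned tape bandwidth ({planned_mb_s} MB/s) is "
          f"{100.0 * planned_mb_s / aggregate_mb_s:.0f}% of that nominal aggregate")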
Plans for 2010
Mass Storage upgrade:
–Upgrade DMF to a distributed DMF environment:
  Data Movers (CXFS clients) read/write to tape directly
  Upgrade the DMF database and CXFS metadata server to new hardware
–Extend the number of tape drives
–Extend the number of Data/tape movers (if needed)
–Configure the disk cache for small files
–Extend Fibre Channel bandwidth between Amsterdam and Almere for tape access
Extend monitoring (Ganglia, Nagios) – a check-plugin sketch follows below
Continuous small improvements for reliability and redundancy
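To give a flavour of the Nagios part of "extend monitoring", a minimal sketch of a check plugin in Python following the standard Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN); the path and thresholds are hypothetical, not the NL-T1 configuration.

    #!/usr/bin/env python3
    # Minimal sketch of a Nagios-style check plugin using the standard
    # exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN).
    # The mount point and thresholds are hypothetical examples.

    import os
    import sys

    CACHE_PATH = "/dmf/cache"    # hypothetical mount point of the disk cache
    WARN_FREE_FRACTION = 0.20    # warn below 20% free (example threshold)
    CRIT_FREE_FRACTION = 0.05    # critical below 5% free (example threshold)

    try:
        stat = os.statvfs(CACHE_PATH)
        free_fraction = stat.f_bavail / float(stat.f_blocks)
    except OSError as err:
        print(f"UNKNOWN: cannot stat {CACHE_PATH} ({err})")
        sys.exit(3)

    if free_fraction < CRIT_FREE_FRACTION:
        print(f"CRITICAL: only {100 * free_fraction:.1f}% free on {CACHE_PATH}")
        sys.exit(2)
    elif free_fraction < WARN_FREE_FRACTION:
        print(f"WARNING: {100 * free_fraction:.1f}% free on {CACHE_PATH}")
        sys.exit(1)

    print(f"OK: {100 * free_fraction:.1f}% free on {CACHE_PATH}")
    sys.exit(0)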
Grid funding after 2011
Grid activities are currently funded on a project basis.
An active project aims to ensure structural funding for these activities (among them the T1).
The next step in this process comes in 2010.
Questions?