Slide 1: Griding the Nordic Supercomputer Infrastructure
The importance of national and regional Grids
Research Infrastructures, e-Infrastructure: Grid initiatives, 27 May 2005
Anders Ynnerman, Director, Swedish National Infrastructure for Computing
Slide 2: Outline of presentation
- The Nordic HPC landscape
- SweGrid testbed for production
- Benefits of a national Grid effort
- NorduGrid - ARC
- Nordic Data Grid Facility
Slide 3: Denmark
HPC infrastructure:
- HPC program initiated in 2001: the Danish Center for Scientific Computing (DCSC)
- A distributed program; the location of facilities is based on scientific evaluation of the participating groups, not on computer centers or infrastructure
- Sites: Århus, Odense, DTU, Copenhagen University
- Budget: 2.1 MEuros (2001), 1.6 MEuros (2002), 1.5 MEuros (2003), 1.4 MEuros (2004)
Grid initiatives:
- Copenhagen University is a partner in NorduGrid
- Danish Center for Grid Computing (DCGC): a virtual center located at 5 sites, with its main location at the Niels Bohr Institute, Copenhagen University; 0.75 MEuros/year
- Operations and support of a Danish Grid
- Research agenda with 10 senior researchers and 5 PhD students
- Strong ties to NorduGrid and the Nordic Data Grid Facility
- Participation in EGEE through NBI
Slide 4: Norway
HPC infrastructure:
- NOTUR, the Norwegian High-Performance Computational Infrastructure
- Sites: Trondheim, Oslo, Bergen, Tromsø
- Budget: 168 MEuros (2000-2003); 50% from research councils, Statoil and DNMI; in-kind contributions
- A similar new program is being initiated and placed under the NREN (UNINETT)
- Metacenter goal: to establish uniform and as seamless as possible access to HPC resources
Grid initiatives:
- Parallab is contributing to the EGEE security team
- Bergen and Oslo are partners in NorduGrid
- The NORGRID project has been initiated under the umbrella of NOTUR
Slide 5: Finland
HPC infrastructure:
- The Center for Scientific Computing (CSC) is the main HPC center in Finland and the largest in Scandinavia
- Runs services for academic users all over Finland and for the Finnish Meteorological Institute (FMI)
- Budget comes directly from the Ministry of Education
- Smaller university HPC systems (clusters) are beginning to appear
Grid initiatives:
- CSC is contributing resources to NorduGrid
- CSC is a partner in DEISA
- The Helsinki Institute of Physics runs several Grid projects and contributes to the EGEE security team
- A Finnish Grid dedicated to condensed matter physics, an example of a topical Grid
Slide 6: Sweden
HPC infrastructure:
- The Swedish National Infrastructure for Computing (SNIC) was formed during 2003
- 6 participating centers: Umeå, Uppsala, Stockholm, Linköping, Göteborg, Lund
- National resource allocation (SNAC)
- Budget: 4.5 MEuros/year from the government and 4.0 MEuros/year from private foundations
Grid initiatives:
- Lund and Uppsala are partners in NorduGrid
- The large resource contributions to NorduGrid are SNIC clusters at Umeå and Linköping
- SweGrid production Grid: 600 CPUs over 6 sites, 120 TB storage
- SNIC is hosting the EGEE Nordic Regional Operations Center
- Stockholm (PDC) is coordinating the EGEE security activity
- PDC is a partner in the European Grid Support Center
Slide 7: Grids in the context of HPC centers
[Chart: computing resources ordered along two axes, price/performance and 1/(number of users): throughput computing on Grids, loosely coupled workstations, clusters with Ethernet, clusters with high-speed interconnect, large shared-memory systems, and parallel vector processors]
Slide 8: SweGrid production testbed
- The first step towards "Gridification" of the HPC centers
- Initiative from all HPC centers in Sweden, IT researchers wanting to do research on Grid technology, and users
- User communities: life science, earth sciences, space and astrophysics, high-energy physics
- PC clusters with large storage capacity, built for Grid production
- Participation in international collaborations: LCG, EGEE, NorduGrid, …
Slide 9: SweGrid subprojects
- Grid research (0.25 MEuro/year): portals, databases, security; ties to the Globus Alliance and EGEE security
- Technical deployment and implementation (0.25 MEuro/year): 6 technicians forming the core team for the Northern EGEE ROC
- Hardware (2.5 MEuro): 6 PC clusters with 600 CPUs for throughput computing
Slide 10: SweGrid production testbed hardware
- Total budget: 3.6 MEuro
- 6 Grid nodes with 600 CPUs in total
- IA-32, 1 processor per server: 2.8 GHz Intel P4, 2 GB memory, 875P chipset with 800 MHz FSB and dual memory buses, Gigabit Ethernet
- 12 TB temporary storage in total (14 x 146 GB 10000 rpm disks per node on FibreChannel for bandwidth; 6 x 14 x 146 GB is roughly 12 TB)
- 200 TB nearline storage: 140 TB disk, 270 TB tape
- 1 Gbit/s direct connection to SUNET (10 Gbit/s backbone)
Slide 11: SUNET connectivity
[Diagram: a typical point of presence at a university on the GigaSunet 10 Gbit/s backbone, with a 2.5 Gbit/s uplink, a dedicated 1 Gbit/s connection for SweGrid, and a 10 Gbit/s university LAN]
Slide 12: Persistent storage on SweGrid?
[Diagram: the open issues for persistent storage are size, administration, bandwidth, and availability]
Slide 13: SweGrid status
- All nodes were installed during January 2004, and the resources are already in extensive use
- Local batch queues, plus Grid queues through the NorduGrid middleware ARC (job submission is sketched below); some nodes are also available on LCG
- 60 national users
- 1/3 of SweGrid (200 CPUs) is dedicated to high-energy physics: contributing to ATLAS Data Challenge 2 as a partner in NorduGrid, also supporting LCG (gLite), and investigating compatibility between ARC and LCG
- Forms the core of the Northern EGEE ROC
- Accounting is being introduced (SGAS)
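For readers unfamiliar with ARC, this is roughly what running a job through the Grid queues looked like. A minimal sketch assuming the classic NorduGrid client tools; the file names and values are hypothetical, and option details varied between ARC releases:

    # hello.xrsl: a minimal xRSL job description (hypothetical values)
    &(executable="hello.sh")
     (arguments="SweGrid")
     (inputFiles=("hello.sh" ""))    # empty source: upload the local file
     (stdout="hello.out")
     (cpuTime="10 minutes")

    # Submit, monitor, and retrieve with the classic ARC user interface;
    # the client-side broker picks a suitable cluster automatically
    ngsub -f hello.xrsl
    ngstat -a          # status of all of this user's jobs
    ngget <jobid>      # fetch the output of a finished job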
Slide 14: The first users of SweGrid
Slide 15: Survey: How have you found porting your applications to SweGrid?
Slide 16: Survey: What is your overall impression of using SweGrid resources?
Slide 17: Survey: Do you think all supercomputing resources should be available on a Grid?
[The response charts for slides 14-17 are not reproduced in this transcript]
Slide 18: SweGrid observations
- Global user identity: each SweGrid user must receive a unique X.509 certificate (see the inspection example below), and all centers must agree on a common minimum level of security; this will affect the general security policy of HPC centers
- Unified support organization: all helpdesk activities and other support need to be coordinated between the centers; users cannot (and should not) decide where their jobs run, yet expect the same level of service at all sites
- More bandwidth is needed: moving data between the SweGrid nodes before and after job execution will require continuously increasing bandwidth
- More storage is needed: despite increasing bandwidth, users cannot fetch all data back home; storage for both temporary and permanent data will be needed in close proximity to processor capacity
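The global identity mentioned above is an ordinary X.509 certificate issued by a Grid certificate authority, so it can be inspected with stock OpenSSL. A minimal sketch; the path follows the common Grid convention of keeping user credentials under ~/.globus, which is an assumption here:

    # Show who the certificate identifies, who issued it, and when it expires
    # (~/.globus/usercert.pem is the conventional location, assumed here)
    openssl x509 -in ~/.globus/usercert.pem -noout -subject -issuer -dates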
Slide 19: SweGrid results
- Unified the Swedish supercomputing community
- Raised additional private funding for computing infrastructure
- Spawned national Grid R&D projects
- Created early Grid awareness among users
- Created an interface to EU projects: the EGEE Regional Operations Center co-operated with SweGrid, co-ordination of EGEE security, DEISA participation through PDC at KTH, and co-ordination of the BalticGrid initiative
Slide 20: National projects and services
- Grid resource brokering (Umeå)
- Grid accounting: the SweGrid Accounting System, SGAS (Stockholm, Uppsala, Umeå)
- Semi-automatic Grid interface generation for numerical software libraries (Umeå)
- Grid security research focused on trust models (Stockholm, Umeå, Uppsala)
- Parallel Object Query System for Expensive Computations (POQSEC) on Grids (Uppsala)
- National helpdesk (all SweGrid centers)
- National storage solutions (Linköping, Stockholm, Umeå)
Slide 21: North European EGEE ROC (SA1)
SweGrid forms the core with three sites:
- HPC2N: 100-CPU cluster running LCG 2.4, with an SL3 CE and Debian worker nodes; 10 TB SE; the worker nodes are shared between EGEE and SweGrid (ARC)
- PDC: migration of a 100-CPU SweGrid system to EGEE, eventually a new 884-CPU Xeon EM64T cluster
- NSC: 32-CPU cluster on the pre-production testbed, running SL3 and LCG 2.3 (upgrade the week after the 3rd EGEE Conference)
Other activities:
- SGAS (Grid bank): a module for exporting accounting information from SGAS to GOC accounting has been developed; tests and implementation late April/early May, specifically for accounting of ATLAS VO jobs (a sketch of a usage record follows below)
- Preparation for resource centres in Finland and Norway has started
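SGAS tracks usage per job and per user identity, so the exported accounting information is essentially a list of usage records. A loosely hedged sketch in the spirit of the OGF Usage Record format that Grid accounting systems of this period converged on; element names are quoted from memory, and all identifiers and values are hypothetical:

    <JobUsageRecord xmlns="http://schema.ogf.org/urf/2003/09/urf">
      <RecordIdentity recordId="gsiftp://grid.example.org/jobs/12345"/>
      <UserIdentity>
        <GlobalUserName>/O=Grid/O=NorduGrid/CN=Jane Doe</GlobalUserName>
      </UserIdentity>
      <MachineName>grid.example.org</MachineName>
      <StartTime>2005-04-26T09:00:00Z</StartTime>
      <EndTime>2005-04-26T10:00:00Z</EndTime>
      <WallDuration>PT3600S</WallDuration>  <!-- one hour of wall time -->
    </JobUsageRecord>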
Slide 22: EGEE security effort (JRA3)
- 12 FTE from the Northern Federation: KTH (lead), FOM, UvA, UiB and UH-HIP
- The existence of a national Grid and the high security competence at PDC were key to accepting the co-ordination role
- Responsible for the overall EGEE security architecture, as a partner in the middleware design
- Responsible for EGEE security software development, in close collaboration with middleware development
Slide 23: BalticGrid
- Extends the European e-Infrastructure to the Baltic region
- Partners: 10 partners from five countries in the Baltic region, plus Switzerland (CERN)
- Budget: 4.4 MEuros over 30 months
- Coordinator: KTH PDC, Stockholm
- Resources: 17 resource centres from the start
- The major part of the work is to be performed by the Estonian, Latvian and Lithuanian partners
Slide 24: SweGrid future
- The long-term vision is to have all HPC resources (computers and storage) in Sweden available through SweGrid
- Accounting is, or will soon be, available (SGAS)
- Clusters will be connected to SweGrid: new loosely coupled clusters for throughput computing, as well as existing and new clusters with high-speed interconnect
- SweGrid should grow tenfold to provide a national throughput Grid that can be a resource in international projects
- Provide a platform for Swedish participation in new EU projects
Slide 25: The NorduGrid project
- Started in January 2001, funded by NorduNet-2
- Initial goal: to deploy DataGrid middleware to run the ATLAS Data Challenge
- NorduGrid essentials: built on Globus Toolkit 2, it replaces some Globus core services and introduces new ones: the Grid Manager, GridFTP, the user interface and broker, the information model (queried in the sketch below), and monitoring
- The middleware is named ARC
- Track record: used in the ATLAS DC tests in May 2002; contributed 30% of the total resources to ATLAS DC II
- Continuation: could be included in the framework of the "Nordic Data Grid Facility"; co-operation with EGEE/LCG
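ARC publishes its information model through an LDAP-based information system inherited from the Globus MDS line, so any LDAP client can query a cluster directly. A hedged sketch: the host name is hypothetical, and the port, base DN, and attribute names are the conventional ARC defaults quoted from memory:

    # Ask a cluster's information system for its name and total CPU count
    ldapsearch -x -H ldap://grid.example.org:2135 \
        -b 'mds-vo-name=local,o=grid' \
        '(objectClass=nordugrid-cluster)' \
        nordugrid-cluster-name nordugrid-cluster-totalcpus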
Slide 26: Resources running ARC
Currently available resources:
- 10 countries, 40+ sites, ~4000 CPUs, ~30 TB storage
- 4 dedicated test clusters (3-4 CPUs each)
- SweGrid
- A few university production-class facilities (20 to 60 CPUs)
- Three world-class clusters in Sweden and Denmark, listed in the Top500
- Other resources come and go: test set-ups in Canada and Japan, clients at CERN and in Russia, plus Australia and Estonia
- Anybody can join or leave
People:
- The "core" team has grown to 7 persons
- Local sysadmins are called upon when users need an upgrade
Slide 28: Reflections on NorduGrid
- A bottom-up project driven by an application-motivated group of talented people
- Middleware adaptation and development has followed a flexible and minimally invasive approach
- Nordic HPC centers have "connected" large general-purpose resources because it is good PR for the centers; as soon as NorduGrid usage of these resources increases, they will be disconnected. There is no such thing as free cycles!
- The motivation for resource allocations is missing: NorduGrid lacks an approved procedure for allocating resources to VOs and individual user groups based on scientific review of proposals. This is a major obstacle to the utilization of resources on Grids in general!
Slide 29: Challenges: the current HPC setup
[Diagram: four parallel national stacks, each with a funding agency funding its own HPC center, which serves its own national users, with no links between countries]
Slide 30: Grid management
[Diagram: a Grid management layer sits between the national funding agencies with their HPC centers and the users, who are organized in virtual organizations (VO 1-3). The layer handles MoUs and SLAs towards the centers, proposals and authorization for the VOs, accounting, middleware, and interfaces to other Grids]
Slide 31: eIRG
Policy framework:
- Authentication, authorization, accounting
- Resource allocations
- SLAs and MoUs
- Grid economies
There is strong Nordic interest and representation in the eIRG, and collaboration between the Nordic national Grids needs eIRG policy.
Slide 32: Nordic Data Grid Facility: vision
"To establish and operate a Nordic computing infrastructure providing seamless access to computers, storage and scientific instruments for researchers across the Nordic countries." (Taken from the proposal to NOS-N.)
Slide 33: Nordic Data Grid Facility: mission
- Operate a Nordic production Grid, building on the national production Grids
- Operate a core facility focusing on Nordic storage resources for collaborative projects
- Develop and enact the policy framework needed to create a Nordic research arena for computational science
- Co-ordinate and host Nordic-level development projects in high-performance and Grid computing
- Create a forum for high-performance computing and Grid users in the Nordic countries
- Be the interface to international large-scale projects for the Nordic high-performance computing and Grid community
Slide 34: Conclusions
- The Nordic countries are early adopters of Grid technology
- There are several national Grids, and they have enabled participation in EU infrastructure projects
- Nordic representation in GGF, eIRG, etc.
- Several advanced partners for future Grid-related EU projects are available in the Nordic region
- A Nordic Grid is currently being planned, with a Nordic policy framework and a Nordic interface to other projects