Finnish Material Sciences Grid (M-grid) Arto Teräs Nordic SGI Meeting October 28, 2004

Contents
● M-grid overview
● Partners
● Hardware and software
● System administration
● Grids and NorduGrid
● M-grid vs. Swegrid
● New challenges, open questions

M-grid Overview
● Joint project between seven Finnish universities, the Helsinki Institute of Physics and CSC
● Jointly funded by the Academy of Finland and the participating universities
● First large initiative to put Grid middleware into production use in Finland
● Coordinated by CSC; users and local site administrators come from the physics and chemistry labs of the partner universities
● Based on Linux clusters, targeted at serial and pleasantly parallel applications

Partners
● Laboratory of Physics at Helsinki University of Technology
● Department of Physical Sciences and Department of Chemistry at the University of Helsinki
● Helsinki Institute of Physics (HIP)
● Departments of Physics and Chemistry at the University of Jyväskylä
● Department of Electrical Engineering at Lappeenranta University of Technology
● Department of Physics and the Institute of Advanced Computing at Tampere University of Technology
● Department of Physics at the University of Turku
● Departments of Biochemistry, Chemistry and Physical Sciences at the University of Oulu

Hardware Acquisition
● Starting points: PC cluster, Gigabit Ethernet, Linux OS
● Joint acquisition project
  – Total value approximately euros (incl. VAT)
  – Each group needed to come up with 25% of the money; 75% was funded by the Academy of Finland
  – RFP April-June 2004
  – Vendor chosen July 2004 (Hewlett-Packard)
  – Hardware delivered September 2004
● Oulu decided not to participate in the joint acquisition
  – Preferred the same hardware as in their existing cluster

Hardware
● Hardware of each cluster (7 sites)
  – Frontend: HP ProLiant DL585 (2 x Opteron GHz, 2-8 GB RAM, upgradable to 4 CPUs and 64 GB RAM)
  – Computing nodes: HP ProLiant DL145 (2 x Opteron GHz, 2-8 GB RAM, GB local disk)
  – Admin node: HP ProLiant DL145 (1 x Opteron 1.6 GHz)
  – Storage: SCSI RAID-5, typically TB per site; the cluster frontend is used as NFS server for the nodes
  – Separate Gigabit Ethernet networks for communication and NFS
  – Remote administration network and management boards
● University of Oulu (1 site): AMD Athlon MP based solution

CPU Distribution
● Size of sites varies greatly
● Number of CPUs: 410 (computing nodes only, 443 in total)
● Total theoretical computing power: 1.5 Tflops (for comparison, CSC's IBM SC: 2.2 Tflops)
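As a rough sanity check of the 1.5 Tflops figure, a minimal Python sketch assuming 1.8 GHz Opterons and 2 double-precision floating-point operations per clock cycle per CPU (the exact clock speeds are not quoted in this transcript):

# Back-of-the-envelope check of the quoted theoretical peak, not an official figure.
# Assumed: 1.8 GHz Opteron CPUs, 2 double-precision flops per cycle per CPU.
cpus = 410                 # computing-node CPUs quoted on the slide
clock_hz = 1.8e9           # assumed clock frequency
flops_per_cycle = 2        # assumed for a single-core Opteron
peak_tflops = cpus * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.2f} Tflops")  # prints about 1.48, i.e. ~1.5 Tflops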

Software
● Operating system: Rocks Cluster Distribution
  – Linux distribution designed for clusters, based on Red Hat Enterprise Linux 3.0 (but not an official Red Hat product)
  – Main developer: San Diego Supercomputer Center (USA)
  – Includes a set of open source cluster management tools
● Local batch queue system: Sun Grid Engine
● Grid middleware: NorduGrid ARC
  – One of the few middleware packages that have already been used in production with real users and applications
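As a minimal sketch of how a serial job reaches the local batch queue system, the snippet below writes a Sun Grid Engine job script and submits it with qsub. The -N, -cwd and -l h_rt directives are standard SGE options; the program name, input file and one-hour runtime limit are illustrative assumptions, not M-grid defaults.

# Minimal sketch: write a Sun Grid Engine job script and submit it with qsub.
# The program, input file and runtime limit below are illustrative assumptions.
import subprocess

job_script = """#!/bin/sh
#$ -N test_job
#$ -cwd
#$ -l h_rt=01:00:00
./my_simulation input.dat
"""

with open("test_job.sh", "w") as f:
    f.write(job_script)

subprocess.run(["qsub", "test_job.sh"], check=True)  # requires SGE on the cluster frontend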

One M-grid cluster

System Administration
● Tasks divided between CSC and the site administrators
● CSC
  – Maintains the OS, LRMS, Grid middleware and certain libraries centrally for all sites except Oulu
  – Provides tools for system monitoring, integrity checking, etc.
  – Separate mini cluster for testing new software releases
● Site administrators
  – Local applications and libraries, system monitoring, user support
  – Admins typically work for the department or lab, not the IT department
● Regular meetings of administrators, support network
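The monitoring and integrity-checking tools mentioned above are not named in the slides, so the following is only an illustrative Python sketch of the underlying idea (compare checksums of selected system files against a stored baseline), not the actual CSC tooling; the watched paths and baseline file name are placeholders.

# Illustrative sketch of a file-integrity check, not the actual CSC tool.
# Watched paths and the baseline file name are placeholders.
import hashlib
import json
import os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # example paths only
BASELINE = "baseline.json"                         # hypothetical baseline file

def checksum(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

current = {p: checksum(p) for p in WATCHED if os.path.exists(p)}

if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(current, f, indent=2)  # first run: record the baseline
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"WARNING: {path} differs from the recorded baseline")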

Role of Grid Middleware

Connecting Through Grids - What Changes?
● Users need personal certificates in addition to their local user accounts
● New interface(s) to learn, but the same interface can be used across heterogeneous platforms
  – Library dependency problems are not magically solved by the Grid => one reason to maintain a certain uniformity within M-grid
● Both the set of users and the connected resources vary dynamically
● Grids go across multiple administrative domains!
● Collaboration and social networking, not only technology
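To make the certificate and new-interface points concrete, here is a minimal sketch of a grid-side submission with the NorduGrid ARC command-line clients of that period: grid-proxy-init creates a short-lived proxy from the user's personal certificate, and ngsub submits a job described in xRSL. The executable, file names and cputime value are illustrative assumptions, not an M-grid recipe.

# Sketch of a grid submission with the classic NorduGrid ARC clients.
# grid-proxy-init and ngsub are the period's command-line tools; the job
# description below (executable, files, cputime) is an illustrative assumption.
import subprocess

# 1) Create a short-lived proxy from the personal grid certificate
#    (prompts interactively for the certificate passphrase).
subprocess.run(["grid-proxy-init"], check=True)

# 2) Describe the job in xRSL and submit it; the ARC client picks a suitable
#    cluster from the information system.
xrsl = (
    '&(executable="my_simulation")'
    '(arguments="input.dat")'
    '(inputFiles=("input.dat" ""))'
    '(stdout="job.out")'
    '(cputime="60")'
)
subprocess.run(["ngsub", xrsl], check=True)  # job status and results via ngstat / ngget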

NorduGrid Collaboration
● Started in 2001, originally for connecting resources in Scandinavia and Finland
● Later joined by groups in Estonia, Germany, Slovakia, Slovenia and Australia
  – Currently about 5000 CPUs in total (mostly Linux clusters)
● NorduGrid ARC middleware
● Open for anyone to participate

Facts on NorduGrid
● Production Grid available 24/7 since July 2002
  – Serves researchers and consists of academic resources
  – Built from the "bottom up", connecting already existing resources to the Grid
  – Real users and applications
● Resources do NOT need to be dedicated to the Grid
  – Also the case in M-grid: users have both a local user account on their own cluster and a Grid identity for the whole M-grid (and beyond...)
● Possibility to take just the middleware and run your own private Grid

M-grid Compared to Swegrid
● Common characteristics
  – Approximately the same scale (computing power, number of sites)
  – NorduGrid ARC middleware
● Differences
  – System administration
    ● M-grid: partly centralized, tasks shared between CSC and the local sites
    ● Swegrid: handled by the individual sites, with different Linux distributions in use
  – Resource dedication
    ● M-grid: users can submit jobs both locally and through the Grid interface
    ● Swegrid: local job submission not allowed or deprecated

New Challenges for CSC
● Geographically distributed system
  – Systems can be administered fully remotely
● Collaborative system administration
  – Needs careful planning; best practices are still being formed
  – Security and trust issues need special attention
● Linux as a high performance computing environment is new to CSC
  – The small cluster which CSC acquired as part of the project will be extended later and opened to all CSC customers
● Grid technology
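Since the clusters can be administered fully remotely, routine work often amounts to running the same command on every frontend. A minimal sketch over SSH, with placeholder hostnames (the real frontend names are not listed in this presentation):

# Minimal sketch: run one command on every cluster frontend over SSH.
# Hostnames are placeholders, not the real M-grid frontends.
import subprocess

FRONTENDS = ["cluster1.example.fi", "cluster2.example.fi"]

for host in FRONTENDS:
    print(f"=== {host} ===")
    subprocess.run(["ssh", host, "uptime"], check=False)  # e.g. a quick health check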

Current Status of M-grid
● Hardware delivered, operating systems installed, acceptance tests in progress
● User environment (compilers, MPI libraries) and system monitoring usable but still need work
● NorduGrid middleware not yet installed
  – A few problems related to the 64-bit environment and Sun Grid Engine as the local batch queue system, mostly solved
  – To be deployed Dec 2004 - Jan 2005 (estimate)

Future
● Development of the Grid environment continues
  – Still large tasks ahead in security, user administration, accounting, user training and other areas
● Possibly opening our customized Rocks package to others (without central administration by CSC), and contributing mathematical library packages to mainline Rocks
  – All software is freely distributable, with the exception of the commercial Fortran compiler (Portland Group)
● Open questions:
  – Will the scientists really start using the Grid middleware?
  – Should HPC resources in Finland be distributed or centralized?

M-grid Contact People
● Acquisition project: Ville Savolainen
● System administration project: Arto Teräs
● Coordinator: Kai Nordlund (University of Helsinki)
● Web pages:

Thank you! Questions?