UNICORE – A European Grid Technology
Achim Streit, Jülich Supercomputing Centre (JSC)

2 History Lesson

- UNICORE: UNiform Interface to COmputing REsources – seamless, secure, and intuitive
- Initial development started in two German projects funded by the German Ministry of Education and Research (BMBF)
  - 08/1997 – 12/1999: UNICORE project
    Results: well-defined security architecture with X.509 certificates, intuitive GUI, central job supervisor based on Codine from Genias
  - 01/2000 – 12/2002: UNICORE Plus project
    Results: implementation enhancements (e.g. replacement of Codine by a custom NJS), extended job control (workflows), application-specific interfaces (plugins)
- Continuous development since 2002 in several EU projects
- Open source community development since summer 2004

3 Projects

- More than a decade of German and European research & development and infrastructure projects: UNICORE, UNICORE Plus, EUROGRID, GRIP, GRIDSTART, OpenMolGRID, UniGrids, VIOLA, DEISA, NextGRID, CoreGRID, D-Grid IP, EGEE-II, OMII-Europe, A-WARE, Chemomentum, eDEISA, PHOSPHORUS, D-Grid IP 2, SmartLM, PRACE, D-MON, DEISA2, ETICS2
- And many others

4 UNICORE – Grid driving HPC

- Used in
  - DEISA (Distributed European Infrastructure for Supercomputing Applications)
  - the National German Supercomputing Center NIC
  - Gauss Center for Supercomputing (alliance of the three German national HPC centers)
  - PRACE (European PetaFlop HPC infrastructure) – starting up
- But also in non-HPC-focused infrastructures (e.g. D-Grid)
- Taking up major requirements from, e.g.,
  - HPC users
  - HPC user support teams
  - HPC operations teams

5

- Open source (BSD license)
- Open developer community on SourceForge – contributing your own developments is easily possible
- Design principles
  - Standards-based: OGSA-conformant, WS-RF compliant
  - Open, extensible, interoperable
  - End-to-end, seamless, secure, and intuitive
- Strong security (X.509, proxy and VO support)
- Excellent workflow and application support
- Easy-to-use clients (graphical, command-line, portal)
- Easy installation and configuration
- Support for many operating and batch systems
- 100% Java 5

6 Architecture

[Architecture diagram] The diagram shows two sites and the standard interfaces between the layers; the recoverable labels, from top to bottom:
- Clients: GPE application client, command-line client, Eclipse-based client, portal client (e.g. GridSphere) – scientific clients and applications talking WS-I/SOAP, with job descriptions in JSDL/HPC-P, secured via X.509 and SAML
- Gateway: authentication; a shared Service Registry for service discovery
- UNICORE WS-RF hosting environment: Grid services hosting the UNICORE Atomic Services plus emerging standard interfaces (OGSA-BES, ByteIO, RUS, HPC-P, JSDL, UR), with an XACML entity for access control and XUUDB / SAML-VOMS for user attributes
- XNJS with IDB: job incarnation and authorization
- Target System Interface, using DRMAA towards the local RMS (e.g. Torque, LL, LSF, etc.): parallel scientific jobs of multiple end users on the target systems
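To make the layering easier to follow, here is a deliberately simplified, hypothetical Java sketch of how a job request travels through these components (client → Gateway → Atomic Services → XNJS → TSI → local batch system). All class names and methods below are invented for illustration only; they are not actual UNICORE APIs.

```java
// Hypothetical sketch of the request flow shown in the architecture diagram.
// None of these types are real UNICORE classes; they only illustrate responsibilities.
public class JobFlowSketch {

    /** Site entry point: authenticates the caller's X.509 identity, then forwards. */
    static class Gateway {
        private final AtomicServices services;
        Gateway(AtomicServices services) { this.services = services; }
        String submit(String clientDn, String jsdl) {
            System.out.println("Gateway: authenticated " + clientDn);
            return services.createJob(clientDn, jsdl);
        }
    }

    /** Atomic Services in the WS-RF hosting environment: authorization + job resources. */
    static class AtomicServices {
        private final Xnjs xnjs;
        AtomicServices(Xnjs xnjs) { this.xnjs = xnjs; }
        String createJob(String clientDn, String jsdl) {
            System.out.println("Atomic Services: XACML/XUUDB-style authorization for " + clientDn);
            return xnjs.incarnateAndRun(jsdl);
        }
    }

    /** XNJS: turns the abstract JSDL job into a concrete job (IDB rules), then calls the TSI. */
    static class Xnjs {
        private final TargetSystemInterface tsi;
        Xnjs(TargetSystemInterface tsi) { this.tsi = tsi; }
        String incarnateAndRun(String jsdl) {
            String script = "#!/bin/sh\n# generated from JSDL via site-specific IDB rules\n" + jsdl;
            return tsi.enqueue(script);
        }
    }

    /** TSI: hands the incarnated job to the local RMS (e.g. Torque, LoadLeveler, LSF). */
    static class TargetSystemInterface {
        String enqueue(String script) {
            System.out.println("TSI: submitting generated script to the local batch system");
            return "job-0001"; // dummy batch system job identifier
        }
    }

    public static void main(String[] args) {
        Gateway gateway = new Gateway(new AtomicServices(new Xnjs(new TargetSystemInterface())));
        String jobId = gateway.submit("CN=Some User,O=ExampleOrg", "<jsdl:JobDefinition/>");
        System.out.println("Client received job resource: " + jobId);
    }
}
```

Running it only prints the stations a request passes through; the point is the separation of concerns the diagram conveys: authentication at the Gateway, authorization and incarnation in the hosting environment/XNJS, and batch submission behind the TSI.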

7 Standards in UNICORE

- Security
  - Full X.509 certificates as the baseline, XACML-based access control
  - Support for SAML-based VOMS and X.509 proxies in development
- Information system, monitoring, accounting
  - GLUE 2.0 information service in development (strong interaction with the GLUE WG)
  - OGSA-RUS for accounting in development (incl. UR for storing usage records)
- Job management
  - OGSA-BES, HPC-P: creation, monitoring and control of jobs
  - Job definition compliant with JSDL (+ JSDL HPC extensions)
  - DRMAA communication to the local resource manager for job scheduling (see the sketch after this list)
- Data management
  - Fully OGSA-ByteIO compliant for site-to-site transfers
- A Web services (WS-RF 1.2, SOAP, WS-I) stack!
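As a concrete illustration of the DRMAA item above, the following is a minimal sketch using the standard DRMAA 1.0 Java binding (org.ggf.drmaa) to hand a job to a local resource manager. It shows what the standardized interface looks like, not UNICORE's internal code; the executable /bin/hostname is an arbitrary example, and in a real deployment the job template would be filled from the incarnated JSDL job.

```java
import org.ggf.drmaa.DrmaaException;
import org.ggf.drmaa.JobInfo;
import org.ggf.drmaa.JobTemplate;
import org.ggf.drmaa.Session;
import org.ggf.drmaa.SessionFactory;

public class DrmaaSubmitSketch {
    public static void main(String[] args) throws DrmaaException {
        // Attach to the locally installed DRM system (Torque, LSF, Grid Engine, ...)
        Session session = SessionFactory.getFactory().getSession();
        session.init(null); // null = default contact string

        // Describe the job; a real incarnation would fill this from the JSDL document
        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("/bin/hostname"); // arbitrary example executable

        // Submit the job and block until the scheduler reports completion
        String jobId = session.runJob(jt);
        JobInfo info = session.wait(jobId, Session.TIMEOUT_WAIT_FOREVER);
        if (info.hasExited()) {
            System.out.println("Job " + jobId + " exited with status " + info.getExitStatus());
        }

        // Release DRMAA resources
        session.deleteJobTemplate(jt);
        session.exit();
    }
}
```

Compiling and running this requires a DRMAA implementation library for the local batch system on the classpath; the code itself is scheduler-agnostic, which is exactly the portability the standard is meant to provide.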

8 Architecture: Focus on Workflow

[Architecture diagram] Recoverable labels: on top of the usual per-site stack (Gateway, UNICORE Atomic Services with OGSA-* interfaces, XNJS + TSI, local RMS such as Torque, LL, LSF) the workflow system adds:
- UNICORE Workflow Engine: workflow execution
- UNICORE Service Orchestrator: brokering and job management, supported by a Resource Information Service
- UNICORE Tracing Service and a shared Service Registry
Clients (command-line client, Eclipse-based client, portal client, e.g. GridSphere) authenticate at the Gateway; job execution and data storage take place at the sites, which run the parallel scientific jobs of multiple end users.

9 Status

- 6.0 released August 10, 2007
  - Web services / WS-RF core
  - Basic services (registry, jobs, files)
  - XNJS job execution management engine
  - Graphical GPE Application Client
  - Flexible security framework using X.509, SAML, XACML
  - Standards: WS-RF 1.2, JSDL 1.0, OGSA ByteIO
  - Extensible command-line client and scripting tools
- Update released December 23, 2007
  - Fast HTTPS-based file transfer, bug fixes
- 6.1 released March 20, 2008
  - Enhanced workflow support
  - Rich client based on Eclipse
  - Interoperability components

10 Using UNICORE

- GPE Application Client
- UCC command-line client
- Rich Client for Workflows (based on Eclipse)
- Programming API soon to be released

11 Rich Client based on Eclipse

12 GPE Application Client

13 UCC – Commandline Client

14 Accessing UCC through emacs

15 Seamless installation of server components

- A tar.gz-based installer is also available

16 … and the UNICORE services even run under Windows XP

17 UNICORE in use – some examples

18 Usage in D-Grid

- Core D-Grid sites committing parts of their existing resources to D-Grid
  - Approx. 700 CPUs
  - Approx. 1 PByte of storage
  - UNICORE is installed and used
- Additional sites received extra money from the BMBF for buying compute clusters and data storage
  - Approx. … CPUs
  - Approx. 2 PByte of storage
  - UNICORE (as well as Globus and gLite) is installed as soon as the systems are in place

[Map of D-Grid sites, including LRZ and DLR-DFD]

19 DEISA – Distributed European Infrastructure for Supercomputing Applications

- Consortium of leading national HPC centers in Europe: IDRIS – CNRS (Paris, France), FZJ (Jülich, Germany), RZG (Garching, Germany), CINECA (Bologna, Italy), EPCC (Edinburgh, UK), CSC (Helsinki, Finland), SARA (Amsterdam, NL), HLRS (Stuttgart, Germany), BSC (Barcelona, Spain), LRZ (Munich, Germany), ECMWF (Reading, UK)
- Deploys and operates a persistent, production-quality, distributed, heterogeneous HPC environment
- UNICORE as Grid middleware, on top of DEISA's core services:
  - Dedicated network
  - Shared file system
  - Common production environment at all sites
- Used e.g. for workflow applications

20 Interoperability and Usability of Grid Infrastructures (OMII-Europe)

- Provide key software components for building e-infrastructures
- Initial focus on providing common interfaces and integration of major Grid software infrastructures
  - Components: OGSA-DAI, VOMS, GridSphere, OGSA-BES, OGSA-RUS
  - Platforms: UNICORE, gLite, Globus Toolkit, CROWN
- Infrastructure integration (e.g. secure job submission)

21 Chemomentum – Grid Services based Environment to enable Innovative Research

- Provides an integrated Grid solution for workflow-centric, complex applications with a focus on data, semantics and knowledge
- Provides decision-support services for risk assessment, toxicity prediction, and drug design
- End-user focus: ease of use, domain-specific tools, a “hidden Grid”
- Based on UNICORE 6

22 Usage in the National German HPC center NIC

- About 450 users in 200 research projects; ¼ of them use UNICORE
- Access via UNICORE to
  - IBM p690 eSeries cluster (1312 CPUs, 8.9 TFlops)
  - JUBL (16384 CPUs, 45.8 TFlops)
  - SoftComp cluster (264 CPUs, 1 TFlops)
  - JUGGLE (176 cores, 845 GFlops)
  - Cray XD1 (120 CPUs + FPGAs, 528 GFlops)

23 Join the developer community

Software, source code, documentation, tutorials, mailing lists, community links, and more: