
© 2006 Open Grid Forum OGF HPC-Basic Profile Application Interoperability Demonstration GIN-CG

Overview
Application – Standards – Implementations – Demo – Future

Scientific Application

Plasma Charge Minimization
- Undergraduate project
- Total system energy minimization of point charges on the surface of a sphere
- Three different applications: pre-processing (generate input files), main processing (parallel distributed processing, sketched below), and post-processing (choose the optimal solution)
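The deck does not spell out the objective function, so as a hedged illustration here is a minimal Python sketch of what such a minimization computes, assuming unit point charges and a pairwise Coulomb 1/r potential (the classic Thomson problem); the multi-seed selection at the end mirrors the pre/main/post split in miniature. The Minem application itself is not shown in the deck, so every name below is illustrative.

    import numpy as np

    def coulomb_energy(points: np.ndarray) -> float:
        """Total pairwise 1/r energy of an (N, 3) array of unit charges."""
        diffs = points[:, None, :] - points[None, :, :]      # (N, N, 3) displacement vectors
        dists = np.linalg.norm(diffs, axis=-1)
        iu = np.triu_indices(len(points), k=1)               # each unordered pair once
        return float(np.sum(1.0 / dists[iu]))

    def random_sphere_points(n: int, seed: int) -> np.ndarray:
        """Pre-processing stand-in: one random starting configuration on the sphere."""
        rng = np.random.default_rng(seed)
        p = rng.normal(size=(n, 3))
        return p / np.linalg.norm(p, axis=1, keepdims=True)  # project onto unit sphere

    # "Main processing" would minimize each candidate (in the demo, one grid job
    # per configuration); "post-processing" keeps the lowest-energy result, as here.
    candidates = [random_sphere_points(12, seed) for seed in range(8)]
    best = min(candidates, key=coulomb_energy)
    print(f"best energy found for 12 charges: {coulomb_energy(best):.4f}")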

Participation
- DEISA – UNICORE; SuSE Linux, AMD 64-bit, 1 core
- NorduGrid – ARC; Debian Linux, i686, 16 cores
- UK NGS / OMII-UK – GridSAM; Scientific Linux 4.7, AMD 64-bit, 256 cores
- University of Virginia Campus Grid – Genesis II; Ubuntu Linux, i686, 8 cores
- Platform Computing – BES++ client

ARC Features
- Standards-compliant WS interfaces
- Modular architecture: core components for job execution (A-REX), information exchange (ISIS), data storage (Chelonia), and security
- Broad range of supported platforms
- Straightforward user interfaces
- Developer-friendly: supports services written in Java, Python, and C++
- Supports IPv6
Thanks to Peter Stefan

Job Execution Client and Server
The client:
- Modular library plus CLI tools
- Lets grid users access grid services operated by different middlewares: gLite CREAM2, UNICORE, pre-WS ARC
- Available on MS Windows, Mac OS X, Solaris, and Linux
The server:
- Implements a compute element (CE)
- Supports BES/JSDL/GLUE2 with ARC extensions
- Works with numerous LRMSes
Thanks to Peter Stefan

Genesis II – Standards-Based Grid Implementation
- Users FIRST! Design the system from the ground up with the overriding mantra that users come first: users don't want to know about grids
- Provide a secure, cohesive system, in production and available to users today!
- Provide an open-source reference implementation of the OGSA and OGSA-related specifications
- Use standards and proto-standards available from the OGF and OGSA, and feed implementation experience with those standards back into the OGF process
Genesis II: "Open Source, OGSA Implementation"
Thanks to Mark Morgan

(Most) Everything Is a File or Directory
- Files and directories can be accessed without knowing anything about grids or Web Services, thanks to FUSE/IFS, which map the grid into the local file system
- BES resources and queues are directories: "ls" to list the jobs, "cat" a job to see its state, "cp" a JSDL file into the directory to start the job; a shell script can therefore start jobs just by copying (see the sketch below)
- Genesis II containers are directories: "ls" to see the services and port types
- IDPs are files/directories
- Information services are directories: "cp" a query file to an IS creates the result
- RDBMSs will be directories
The user can access all of these services without dealing with Web Services!
Genesis II: "Open Source, OGSA Implementation"
Thanks to Mark Morgan
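As a hedged sketch of that usage pattern, the Python below starts a job by copying a JSDL document into a BES queue exposed through the FUSE mount and then polls the job's file for its state. The mount point, directory layout, and state strings are assumptions for illustration, not the product's actual conventions.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical path where the Genesis II FUSE mount exposes a BES queue.
    BES_DIR = Path("/mnt/genesis/bes-containers/my-queue")

    def submit(jsdl_file: str) -> Path:
        """'cp' a JSDL file into the BES directory -> the job starts."""
        return Path(shutil.copy(jsdl_file, BES_DIR))

    def job_states() -> dict:
        """'ls' the directory to list jobs; 'cat' (read) each one for its state."""
        return {job.name: job.read_text().strip() for job in BES_DIR.iterdir()}

    job = submit("minem.jsdl")
    while job_states().get(job.name) not in ("Finished", "Failed"):  # assumed state names
        time.sleep(10)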

GridSAM – Use and Characteristics
Currently deployed and used (among others):
- On the UK National Grid Service (NGS)
- At University College London and Imperial College London (numerous projects)
- By the Open Geospatial Consortium (OGC)
- In two deployments within the Chinese automotive industry
- In a Chinese drug-discovery project
- At Southampton University
Characteristics:
- Easily installed and configured
- Client installs on Windows, Mac OS X, and Linux; server installs on many popular Linux variants
- Supports the PBS, DRMAA, Sun Grid Engine, Condor, Globus, and LSF batch systems
- Standards-based: HPC Basic Profile v1.0, OGSA-BES v1.0, JSDL v1.0, HPC File Staging Profile (partial)
- Supports the FTP, SFTP, HTTP, HTTPS, GridFTP, and WebDAV data protocols
- User-focused development and open source…
Thanks to Steve Crouch

Open Community Development
GridSAM is open source, with open community development.
GridSAM SourceForge project:
- 99.03% activity, 1 release/month
- SVN source code repository
- Developer & discussion mailing lists
Contribute! sourceforge.net/projects/gridsam/
Thanks to Steve Crouch

Availability and Future Developments
Availability:
- SourceForge gridsam project
- With bundled Apache Tomcat/Axis/WSS4J (WS-Security): OMII-UK Campus Grid Toolkit (CGT) – automated client or server install; OMII-UK Development Kit – heavily assisted client or server installation
Future developments, for end users:
- Refactored documentation (with improved OGF standards coverage)
- Full support for the HPC File Staging Profile across the PBS, Condor & Fork DRMs
- Full support for JSDL resource selection across the PBS, Condor & Fork DRMs
- JSDL Parameter Sweep extension
- Support for SRB and iRODS
For resource owners:
- Packaging as a standalone, manually configurable web archive (WAR) file
- Job submission through multiple remote SSH accounts
Thanks to Steve Crouch

UNICORE Used In
- DEISA (European distributed supercomputing infrastructure)
- National German Supercomputing Center NIC
- Gauss Centre for Supercomputing (alliance of the three German HPC centers and official National Grid Initiative for Germany in the context of EGI)
- SKIF-Grid (Russian–Belarusian HPC infrastructure)
- PRACE (European petaflop HPC infrastructure) – starting up
Traditionally takes up major requirements from HPC users (e.g. MPI, OpenMP), HPC user support teams, and HPC operations teams… and via the SourceForge platform.
Grid driving High Performance Computing (HPC)
Thanks to Morris Riedel

Guiding Principles and Implementation Strategies
- Open source under the BSD license, with software hosted on SourceForge
- Standards-based: OGSA-conformant, WS-RF 1.2 compliant; adopting and driving various Open Grid Forum (OGF) standards
- Open, extensible Service-Oriented Architecture (SOA)
- Interoperable with other Grid technologies
- Seamless, secure, and intuitive, following a vertical end-to-end approach
- Mature security: X.509, proxy, and VO support
- Workflow support tightly integrated, while remaining extensible for different workflow languages and engines for domain-specific usage
- Application integration mechanisms at the client, services, and resource levels
- Variety of clients: graphical, command-line, API, portal, etc.
- Quick and simple installation and configuration
- Support for many operating systems (Windows, MacOS, Linux, UNIX) and batch systems (LoadLeveler, Torque, SLURM, LSF, OpenCCS)
- Implemented in Java to achieve platform independence
Thanks to Morris Riedel

UNICORE Architecture (diagram; only its labels survive transcription)
The slide shows the layered UNICORE architecture: scientific clients and applications (UCC command-line client, URC Eclipse-based rich client, portals such as GridSphere, and the HiLA programming API) talk through a per-site Gateway (authentication; X.509, proxies, SOAP, WS-RF, WS-I, JSDL) to Grid services hosted in a WS-RF hosting environment. Each site runs the XNJS job-incarnation engine with its IDB, the UNICORE Atomic Services and OGSA-* interfaces (OGSA-ByteIO, OGSA-BES, JSDL, HPC-BP, OGSA-RUS, UR), and a Target System Interface connecting via DRMAA to the local RMS (e.g. Torque, LoadLeveler, LSF). Authorization uses the XUUDB and XACML entities with X.509/proxy/SAML credentials and the UVOS VO service; central services include the Service Registry, Workflow Engine, Service Orchestrator, and CIS Info Service (OGSA-RUS, UR, GLUE 2.0); data transfer to external storage and the job USpace uses e.g. GridFTP.
Thanks to Morris Riedel

Challenges
- Time: 4 days to do the core work
- Learning curve: middlewares, demo requirements; this impacted many decisions – compromise! Select the best time/benefit approach
- Good team coordination and high communication were critical
- Exploitation of standards support for each middleware is probably not complete!
- Compatibility of the BES++ client and middlewares: the client has some limitations, e.g. no delegation support (ARC), no support for full EPRs (Genesis II)
- Middleware support for standards and data differs… how to support it? The approach was a hybrid of the 'highest common denominator' of support, plus customisation: JSDL – generate a middleware-oriented rendering based on a template; data – use what we could get working for each middleware
- Support for full EPRs was the blocker for Genesis II

Standards/Data Protocols/Security Supported in Demo (at the moment!)
Standards:
- HPC Basic Profile v1.0
- OGSA-BES (Basic Execution Service) v1.0
- JSDL (Job Submission Description Language) v1.0
- HPC Profile Application Extension v1.0 – ARC, GridSAM
- HPC File Staging Profile – UNICORE only
Data protocols:
- UNICORE, ARC – FTP
- GridSAM – GridFTP
Security:
- Direct middleware-to-certificate CA trust (just import the CAs)
A wire-level sketch of a BES submission follows below.
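To make the standards list concrete, here is a hedged sketch of what an OGSA-BES v1.0 submission looks like on the wire: a SOAP CreateActivity request whose ActivityDocument embeds a (here nearly empty) JSDL JobDefinition. The endpoint URL is hypothetical, the envelope follows my reading of the BES specification rather than the demo's configuration, and the X.509/CA transport security noted above is omitted; the demo itself used the BES++ client rather than hand-rolled SOAP.

    import urllib.request

    BES_FACTORY = "https://example.org:8443/bes/factory"  # hypothetical endpoint

    ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:bes="http://schemas.ggf.org/bes/2006/08/bes-factory"
                   xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">
      <soap:Body>
        <bes:CreateActivity>
          <bes:ActivityDocument>
            <jsdl:JobDefinition>
              <jsdl:JobDescription>
                <jsdl:Application/>
              </jsdl:JobDescription>
            </jsdl:JobDefinition>
          </bes:ActivityDocument>
        </bes:CreateActivity>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        BES_FACTORY,
        data=ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    # The response carries a CreateActivityResponse with the new activity's EPR,
    # which later GetActivityStatuses calls poll until the job completes.
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))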

Compute-Related Standards (diagram)
- Job management: OGSA-BES (GFD.108)
- HPC domain-specific profile: HPC Basic Profile (GFD.114)
- Architecture: OGSA EMS Scenarios (GFD.106)
- Use cases: Grid Scheduling Use Cases (GFD.64)
- Education: ISV Primer (GFD.141)
- Agreement: WS-Agreement (GFD.107)
- Programming interfaces: DRMAA (GFD.22/133), SAGA (GFD.90)
- Accounting: Usage Record (GFD.98)
- Information: GLUE Schema 2.0 (GFD.147)
- File transfer: HPC File Staging (GFD.135)
- Job description: JSDL (GFD.56/136)
- Application description: HPC Application (GFD.111), SPMD Application (GFD.115)
- Job parameterization: Parameter Sweep (GFD.149)
(The slide's arrows relate these specifications via extends/uses/produces/describes/supports/profiles.)


JSDL Template (the XML markup was lost in transcription)
The template stages four files – input.txt, output.txt, stdout.txt, and stderr.txt – each with CreationFlag "overwrite" and DeleteOnTermination "false"; a reconstruction sketch follows below.
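Since the slide's XML did not survive, here is a hedged Python sketch of step 2 of the workflow (the "middleware-oriented rendering based on template" from the Challenges slide): filling a reduced JSDL data-staging template once per middleware. The single DataStaging element mirrors the staging named above; the data URLs are hypothetical, and the demo's real renderings differed per middleware in more than the data endpoint.

    from string import Template

    JSDL_TEMPLATE = Template("""<jsdl:JobDefinition
        xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">
      <jsdl:JobDescription>
        <jsdl:Application/>
        <jsdl:DataStaging>
          <jsdl:FileName>input.txt</jsdl:FileName>
          <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
          <jsdl:DeleteOnTermination>false</jsdl:DeleteOnTermination>
          <jsdl:Source><jsdl:URI>$data_url/input.txt</jsdl:URI></jsdl:Source>
        </jsdl:DataStaging>
      </jsdl:JobDescription>
    </jsdl:JobDefinition>""")

    # One rendering per middleware; per the protocols slide, only the data
    # endpoint scheme differs here (FTP for UNICORE/ARC, GridFTP for GridSAM).
    endpoints = {"unicore": "ftp://data.example.org/minem",
                 "arc": "ftp://data.example.org/minem",
                 "gridsam": "gsiftp://data.example.org/minem"}
    for name, url in endpoints.items():
        with open(f"minem-{name}.jsdl", "w") as fh:
            fh.write(JSDL_TEMPLATE.substitute(data_url=url))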

How It Fits Together… (diagram)
Components: the BES++ client, the minem-interop.pl driver, a MyProxy security service, an FTP client, and per-middleware job and data services (UNICORE and ARC via FTP, GridSAM via GridFTP) running the Minem application.
1. Create Minem input files
2. Generate JSDLs from the template
3. Upload input files
4. Submit JSDLs across middlewares
5. Monitor jobs until completion
6. Download output files
7. Select the best result
8. Generate/upload an image to the web server

Future
- Support full EPRs for Genesis II
- Include CREAM-BES within the interop demo
- Harmonise standards support across all middlewares: the HPC Basic Profile Application Extension (currently supported by GridSAM, ARC) and the HPC File Staging Profile (currently supported by UNICORE). This would need middleware development!
- Support for a single data protocol across all middlewares? The HPC File Staging Profile only mandates support for one protocol from ftp, http, and scp (it does not mandate a specific one, or all three). There are issues with each; e.g. GridFTP would need support in Genesis II, production-level support in UNICORE, and delegation support in the BES++ client for ARC. Again, middleware development required!
- We already know 'data' is the next big Grid challenge – protocol support is a key part of it

Thanks
Demo development: Steve Crouch – OMII-UK
Core interop assistance:
- ARC: Peter Stefan – NorduGrid
- Genesis II: Mark Morgan – UVa
- GridSAM: Justin Bradley – OMII-UK; Richard Boardman – OMII-UK
- UNICORE: Shahbaz Memon – Juelich
- BES++: Chris Smith – Platform
Resource provision:
- Andrew Grimshaw – UVa
- Balazs Konya – NorduGrid
- Matteo Turilli – UK NGS
- Morris Riedel – Juelich