OGSA Data Architecture Scenarios


OGSA Data Architecture Scenarios Dave Berry & Stephen Davey GGF18, 13th September 2006

OGF IPR Policies Apply

"I acknowledge that participation in this meeting is subject to the OGF Intellectual Property Policy."

Intellectual Property Notices Note Well: All statements related to the activities of the OGF and addressed to the OGF are subject to all provisions of Appendix B of GFD-C.1, which grants to the OGF and its participants certain licenses and rights in such statements. Such statements include verbal statements in OGF meetings, as well as written and electronic communications made at any time or place, which are addressed to:
- the OGF plenary session,
- any OGF working group or portion thereof,
- the OGF Board of Directors, the GFSG, or any member thereof on behalf of the OGF,
- the ADCOM, or any member thereof on behalf of the ADCOM,
- any OGF mailing list, including any group list, or any other list functioning under OGF auspices,
- the OGF Editor or the document authoring and review process.

Statements made outside of an OGF meeting, mailing list or other function, that are clearly not intended to be input to an OGF activity, group or function, are not subject to these provisions.

Excerpt from Appendix B of GFD-C.1: "Where the OGF knows of rights, or claimed rights, the OGF secretariat shall attempt to obtain from the claimant of such rights, a written assurance that upon approval by the GFSG of the relevant OGF document(s), any party will be able to obtain the right to implement, use and distribute the technology or works when implementing, using or distributing technology based upon the specific specification(s) under openly specified, reasonable, non-discriminatory terms. The working group or research group proposing the use of the technology with respect to which the proprietary rights are claimed may assist the OGF secretariat in this effort. The results of this procedure shall not affect advancement of document, except that the GFSG may defer approval where a delay may facilitate the obtaining of such assurances. The results will, however, be recorded by the OGF Secretariat, and made available. The GFSG may also direct that a summary of the results be included in any GFD published containing the specification."

OGF Intellectual Property Policies are adapted from the IETF Intellectual Property Policies that support the Internet Standards Process.

Contents
- Overview
- Five sample scenarios:
  - Data Pipelining
  - Data Storage
  - Data Replication
  - Data Staging (joint OGSA Data + EMS data staging scenario)
  - Personal Data Service

Two Informational Documents
- OGSA Data Architecture (70+ pages): describes services and their interfaces.
- OGSA Data Scenarios (50+ pages): describes how the services can be composed to address particular scenarios.

Scenarios document
Example scenarios of a generic nature. The document illustrates how the services and interfaces described in the OGSA Data Architecture document can be put together in a selection of typical data scenarios. It is not a use case document generating requirements.

Current Scope
- Files and databases (& storage); not streams, sessions, …
- Services and interfaces: storage, access, transfer; replication, caching, federation, metadata catalogues.
- Cross-cutting themes: security, policies, …
- Part of the bigger OGSA picture, e.g. naming, workflow, transactions, scheduling, provisioning, …

Progress since GGF16
- More scenarios, e.g. Provenance, Grid File System.
- Focus on interfaces.
- F2F meeting at the end of August.
- More integration, particularly between the scenarios and architecture documents; also raising some issues from individual chapters to cross-cutting concerns.

Scenarios Completed
- Data Storage: store file data in a Grid Storage Service and retrieve it later.
- Data Replication: maintain copies of data at different locations (for availability or performance).
- Data Staging: move data in preparation for performing operations on that data.
- Data Pipelining: connect the output of one service to the input of another.

In Progress
- Data Integration: bringing the data that you require together from disparate sources.
- Personal Data Service: organising an individual's data to allow them access to it from many different locations.
- Data Discovery: discover data; register data/metadata.
- Data Provenance: the provenance of a piece of data is the process that led to that piece of data; the history of ownership of an object.
- Grid File System: provide a virtual file system in a Grid environment.

Data Pipelining

[Diagram: Customer 1 submits a job to the Rendering Service; completed animations are stored via a Data Access Service and moved by a Data Transfer Service to the Visualisation Service, which returns results to Customer 2 (third-party delivery).]

In this scenario, results may be streamed to the customers as soon as they are completed, or they could be sent as a whole in one batch on final completion of the rendering job. This gives examples of third-party delivery and data streaming.

Example interfaces between the consumer/client, the services and the data storage elements, with references to the relevant sections of the OGSA Data Architecture document [OGSA Data Arch]. The interfaces in the different steps of the scenario are as follows:
1. Customer 1 (Designer) submits a rendering job to the Rendering Service – section 3.7 "Reservation and scheduling" and Execution Management Services (EMS) in OGSA. The details of the job would be defined in, say, JSDL [JSDL].
2. The completed animation is stored to a common storage device – section 3.7. It is assumed that the details of how and where to store the results would be controlled by the Execution Management Services.
3. The Rendering Service transfers the completed animations from the Data Service to the Visualisation Service using the Data Transfer Service – section 6 "Data Transfer".
4. The Visualisation Service displays the animations to the clients (Designer & Reviewer/Customer) in an agreed format – most likely an application-specific interface.
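The flow above can be sketched in a few lines of code. This is an illustrative mock-up only: the class and method names (`RenderingService`, `DataTransferService`, `put`/`get`/`transfer`) are invented for this sketch and are not interfaces defined by the OGSA Data Architecture.

```python
# Hypothetical sketch of the data-pipelining scenario; all names here
# are illustrative, not part of the OGSA Data Architecture spec.

class DataAccessService:
    """Holds completed animations (step 2: store results)."""
    def __init__(self):
        self.store = {}

    def put(self, name, data):
        self.store[name] = data

    def get(self, name):
        return self.store[name]

class DataTransferService:
    """Third-party transfer between two data services (step 3)."""
    def transfer(self, source, sink, name):
        sink.put(name, source.get(name))

class RenderingService:
    """Accepts a JSDL-style job and stores frames as they complete (steps 1-2)."""
    def __init__(self, results):
        self.results = results

    def submit(self, job):
        for frame in job["frames"]:
            self.results.put(frame, f"rendered:{frame}")

# Wire the scenario together.
completed = DataAccessService()       # common storage for results
visualisation = DataAccessService()   # stands in for the Visualisation Service store
renderer = RenderingService(completed)
mover = DataTransferService()

renderer.submit({"frames": ["f1", "f2"]})          # 1. submit job
mover.transfer(completed, visualisation, "f1")     # 3. third-party transfer
print(visualisation.get("f1"))                     # 4. return results
```

Note that the transfer is third-party in the sense that the mover copies data between the two data services on the client's behalf; per-frame transfers model streaming, while transferring all frames at the end models batch delivery.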

Data Storage – Bringing data online

[Diagram: the Customer asks the Data Storage Service to make files online; files move from nearline storage to online storage, are read via the Transfer Service, and are later retired back to nearline storage.]

The interfaces in the different steps of the scenario are as follows:
1. The files are made available online by the Data Storage Service – section 8.8 "SRM Interfaces".
2. The data are read through an appropriate interface, such as the Transfer Service – section 6 "Data Transfer".
3. The online attribute of the files may expire, and they can then be retired to nearline storage – section 8.8 "SRM Interfaces".
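The bring-online/retire lifecycle can be sketched as below. This is a toy model, not the SRM interface itself: the `DataStorageService` class, its method names and the lifetime-counting logic are assumptions made for illustration.

```python
# Illustrative sketch of the SRM-style lifecycle in this scenario
# (bring online, read, retire to nearline); the API is hypothetical.

class DataStorageService:
    def __init__(self, nearline):
        self.nearline = dict(nearline)  # files resident on nearline storage
        self.online = {}                # files pinned online: name -> remaining lifetime

    def bring_online(self, name, lifetime):
        # Step 1: make the file available online for `lifetime` time units.
        self.online[name] = lifetime

    def read(self, name):
        # Step 2: read via a transfer interface; only online files are readable.
        if name not in self.online:
            raise KeyError(f"{name} is not online")
        return self.nearline[name]

    def tick(self):
        # Step 3: expire online pins; expired files retire to nearline only.
        self.online = {n: t - 1 for n, t in self.online.items() if t > 1}

srm = DataStorageService({"run42.dat": b"payload"})
srm.bring_online("run42.dat", lifetime=2)
assert srm.read("run42.dat") == b"payload"   # readable while pinned online
srm.tick(); srm.tick()                       # the pin expires...
# ...and the file is now nearline-only again
```

The key design point the slide makes is that "online" is a temporary attribute of a file, not a separate copy the client must manage: expiry retires the file automatically.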

Data Replication – 1

[Diagram: Customer 1 registers data with the Replication Service; copies are moved between Data Storage 1 and Data Storage 2 by the Data Transfer Service; Customer 2 finds data through a Registry Service and accesses it through Data Access Services 1 and 2.]

Example interfaces between the consumer/client, the services and the data storage elements, with references to the relevant sections of the OGSA Data Architecture document [OGSA Data Arch]. The interfaces in the different steps of the scenario are as follows:
1. A data resource is:
   a. registered with a replicating data service (details such as creation time, access control, etc. would also be included) – section 10.2 "Creating Replicas";
   b. entered by the replication service into a replica catalogue – section 10.3 "Discovering Replicas".
2. The replication service uses a data transfer service to move copies of this data to different locations and tracks which data is kept where – section 6 "Data Transfer".
3. Clients access the catalogue to find the data resource, or to return a list of resources that satisfy certain Quality of Service (QoS) requirements – section 10.3 "Discovering Replicas".
4. Clients then access the stores either directly or indirectly – section 7 "Data Access", i.e. any suitable data access interface such as DAIS or ByteIO [ByteIO].
5. Changes to the data are notified to the replication service – section 3.4 "Notification of Events".
6. Updates then occur between the data services to synchronize the replicas – section 6 "Data Transfer".

Here it has been assumed that a catalogue as described in section 12 "Metadata Catalogue & Registries" has been used. But in steps 1b and 3, if a DAIS-exposed database were employed, for example, then DAIS [WS-DAI] updates or queries could also be performed.
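Steps 1-6 can be condensed into a small sketch. The `ReplicationService` class below, with its in-memory catalogue and stores, is an assumption for illustration; it is not an API defined by the architecture document.

```python
# Minimal sketch of replica registration, discovery and synchronisation
# (steps 1-6 above); the ReplicationService API shown here is hypothetical.

class ReplicationService:
    def __init__(self):
        self.catalogue = {}   # logical name -> list of locations (replica catalogue)
        self.stores = {}      # location -> {logical name: data}

    def add_store(self, location):
        self.stores[location] = {}

    def register(self, name, data, location):
        # Steps 1a/1b: register the resource and enter it in the catalogue.
        self.stores[location][name] = data
        self.catalogue[name] = [location]

    def replicate(self, name, location):
        # Step 2: copy the data to another location and track where it is kept.
        src = self.catalogue[name][0]
        self.stores[location][name] = self.stores[src][name]
        self.catalogue[name].append(location)

    def find(self, name):
        # Step 3: clients query the catalogue for replica locations.
        return self.catalogue[name]

    def notify_update(self, name, data):
        # Steps 5/6: a change is notified, then pushed to every replica.
        for loc in self.catalogue[name]:
            self.stores[loc][name] = data

svc = ReplicationService()
for loc in ("site-a", "site-b"):
    svc.add_store(loc)
svc.register("dataset1", "v1", "site-a")
svc.replicate("dataset1", "site-b")
svc.notify_update("dataset1", "v2")
print(svc.find("dataset1"))            # ['site-a', 'site-b']
```

Step 4 (direct client access to the stores via DAIS or ByteIO) is deliberately left out of the sketch, since it bypasses the replication service entirely.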

Data Replication – 2

[Diagram: the same scenario as Data Replication – 1, but a primary Data Service sits in front of the Replication Service, Data Transfer Service and Replica Catalogue Service; customers register, find and access data through it.]

In this case the primary Data Service virtualises and simplifies all of the data replication functionality behind it.

Joint OGSA Data + EMS Scenario

The steps of this simple scenario are as follows:
1. Submit the job to a BES container (the JSDL contains execution and data staging information).
2. Use the data transfer service to do the required data staging.
3. Run the executable on the BES container with the input data.
4. Stage the resulting output data back to Data Service 1.
5. Delete the staged input data at the BES container.
6. Delete the staged output data at the BES container.
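The six steps above can be sketched as follows. The `BESContainer`/`TransferService` objects and the dictionary layout standing in for a JSDL document are illustrative assumptions, not the BES or JSDL specifications.

```python
# Sketch of the six-step staging flow; the object and field names here
# stand in for BES/JSDL concepts and are assumptions for illustration.

class TransferService:
    def copy(self, src_store, dst_store, name):
        dst_store[name] = src_store[name]

class BESContainer:
    def __init__(self):
        self.scratch = {}   # the container's local staging area

    def run(self, jsdl, data_service, mover):
        # 1-2. stage in the input files declared in the JSDL
        for f in jsdl["stage_in"]:
            mover.copy(data_service, self.scratch, f)
        # 3. run the executable (modelled as a trivial transformation)
        self.scratch[jsdl["output"]] = "result-of-" + jsdl["executable"]
        # 4. stage the output data back to Data Service 1
        mover.copy(self.scratch, data_service, jsdl["output"])
        # 5-6. delete the staged input and output copies at the container
        for f in jsdl["stage_in"] + [jsdl["output"]]:
            del self.scratch[f]

data_service_1 = {"input.dat": "bytes"}
bes = BESContainer()
bes.run({"executable": "sim", "stage_in": ["input.dat"], "output": "out.dat"},
        data_service_1, TransferService())
print(data_service_1["out.dat"])       # result-of-sim
print(bes.scratch)                     # {} -- staged copies cleaned up
```

The cleanup in steps 5-6 is what makes the container stateless between jobs: only Data Service 1 retains the result.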

Data Staging

[Diagram of the previous slide's steps: the client submits a JSDL script to the BES container (step 1); the Data Transfer Service stages the input data from Data Service 1 to the container (step 2) and the output data from the container to Data Service 2 (step 4); the BES container runs the executable and saves the resulting output data (step 3); the staged copies of the input and output data are then deleted (steps 5 and 6).]

Personal Data Service

[Diagram: Customer 1 works via Local Cache Service 1 at site 1 and Local Cache Service 2 at site 2; the Personal Data Service uses a Registry Service, a Global Name Resolver Service and Data Services 1-3 to maintain the named space.]

1. The Customer at site 1 locates data by using, for example, a Registry Service.
2. The Customer interacts with the Personal Data Service (via their Local Cache Service) in order to create a personal collection of data (a named space).
3. The Personal Data Service uses a Global Name Resolver Service in order to name the customer's collection of data.
4. The Customer at site 1 uses Local Cache Service 1 in order to build and modify their personal collection of data.
5. On terminating the session at site 1, Local Cache Service 1 updates the Personal Data Service.
6. The Customer moves to site 2 and starts working, wanting to use, change and add to their personal collection of data. This is done via Local Cache Service 2.
7. On terminating the session at site 2, Local Cache Service 2 updates the Personal Data Service.
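The cross-site flow can be sketched as below. The naming scheme (`pds:` prefix) and the cache API are invented for this sketch; the real naming and synchronisation mechanisms are left open by the scenario.

```python
# Sketch of the personal-data-service flow across two sites; the naming
# and cache APIs here are assumptions made for illustration.

class GlobalNameResolver:
    def __init__(self):
        self.names = {}

    def assign(self, owner):
        # Step 3: mint a globally resolvable name for the collection.
        name = f"pds:{owner}"       # hypothetical naming scheme
        self.names[name] = owner
        return name

class PersonalDataService:
    def __init__(self, resolver):
        self.resolver = resolver
        self.collections = {}

    def create_collection(self, owner):
        # Step 2: create a named space for the customer.
        name = self.resolver.assign(owner)
        self.collections[name] = {}
        return name

    def update(self, name, items):
        # Steps 5/7: a local cache pushes its changes back.
        self.collections[name].update(items)

class LocalCacheService:
    """Per-site working copy of the named space (steps 4 and 6)."""
    def __init__(self, pds, name):
        self.pds, self.name = pds, name
        self.cache = dict(pds.collections[name])   # pull current state

    def put(self, key, value):
        self.cache[key] = value

    def close_session(self):
        self.pds.update(self.name, self.cache)     # sync back on exit

pds = PersonalDataService(GlobalNameResolver())
space = pds.create_collection("customer1")

site1 = LocalCacheService(pds, space)   # working at site 1
site1.put("notes.txt", "draft")
site1.close_session()

site2 = LocalCacheService(pds, space)   # the customer moves to site 2
site2.put("notes.txt", "final")
site2.close_session()
print(pds.collections[space])           # {'notes.txt': 'final'}
```

The design point is that the caches synchronise only at session end, so site 2 sees site 1's changes because the Personal Data Service, addressed by its global name, is the single point of record.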

Questions?

Full Copyright Notice

Copyright (C) Open Grid Forum 2006. All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. The limited permissions granted above are perpetual and will not be revoked by the OGF or its successors or assignees.