Thank you for your kind introduction. My name is Hiroyuki Kanazawa. I will be talking about a 'Problem Solving Environment based on Grid Services', which is called NAREGI-PSE.

NAREGI-PSE: Problem Solving Environment with ACS based on Grid Services
S. Kawata 1), H. Usami 2), H. Kanazawa 3)4), M. Yamada 3), Y. Miyahara 3), Y. Itou 3), Y. Hayase 5), S. Hwang 2)
1) Utsunomiya University  2) National Institute of Informatics  3) FUJITSU Limited  4) Kanazawa University  5) Toyama National College of Maritime Technology

Intellectual Property Policy
I acknowledge that participation in GGF16 is subject to the GGF Intellectual Property Policy.

Intellectual Property Notices Note Well: All statements related to the activities of the GGF and addressed to the GGF are subject to all provisions of Section 17 of GFD-C.1 (.pdf), which grants to the GGF and its participants certain licenses and rights in such statements. Such statements include verbal statements in GGF meetings, as well as written and electronic communications made at any time or place, which are addressed to:
- the GGF plenary session,
- any GGF working group or portion thereof,
- the GFSG, or any member thereof on behalf of the GFSG,
- the GFAC, or any member thereof on behalf of the GFAC,
- any GGF mailing list, including any working group or research group list, or any other list functioning under GGF auspices,
- the GFD Editor or the GWD process.
Statements made outside of a GGF meeting, mailing list or other function, that are clearly not intended to be input to a GGF activity, group or function, are not subject to these provisions.

Excerpt from Section 17 of GFD-C.1: Where the GFSG knows of rights, or claimed rights, the GGF Secretariat shall attempt to obtain from the claimant of such rights a written assurance that, upon approval by the GFSG of the relevant GGF document(s), any party will be able to obtain the right to implement, use and distribute the technology or works when implementing, using or distributing technology based upon the specific specification(s) under openly specified, reasonable, non-discriminatory terms. The working group or research group proposing the use of the technology with respect to which the proprietary rights are claimed may assist the GGF Secretariat in this effort. The results of this procedure shall not affect advancement of a document, except that the GFSG may defer approval where a delay may facilitate the obtaining of such assurances. The results will, however, be recorded by the GGF Secretariat and made available. The GFSG may also direct that a summary of the results be included in any GFD published containing the specification.

GGF Intellectual Property Policies are adapted from the IETF Intellectual Property Policies that support the Internet Standards Process.

Introduction NAREGI-PSE Conclusions Outline This is the outline of my talk. First, I will introduce NAREGI, the grid research and development project. Next, I will talk about NAREGI-PSE. NAREGI-PSE is a part of NAREGI project. And it is now being developed for first version of the software, The version is called ‘beta version’. It will be released in the spring of 2006. About NAREGI-PSE, its objectives, its usage or typical use case will be discussed. Then I will try to illustrate its mechanism, focusing structure of data. Introduction “NAREGI”, the grid research and development project NAREGI-PSE Use Case NAREGI-PSE and ACS Conclusions

Research Themes in NAREGI
These are the grid middleware research themes of NAREGI at the National Institute of Informatics. They are also the subgroups of NII; each subgroup is called a 'work package' and is numbered from 1 to 6. For example, WP1 researches lower and middle-tier middleware, such as the scheduler.
WP1: Lower and Middle-Tier Middleware for Resource Management (Scheduler, Broker, Auditing, Accounting, Grid Virtual Machine)
WP2: Grid Programming Middleware (Grid RPC, Grid MPI)
WP3: User-Level Grid Tools, including PSE <= Gateway Services (Workflow GUI, Visualization Tools, PSE)
WP4: Data Grid Environment (Grid-wide file system)
WP5: Networking, Security & User Management (Traffic Measurement, Optimal Routing Algorithms, Robust TCP/IP Protocols, Grid Security Infrastructure, Virtual Organization)
WP6: Nanoscience Applications using grid middleware (Parallel Structure, Granularity, Resource Requirements, Coupled Simulation)

NAREGI Software Stack
This slide shows the middleware stack of NAREGI; each research group advances its research and development while closely cooperating with the others. At the bottom are the computing resources of each site. The computing resources are connected by a high-throughput, nationwide network for research and education called SuperSINET. The project's middleware test bed depends on this network, and the target of the project is to develop middleware that operates on it.
Let me point out the software components that are closely related to PSE. First, Grid Workflow is a visual tool for seamlessly preparing, submitting, and querying distributed jobs running on remote computing resources. It handles programs and data explicitly, and complex workflow descriptions such as loops and conditional branches are supported. Next, the Super Scheduler, usually called SS, is a scheduling system for large-scale control and management of a wide variety of resources shared by different organizations in the grid environment. The features of SS are dynamic resource allocation and reservation-based scheduling. Finally, the Distributed Information Service, usually called IS, is a secure, scalable resource information management service.
Stack (top to bottom): Gateway Services; WP6: Grid-Enabled Nano-Applications; WP3: Grid Visualization; WP3: PSE; WP2: Grid Programming (Grid RPC, Grid MPI); WP4: Data Grid Environment; WP3: Grid Workflow; WP1: Super Scheduler; WP1: Distributed Information Service (OGSA); WP1: Grid VM; WP5: High-Performance & Secure Grid Networking; SuperSINET; computing resources at NII, IMS, research organizations, and universities.

Scenario for Multi-site MPI Job Execution
This picture shows one of the applications researched and developed by the NAREGI project operating with the grid middleware. I think you can see from its complexity how much middleware is involved.
Figure (summary): a multi-physics MPI simulation couples a RISM solvent-simulation job and an FMO solute-simulation job via GridMPI across Site A, Site B (an SMP machine) and Site C (a PC cluster), with different sub-jobs on each site. Steps: a: user registration (CA); b: deployment (PSE); c: edit Grid Workflow from the application requirement definitions (RISM source, FMO source); 1: submission; 2: resource discovery and monitoring (Information Service); 3: negotiation (Super Scheduler, Agreement, Co-Allocation); 4: reservation (local schedulers); 5: IMPI server starts; 6: MPI jobs start (GridVM); 7: MPI initialization; 8: visualization; 9: accounting.

NAREGI-PSE Concept
Provide a framework to distribute users' applications on the grid:
- Users can register, deploy and retrieve applications by using NAREGI-PSE.
- Application developers can distribute their applications to research communities without hard work.
- Application users do not need to care about the grid when using applications.
Focus on legacy applications:
- Deploy application binaries for specific targets.
- Compile source programs, if needed.

Use Case I (Registration, Compilation and Deployment)
Now I am talking about the usage of NAREGI-PSE. This figure shows application registration to PSE, compilation, and then deployment of the application onto compute nodes.
Registration:
- Upload files (e.g., source code/executables, compile script, post-process script, initial input files, etc.) to the PSE application pool.
- Upload information (e.g., description, system requirements, etc.) associated with the uploaded application to the application pool.
Compilation (if needed):
- Select an application, and then select a compile server matching the resource requirements.
- PSE transfers the necessary files (e.g., source code) from the application pool to the compile server.
- PSE compiles and verifies them on the server.
- PSE transfers the resulting files (e.g., executables) from the compile server back to the application pool.
Deployment:
- Select an application and servers that meet the system requirements for application deployment.
- PSE transfers the executable in the application pool to the selected servers.
- (Optional) Execute a user-defined post-process to configure and/or verify the deployment on each server.
- PSE registers information on the deployed servers with the Information Service.
Figure (summary): the application developer enters application information and uploads files (source programs, compile script, etc.) to the application pool (= ACS Application Repository + application information); compile and deployment information flows to the Information Service along with resource information; application files (binaries) are deployed to Server#1, Server#2 and Server#3, where the compile and post-process steps may succeed ('OK!') or fail ('NG!') per server.

Application Deployment Information
The Distributed Information Service of NAREGI is based on the Common Information Model (CIM), which is specified by the Distributed Management Task Force (DMTF). When an application is deployed, NAREGI-PSE stores the deployment information in the Information Service.
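Purely as an illustration of what such a record carries (the actual NAREGI/CIM schema is not shown on this slide, and every element name below is hypothetical), a deployment entry might look like this:

<!-- Hypothetical sketch: one deployment record per (application, server) pair. -->
<!-- Element names are illustrative, not the actual NAREGI/CIM mapping.        -->
<ApplicationDeployment>
  <ApplicationName>RISM-solver</ApplicationName>
  <Version>1.0</Version>
  <DeployedHost>server1.example.org</DeployedHost>
  <InstallPath>/grid/apps/rism/bin/rism</InstallPath>
  <DeployedAt>2005-11-15T10:30:00Z</DeployedAt>
</ApplicationDeployment>

The point of such a record is that the Super Scheduler can later restrict candidate resources to hosts where the application is already deployed.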

Use Case II (Retrieval and Execution)
Application Retrieval:
- Retrieve an application using the GUI.
- Import the information of the selected application (system requirements as JSDL, etc.) from the application pool into a workflow icon of Grid Workflow.
Execution:
- Compose a workflow job from the registered workflow icon.
- PSE submits the job to the Super Scheduler.
- The Super Scheduler dispatches resources, referring to the resource information provided by the Information Service.
Figure (summary): the application user retrieves the application from the application pool (ACS + application information, system requirements, etc.), composes a workflow in Grid Workflow, and requests workflow execution (job submission); the Super Scheduler consults the application deployment information and dynamic resource information in the Information Service and executes the job on a suitable resource (Server#1: deployed, load high; Server#2: not deployed, load low; Server#3: deployed, load low).

Application Pool with Application Contents Services (ACS)
In this use case, the Application Producer creates application archives in an application repository using the application repository interface; the Application Consumer then gets application contents from the application archives. The ACS working group in the Global Grid Forum released draft version one of the ACS specification last November. We started developing PSE in August, based on a working draft of the ACS specification, and we will later modify our ACS implementation to follow the released draft document.
NAREGI-PSE stores application files in the Application Repository. Applications with different resource requirements are stored as separate application archives.
Application Pool = Application Information including Resource Requirements (JSDL) + ACS (Application Repository). An Application Archive (AA) contains a Signature, an AA Descriptor, and the application contents; through the application pool interface the Application Producer can Create, Read, Update and Delete archives, and the Application Consumer reads contents from them.
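As an illustrative sketch only (the normative AA Descriptor schema from the ACS draft is not reproduced here, and the element names below are hypothetical), an archive entry in the application pool might bundle metadata and contents like this:

<!-- Illustrative sketch of a NAREGI-PSE application archive entry.       -->
<!-- Element names are hypothetical, not the normative ACS AAD schema.    -->
<ApplicationArchive>
  <Descriptor>
    <Name>FMO-solver</Name>
    <Description>Fragment molecular orbital solute simulation</Description>
    <ResourceRequirements>jsdl/fmo-requirements.xml</ResourceRequirements>
  </Descriptor>
  <Contents>
    <File>src/fmo.tar.gz</File>
    <File>scripts/compile.sh</File>
    <File>scripts/post-process.sh</File>
  </Contents>
  <Signature>...</Signature>
</ApplicationArchive>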

Job Submission Description Language (JSDL)
This slide shows the Job Submission Description Language in NAREGI. The left square shows an overview of the JSDL specification version 1; the right table shows the JSDL of NAREGI. The JobIdentification element contains the job name, description and so on. The Application element contains the application name, arguments, working directory and so on. The Resources element describes hardware resources such as the CPU architecture. The DataStaging element contains FileName, Source, Target and so on.
The main differences in NAREGI are in the Application element and the DataStaging element. The Application element is extended to describe MPI parallel applications:
- POSIXApplication
- MPIApplicationSpecific
- ApplicationResource
- CheckPointableApplication
The DataStaging element is not used, because data staging is described in the Workflow Modeling Language in NAREGI.
JSDL overview in the GGF drafts:
<JobDefinition>
  <JobDescription>
    <JobIdentification ... />
    <Application ... />
    <Resources ... />
    <DataStaging ... />
  </JobDescription>
</JobDefinition>
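To make the structure concrete, here is a small job description filled in along the lines of the GGF draft (namespace prefixes omitted, as in the overview above). The values are invented, and the content of MPIApplicationSpecific is a hypothetical example of the NAREGI extension, whose actual schema is not shown on this slide:

<!-- Illustrative JSDL sketch based on the GGF draft structure.            -->
<!-- MPIApplicationSpecific content is hypothetical, not the NAREGI schema. -->
<JobDefinition>
  <JobDescription>
    <JobIdentification>
      <JobName>fmo-solute-simulation</JobName>
    </JobIdentification>
    <Application>
      <POSIXApplication>
        <Executable>/grid/apps/fmo/bin/fmo</Executable>
        <Argument>input.dat</Argument>
        <WorkingDirectory>/grid/work/fmo</WorkingDirectory>
      </POSIXApplication>
      <MPIApplicationSpecific>
        <ProcessCount>32</ProcessCount>  <!-- hypothetical extension element -->
      </MPIApplicationSpecific>
    </Application>
    <Resources>
      <CPUArchitecture>
        <CPUArchitectureName>x86</CPUArchitectureName>
      </CPUArchitecture>
      <TotalCPUCount>
        <Exact>32</Exact>
      </TotalCPUCount>
    </Resources>
    <!-- DataStaging omitted: in NAREGI it is described in the workflow. -->
  </JobDescription>
</JobDefinition>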

Resource Requirements
Resource requirements of applications in NAREGI-PSE are described based on the Job Submission Description Language (JSDL). PSE refers to the resource requirements of an application to determine which nodes/systems are used for compilation and deployment. Application users can copy the JSDL in PSE into Grid Workflow, where they can modify it to match their specific purposes.

Application Model of NAREGI-PSE
Common application description: initial data files; source files (code, makefile, etc.); executables; resource requirements (JSDL); configuration description.
With source code: executables; resource requirements (JSDL); configuration description; build description (shell script).
Without source code (e.g., an executable binary is registered, or a pre-installed application): executables; resource requirements (JSDL); configuration description.
Workflow: a workflow described in WFML. (WFML is an abbreviation of Work Flow Modeling Language; a WFML document consists of the JSDL of each application and the relations between the applications.)
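Since the actual WFML schema is not shown on this slide, the following is only a hypothetical sketch of the stated idea — per-application JSDL plus the relations between applications bundled into one workflow document; all element names are invented for illustration:

<!-- Hypothetical WFML-style sketch; element names are illustrative only. -->
<Workflow>
  <Activity id="rism">
    <JobDefinition> <!-- JSDL for the RISM application --> </JobDefinition>
  </Activity>
  <Activity id="fmo">
    <JobDefinition> <!-- JSDL for the FMO application --> </JobDefinition>
  </Activity>
  <Link from="rism" to="fmo"/>  <!-- relation between the two applications -->
</Workflow>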

Application Contents Services (ACS)
As on the previous slide, NAREGI-PSE stores application files in the Application Repository, and applications with different resource requirements are stored as separate application archives. The Application Pool is the Application Information, including Resource Requirements (JSDL), plus the ACS Application Repository. An Application Archive (AA) consists of a Signature, an AA Descriptor, and the application contents. The ACS (Application Repository) includes: source + shell script, binaries, post script, and WF (workflow).

NAREGI-PSE and ACS
NAREGI-PSE stores application files in the Application Repository as ACS Application Archives (ACS-AAs). An application with a different resource requirement is stored as a separate application archive. AA relations may be important for describing the relation between a source code and binaries: for example, one source-code AA related to the AAs holding binary executables A, B and C.
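How such a relation would be encoded is not specified on this slide; the sketch below is purely hypothetical, showing one way a source AA could be linked to per-platform binary AAs:

<!-- Hypothetical sketch of an AA relation; names and the RelatedArchive -->
<!-- element are illustrative, not from the ACS draft.                   -->
<ApplicationArchive id="aa-rism-src">
  <Descriptor><Name>RISM (source)</Name></Descriptor>
</ApplicationArchive>
<ApplicationArchive id="aa-rism-x86">
  <Descriptor><Name>RISM (binary, executable A)</Name></Descriptor>
  <RelatedArchive relation="builtFrom" ref="aa-rism-src"/>
</ApplicationArchive>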

Conclusions
The beta version of NAREGI will be released in the spring of 2006. NAREGI-PSE is implemented with ACS. The beta version of NAREGI-PSE enables users to register their own applications, to compile and deploy the applications on the grid, to retrieve the application information, and to copy the application information to Grid Workflow for execution.