Grid Activities in Japan - Overview of NAREGI Project
Kenichi Miura, Ph.D., Project Leader, NAREGI Project; Professor, National Institute of Informatics

Presentation transcript:

1 Grid Activities in Japan - Overview of NAREGI Project
Kenichi Miura, Ph.D., Project Leader, NAREGI Project; Professor, National Institute of Informatics
February 16, 2006, CHEP2006, Mumbai, India

2 National Research Grid Initiative (NAREGI) Project: Overview
- Started as an R&D project funded by MEXT (FY2003-FY2007), with a budget of about 2 billion yen (~US$17M) in the first fiscal year
- One of the Japanese Government's Grid computing projects (ITBL, Visualization Grid, GTRC, BioGrid, etc.)
- Collaboration of national laboratories, universities and industry in the R&D activities (IT and nano-science applications)
- NAREGI testbed computer resources (FY2003)
MEXT: Ministry of Education, Culture, Sports, Science and Technology

3 National Research Grid Initiative (NAREGI) Project: Goals
(1) To develop a Grid software system (R&D in Grid middleware and upper layer) as the prototype of a future Grid infrastructure for scientific research in Japan
(2) To provide a testbed to prove that a high-end Grid computing environment (100+ Tflop/s expected by 2007) can be practically utilized in nano-science applications over Super SINET
(3) To participate in international collaboration (U.S., Europe, Asia-Pacific)
(4) To contribute to standardization activities, e.g., GGF

4 NAREGI Research Organization and Collaboration (organization chart) - Project Leader: Dr. K. Miura, NII

5 Participating Organizations
- National Institute of Informatics (NII) (Center for Grid Research & Development)
- Institute for Molecular Science (IMS) (Computational Nano-science Center)
- Universities and national laboratories (joint R&D): AIST, Titech, Osaka-U, Kyushu-U, Kyushu Inst. Tech., Utsunomiya-U, etc.
- Research collaboration: ITBL Project, national supercomputing centers, KEK, NAO, etc.
- Participation from industry (IT, chemical/materials, etc.)
- Consortium for Promotion of Grid Applications in Industry

6 NAREGI Software Stack (Globus, Condor, UNICORE → OGSA)
- Grid-Enabled Nano-Applications
- Grid PSE, Grid Programming (Grid RPC, Grid MPI), Grid Visualization, Grid Workflow
- Super Scheduler, Grid VM, Distributed Information Service, Data Grid, Packaging
- High-Performance & Secure Grid Networking (SuperSINET)
- Computing resources: NII, IMS, research organizations, etc.

7 R&D in Grid Software and Networking Area (Work Packages)
- WP-1: Lower and Middle-Tier Middleware for Resource Management: Matsuoka (Titech), Kohno (ECU), Aida (Titech)
- WP-2: Grid Programming Middleware: Sekiguchi (AIST), Ishikawa (AIST)
- WP-3: User-Level Grid Tools & PSE: Usami (NII), Kawata (Utsunomiya-U)
- WP-4: Data Grid Environment: Matsuda (Osaka-U)
- WP-5: Networking, Security & User Management: Shimojo (Osaka-U), Oie (Kyushu Tech.), Imase (Osaka-U)
- WP-6: Grid-Enabling Tools for Nanoscience Applications: Aoyagi (Kyushu-U)

8 WP-3: User-Level Grid Tools & PSE
- Grid PSE: deployment of applications on the Grid; support for execution of deployed applications
- Grid Workflow: workflow language independent of specific Grid middleware; GUI in task-flow representation
- Grid Visualization: remote visualization of massive data distributed over the Grid; general Grid services for visualization

9

10 WP-4: NAREGI Data Grid β-1 Architecture
Components: Grid Workflow, Grid-wide Data Sharing Service (metadata management, data access management, data resource management), and a Grid-wide file system spanning distributed file nodes.
Typical flow: import data into the workflow; place and register data on the Grid; assign metadata to the data; store the data into distributed file nodes.

11 Collaboration in Data Grid Area
- High Energy Physics: KEK, EGEE
- Astronomy: National Astronomical Observatory (Virtual Observatory)
- Bio-informatics: BioGrid Project

12 WP-6: Adaptation of Nano-science Applications to Grid Environment
RISM (Reference Interaction Site Model, solvent distribution analysis) running at Site A is coupled with FMO (Fragment Molecular Orbital method, electronic structure analysis) running at Site B through Grid middleware (MPICH-G2/Globus, GridMPI over SuperSINET), with data transformation between the different meshes, to compute the electronic structure in solution.

13 Scenario for Multi-site MPI Job Execution
At Site A, the user prepares the RISM and FMO sources with the Workflow Tool and PSE (a: registration, b: deployment, c: edit) and submits the workflow (1: submission); a CA provides the credentials. The Super Scheduler queries the Distributed Information Service, negotiates agreements with the sites (3: negotiation/agreement), makes reservations (4: reservation), and co-allocates the sub-jobs (5: sub-job; 6: co-allocation and submission): the RISM MPI job goes to Site B (SMP machine, 64 CPUs) and the FMO job to Site C (PC cluster, 128 CPUs), each through GridVM and the local scheduler. An IMPI server couples the two jobs over GridMPI, monitoring and accounting information is collected (10), and the input/output files are passed to Grid Visualization.
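
The coupling in this scenario can be pictured with a minimal MPI sketch (illustrative only, not NAREGI source code). It assumes that GridMPI/IMPI has already joined the two co-allocated jobs into a single MPI_COMM_WORLD, and that a hypothetical environment variable APP_IS_FMO, set by the launcher, tells each process which application it belongs to.

```c
/* Minimal sketch of the RISM-FMO coupling, assuming GridMPI/IMPI presents
 * both co-allocated jobs as one MPI_COMM_WORLD.  APP_IS_FMO and the mesh
 * size are illustrative assumptions, not part of NAREGI. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

enum { APP_RISM = 0, APP_FMO = 1 };
#define MESH_SIZE 1024

int main(int argc, char **argv)
{
    int world_rank, world_size, color, app_rank;
    MPI_Comm app_comm;                    /* intra-application communicator */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Which application this process belongs to (decided at launch time). */
    color = getenv("APP_IS_FMO") ? APP_FMO : APP_RISM;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &app_comm);
    MPI_Comm_rank(app_comm, &app_rank);

    /* Find the leader (lowest world rank) of the other application. */
    int *colors = malloc(world_size * sizeof(int));
    MPI_Allgather(&color, 1, MPI_INT, colors, 1, MPI_INT, MPI_COMM_WORLD);
    int peer_leader = -1;
    for (int r = 0; r < world_size; r++)
        if (colors[r] != color) { peer_leader = r; break; }
    free(colors);

    double mesh[MESH_SIZE] = { 0.0 };     /* data exchanged between the codes */

    for (int step = 0; step < 10; step++) {
        /* ... each application advances its own solver on app_comm ... */

        /* The two leaders exchange the transformed mesh across sites;
         * GridMPI/IMPI carries this traffic over the wide-area network. */
        if (app_rank == 0 && peer_leader >= 0)
            MPI_Sendrecv_replace(mesh, MESH_SIZE, MPI_DOUBLE,
                                 peer_leader, 0, peer_leader, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Distribute the received data within each application. */
        MPI_Bcast(mesh, MESH_SIZE, MPI_DOUBLE, 0, app_comm);
    }

    if (world_rank == 0)
        printf("coupled run finished on %d total ranks\n", world_size);

    MPI_Comm_free(&app_comm);
    MPI_Finalize();
    return 0;
}
```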

14 NAREGI α-Version Middleware (2004-5)
1. Resource Management (WP1): Unicore/Globus/Condor as "skeletons"; OGSA-EMS job management, brokering and scheduling (first OGSA-EMS-based implementation in the world); coupled simulation on the Grid (co-allocation/scheduling); WS(RF)-based monitoring/accounting federation
2. Programming (WP2): high-performance, standards-based GridMPI (MPI-1/2, IMPI), highly tuned for the Grid environment (esp. collectives); Ninf-G, scientific RPC for Grids (GridRPC, the GGF programming standard)
3. Applications Environment (WP3): application discovery and installation management (PSE); Workflow Tool for coupled application interactions; WSRF-based large-scale Grid visualization (terabyte interactive visualization)
4. Networking and Security (WP5): production-grade CA (drop-in replacement for SimpleCA); out-of-band real-time network monitoring/control
5. Grid-enabling Nano Simulations (WP6): framework for coupled simulations; large-scale coupled simulation of proteins in a solvent on a Grid

15 Highlights of NAREGI Beta ( )
- GT4/WSRF-based "full" OGSA-EMS incarnation: OGSA-EMS/RSS WSRF components with no legacy (pre-WS) Unicore/Globus2 dependencies; WS-Agreement brokering and co-allocation; JSDL-based job submission to WS-GRAM; support for more OSes (AIX, Solaris, etc.) and batch queue systems
- Sophisticated VO support for identity/security/monitoring/accounting (extensions of VOMS/MyProxy, WS-* adoption)
- WS-based application deployment support
- Grid-wide file system (Gfarm) and other data management tools
- Complex workflows for various coupled simulations
- Overall stability, speed and functional improvements for real deployment

16 Roadmap of NAREGI Grid Middleware (FY2003-FY2007)
- UNICORE-based R&D framework (early phase): prototyping of NAREGI middleware components; development and integration of the α-version middleware; evaluation of the α version on the NII-IMS testbed; application of component technologies to nano applications and evaluation
- OGSA/WSRF-based R&D framework (later phase): development and integration of the β-version middleware; evaluation of the β release by IMS and other collaborating institutes on the NAREGI wide-area testbed; deployment of the β version; development of OGSA-based middleware; verification and evaluation of Version 1
- Milestones: α version (internal), β version release, Version 1.0 release, midpoint evaluation

17 Network Topology Map of SINET/SuperSINET (Feb. 2006)
- SINET (44 nodes): 100 Mbps to 1 Gbps
- Super SINET (32 nodes): 10 Gbps
- International lines: Japan-U.S.A. 10 Gbps (to N.Y.) and 2.4 Gbps (to L.A.); Japan-Singapore 622 Mbps; Japan-Hong Kong 622 Mbps
- Number of SINET participating organizations by category: national, public and private universities, junior colleges, specialized training colleges, inter-university research institute corporations, and others ( )

18 NAREGI Phase 1 Testbed (Super-SINET, 10 Gbps MPLS): ~3000 CPUs, ~17 Tflops in total
- Center for Grid R&D (NII): ~5 Tflops
- Computational Nano-science Center (IMS): ~10 Tflops
- Other sites: Osaka Univ. BioGrid, TiTech Campus Grid, AIST SuperCluster, and small test application clusters at ISSP, Kyoto Univ., Tohoku Univ., KEK, Kyushu Univ., and AIST

19 Computer System for Grid Software Infrastructure R&D, Center for Grid Research and Development (5 Tflops, 700 GB memory)
- File Server: PRIMEPOWER 900 + ETERNUS3000 + ETERNUS LT160, 1 node / 8 CPU (SPARC64 V, 1.3 GHz), 16 GB memory, 10 TB storage, back-up max. 36.4 TB
- SMP Compute Server: PRIMEPOWER HPC2500, 1 node (UNIX, SPARC64 V 1.3 GHz / 64 CPU), 128 GB memory, 441 GB storage
- SMP Compute Server: SGI Altix3700, 1 node (Itanium2 1.3 GHz / 32 CPU), 32 GB memory, 180 GB storage
- SMP Compute Server: IBM pSeries690, 1 node (Power4 1.3 GHz / 32 CPU), 64 GB memory, 480 GB storage
- High-performance distributed-memory Compute Servers: two PRIMERGY RX200 clusters, each 128 CPUs (Xeon 3.06 GHz) + control node, InfiniBand 4X (8 Gbps) or GbE (1 Gbps) interconnect, roughly 65-130 GB memory and 9.4 TB storage per cluster
- Distributed-memory Compute Servers: Express 5800 and HPC LinuxNetworx clusters, each 128 CPUs (Xeon 2.8 GHz) + control node, GbE (1 Gbps), about 65 GB memory and 4.7 TB storage per cluster
- Internal and external networks connected through L3 switches at 1 Gbps (upgradable to 10 Gbps); connected to SuperSINET

20 Computer System for Nano-Application R&D, Computational Nano-science Center (10 Tflops, 5 TB memory)
- SMP Compute Server: 16 ways x 50 nodes (POWER4+ 1.7 GHz), multi-stage crossbar network, 3072 GB memory, 2.2 TB storage, 5.4 TFLOPS
- Distributed-memory Compute Server (4 units): 818 CPUs (Xeon 3.06 GHz) + control nodes, Myrinet2000 (2 Gbps), 1.6 TB memory, 1.1 TB storage per unit, 5.0 TFLOPS
- File Server: 16 CPUs (SPARC64 GP, 675 MHz), 8 GB memory, 30 TB storage, 25 TB back-up
- Front-end servers, CA/RA server, VPN and firewall; L3 switch at 1 Gbps (upgradable to 10 Gbps) connecting to SuperSINET and the Center for Grid R&D

21 Cyber-Science Infrastructure (CSI) for R&D
- Deployment of NAREGI middleware across universities (Hokkaido-U, Tohoku-U, Tokyo-U, NII, Nagoya-U, Kyoto-U, Osaka-U, Kyushu-U, plus Titech, Waseda-U, KEK, etc.), enabling virtual labs and live collaborations
- UPKI: national research PKI infrastructure
- SuperSINET and beyond: lambda-based academic networking backbone
- NAREGI outputs feed GeNii (Global Environment for Networked Intellectual Information) and NII-REO (Repository of Electronic Journals and Online Publications)
- Restructuring of university IT research resources, extensive on-line publication of results, a management body for education & training, industry/societal feedback, and international infrastructural collaboration

22 Cyber Science Infrastructure toward Petascale Computing (planned)
Cyber-Science Infrastructure (CSI): the IT infrastructure for academic research and education, coordinated by the NII Collaborative Operation Center (operation/maintenance of middleware, UPKI/CA, networking and contents) over the Super-SINET networking infrastructure.
- NAREGI site: R&D of the middleware (β version → V1.0 → V2.0), with feedback and delivery to the operation center
- Virtual organizations: university/national supercomputing VOs, a peta-scale system VO core site, domain-specific VOs (e.g. ITBL), and project-oriented VOs (joint projects such as nano proof-of-concept and evaluation with IMS, bio with Osaka-U, AIST, and industrial projects); VO names are tentative
- Customization, operation and maintenance delivered to the domain-specific and project-oriented VOs
- R&D and operational collaboration with IMS, AIST, KEK, NAO, etc.; international collaboration with EGEE, UNIGRIDS, TeraGrid, GGF, etc.

23 Summary
- In the NAREGI project, seamless federation of heterogeneous resources is the primary objective.
- Computations in nano-science/technology applications over the Grid are to be promoted, including participation from industry.
- Data Grid features have been added to NAREGI since FY2005.
- NAREGI Grid middleware is to be adopted as one of the important components of the new Japanese Cyber Science Infrastructure framework.
- The NAREGI project will provide the VO capabilities for the national peta-scale computing project.
- International co-operation is essential.

24 Additional Slides

25 Summary of NAREGI Project
NAREGI V1.0 provides middleware for a science Grid environment, built on the software stack of slide 6 (Grid-enabled nano-applications; Grid PSE, Grid Programming (Grid RPC, Grid MPI), Grid Visualization, Grid Workflow; Super Scheduler, Grid VM, Distributed Information Service, Data Grid, Packaging; high-performance & secure Grid networking over SuperSINET; computing resources at NII, IMS and other research organizations; Globus, Condor, UNICORE → OGSA).
Expected outcomes: Grid middleware for large computer centers; productization of general-purpose Grid middleware for scientific computing; contribution to the international scientific community and standardization; use in industry (new intellectual product development) and vitalization of industry; progress in the latest research and development (nano- and biotechnology); personnel training (IT and application engineers); and the Cyber Science Infrastructure. (MEXT: Ministry of Education, Culture, Sports, Science and Technology)

26 NAREGI Middleware Objectives
- Mid-range project target (β version): R&D on a scalable middleware infrastructure for server grids and metacomputing resource management, deployable at large computing centers, based on Unicore, Globus and Condor
- Final project target (Version 1): R&D on a resource management framework and middleware for VO hosting by the centers, based on the OGSA standard; distribute the result as high-quality open-source software

27 Features of Grid Middleware (WP-1)
- (Scalable) Super/Meta Scheduler: schedules large metacomputing jobs; "scalable", agreement-based scheduling; assumes preemptive metascheduled jobs
- (Scalable) Grid Information Service: supports multiple monitoring systems; user and job auditing and accounting; CIM-based node information schema
- GridVM (lightweight Grid virtual machine): metacomputing support; enforcement of authorization policies, sandboxing; checkpointing, fault tolerance and preemption
- Authentication Service: follows GGF-defined assurance levels; authentication mechanism across policy domains

28 UNICORE-Condor Linkage (building on the EU GRIP project): Condor-U submits Condor jobs to UNICORE resources through Condor-G and the Globus universe, while Unicore-C lets UNICORE jobs run on Condor pools.

29 UNICONDORE Architecture (shaded parts: newly developed components)
- Condor-U (Condor side): Condor commands (condor_submit, condor_q, condor_rm, ...) go through Condor-G and its Grid Manager to a GAHP server for UNICORE, which speaks the GAHP protocol for UNICORE to a UNICORE client and on to the UNICORE Gateway
- UNICORE-C (UNICORE server side): UNICORE Gateway and NJS with a TSI/Condor and IDB/Condor that hand jobs to a Condor pool; a UNICOREpro client can also submit to the same servers

30 WP-2: Grid Programming - GridRPC/Ninf-G2 (AIST/GTRC)
GridRPC: a programming model using RPC on the Grid; high-level and tailored for scientific computing (cf. SOAP-RPC); GridRPC API standardization by the GGF GridRPC WG.
Ninf-G Version 2: a reference implementation of the GridRPC API, implemented on top of Globus Toolkit 2.0 (3.0 experimental); provides C and Java APIs.
Call flow (client/server): the IDL file describing a numerical library is processed by the IDL compiler, generating a remote executable and LDIF interface information registered in MDS; the client (1) requests and (2) receives the interface information, (3) invokes the remote executable through GRAM (fork), and (4) the executable connects back to the client.
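
As a concrete illustration of the client-side programming model, here is a minimal GridRPC client sketch using the GGF GridRPC C API as implemented by Ninf-G. The server name server.example.org, the configuration file client.conf, and the remote routine mmul/mmul_double are hypothetical examples, not part of NAREGI or the Ninf-G distribution.

```c
/* Illustrative GridRPC client sketch (GGF GridRPC C API, Ninf-G style).
 * Server name, config file and remote routine are hypothetical examples. */
#include <stdio.h>
#include "grpc.h"

#define N 256

int main(int argc, char **argv)
{
    grpc_function_handle_t handle;
    grpc_error_t ret;
    static double A[N * N], B[N * N], C[N * N];

    /* Initialize the GridRPC client with its configuration file. */
    ret = grpc_initialize("client.conf");
    if (ret != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize: %s\n", grpc_error_string(ret));
        return 1;
    }

    /* Bind the handle to a remote executable registered on the server. */
    ret = grpc_function_handle_init(&handle, "server.example.org",
                                    "mmul/mmul_double");
    if (ret != GRPC_NO_ERROR) {
        fprintf(stderr, "handle_init: %s\n", grpc_error_string(ret));
        return 1;
    }

    /* Synchronous remote call; the arguments follow the IDL of the
     * remote routine (size, two input matrices, one output matrix). */
    ret = grpc_call(&handle, N, A, B, C);
    if (ret != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_call: %s\n", grpc_error_string(ret));
        return 1;
    }

    printf("C[0] = %f\n", C[0]);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```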

31 WP-2: Grid Programming - GridMPI (AIST and U-Tokyo)
GridMPI is a library which enables MPI communication between parallel systems in a Grid environment. This makes possible (a) jobs too large for a single cluster system and (b) multi-physics jobs spanning heterogeneous CPU architectures.
(1) Interoperability: IMPI (Interoperable MPI)-compliant communication protocol; strict adherence to the MPI standard in the implementation
(2) High performance: simple implementation; built-in wrapper to vendor-provided MPI libraries
Example deployment: Cluster A, running YAMPII, communicates with Cluster B through an IMPI server.
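
Since GridMPI presents the standard MPI interface, application code needs no Grid-specific calls. The following minimal sketch is an illustration (not taken from NAREGI): the same program runs unchanged whether its ranks sit on one cluster or are spread across two clusters joined through the IMPI server.

```c
/* Minimal sketch: a standard MPI program that would run unchanged under
 * GridMPI; the inter-cluster communication is handled below the MPI API. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* ranks may span both clusters */

    local = (double)rank;

    /* A collective that may cross the wide-area link; GridMPI's tuned
     * collectives aim to minimize inter-cluster traffic. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks (possibly on two clusters) = %f\n",
               size, global);

    MPI_Finalize();
    return 0;
}
```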

32 Grid PSE: Problem Solving Environment (PSE)
The PSE automatically selects appropriate machines on the Grid on which to execute an application program, before the program is run.

33 Grid Workflow
- Web-based GUI; icons represent programs or data, and lines represent execution order
- Middleware independent (the GUI does not use UNICORE features)

34 Development of Meta-applications
1) Registering components: register the resources of the components needed in a meta-application (e.g., programs, data, target computers)
2) Connecting components: specify the dependencies between components
3) Monitoring execution: monitor the execution of each component

35 Grid Visualization: Visualization of Distributed Massive Data
Image-based remote visualization: the computation servers at each site (e.g., Site A and Site B, with their file servers) run parallel visualization modules behind a service interface; server-side parallel visualization renders the computation results, and composited, compressed image data or animation files are transferred to the visualization client.
Advantages: no need for rich CPUs, memory or storage on the client; no need to transfer large amounts of simulation data; no need to merge distributed massive simulation data; light network load; visualization views on PCs anywhere; visualization in high resolution without constraint.
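
A rough sketch of the server-side rendering and image-composition idea, written with MPI (an illustration under assumptions, not the NAREGI visualization code): each render node rasterizes only the data it holds, and only a composited, compressed image ever leaves the server side. The rendering and compression routines are hypothetical placeholders.

```c
/* Illustrative server-side parallel visualization with image compositing:
 * every rank renders its local data into an RGBA sub-image, rank 0 gathers
 * the layers, composites them, and ships a compressed image to the client. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define W 1024
#define H 1024

/* Hypothetical placeholders for real rendering/encoding routines. */
static void render_local_data(unsigned char *rgba, int rank)
{
    memset(rgba, (unsigned char)(rank * 16), (size_t)W * H * 4);
}
static void send_compressed_image_to_client(const unsigned char *rgba)
{
    (void)rgba;  /* ... compress (e.g. to JPEG/PNG) and send via the service interface ... */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each node renders only the simulation data stored locally. */
    unsigned char *sub = malloc((size_t)W * H * 4);
    render_local_data(sub, rank);

    /* Gather sub-images on rank 0 (a simple gather; production systems use
     * depth-aware compositing schemes such as binary swap). */
    unsigned char *all = NULL;
    if (rank == 0)
        all = malloc((size_t)W * H * 4 * size);
    MPI_Gather(sub, W * H * 4, MPI_UNSIGNED_CHAR,
               all, W * H * 4, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Composite the gathered layers into one frame (omitted) and send
         * only the compressed image, never the raw simulation data. */
        send_compressed_image_to_client(all);
        free(all);
    }
    free(sub);
    MPI_Finalize();
    return 0;
}
```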

36 Grid Visualization: intermediate and final computed results on the distributed machines can be displayed by the Grid Visualization System.

37 Another Example of Visualization: Electronic Structure of Lysozyme in Solution

38 NAREGI WP4: Standards Employed in the Architecture
- Grid-wide data sharing: Gfarm 1.2 PL4 grid file system across distributed filesystem nodes, accessed from the computational nodes
- Metadata and data resource management: data-specific metadata DB and data resource information DB exposed through OGSA-DAI on WSRF 2.0 (Globus Toolkit, Tomcat, PostgreSQL 8.0)
- Workflow: NAREGI WFML translated to BPEL+JSDL and handed to the Super Scheduler (OGSA-RSS, with the FTS SC)
- Data staging and transfer: GridFTP; GGF-SRM planned for beta 2
- Typical flow: import data into the workflow, place data on the Grid, construct metadata, and stage data to the jobs

39 NAREGI WP4 Standards
- GGF standards we help set within a WG: Grid FileSystems WG (discussion of functionality and use-case scenarios)
- GGF and related standards we employ: OGSA-DAI, OGSA-RSS, GridFTP, WSRF 2.0, JSDL, SRM (planned for beta 2)
- Other industry standards we employ: BPEL
- Other de-facto "standards" we employ: Globus 4, Tomcat (and associated WS/XML standards)

40 Roadmaps, Future Plans
- Extension of the data sharing service based on the Grid file system: VO support based on VOMS and/or XACML (VO group permissions); shared storage resource reservation for work-resource mapping (OGSA-RSS)
- Data transfer service: stream-like data transfer for huge amounts of data (OGSA ByteIO?)
- Virtualization and/or integration of metadata: for data exchange among different storage/file systems (SRM, SRB, ...); logical namespace service (OGSA RNS, WS-Name)

41 WP-5: Network Measurement, Management & Control for Grid Environment
- Traffic measurement on SuperSINET; multi-point real-time measurement
- Optimal QoS routing based on user policies and network measurements; dynamic bandwidth control
- Robust TCP/IP control for Grids
- Grid CA and user Grid account management and deployment
- Architecture: a Grid network management server (with network information and user policy information DBs) works with the measurement and network control entities, the super-scheduler, and Grid applications over high-speed managed networks

42 WP-5: NAREGI-CA Features
- Compliance with the basic security level of GGF: independent Registration Authority (RA) server; practical CP/CPS template
- License ID management: transfers authentication responsibility to the local RA
- Dual interfaces for certificate requests: web and command-line enrollment
- Grid operation extensions: batch issue of certificates from the command line; assistance with grid-mapfile and UUDB creation
- Extensions in Version 2.0: XKMS interface for Web services; management scheme for the NAREGI PMA
NAREGI operates the NAREGI CA, which was approved as a Production CA by the Asia Pacific Grid Policy Management Authority.

43 Grid Workflow: a computational procedure can be defined as a Grid Workflow. Example: GAMESS (Fragment Molecular Orbital method) - initialization, monomer calculations, dimer calculations, energy gradient calculations, total energy calculation, structure update, and convergence check, with visualization information produced at each stage.

44 Nano-science and Technology Applications Targeted
Participating organizations: Institute for Molecular Science, Institute for Solid State Physics, AIST, Tohoku University, Kyoto University, industry (materials, nano-scale devices), and the Consortium for Promotion of Grid Applications in Industry.
Research topics and groups: functional nano-molecules (CNT, fullerene, etc.), nano-molecule assembly (bio-molecules, etc.), magnetic properties, electronic structure, molecular system design, nano-simulation software integration system, etc.

45 Examples of Nano-Applications Research (1)
- Functional nano-molecules: orbiton (orbital wave) in manganese oxide, ferromagnetic half-metals; application to half-metal magnetic tunneling memory devices
- Nano-molecular assembly / nano-electronic systems: protein folding, molecular isomerization by low-energy integration

46 Examples of Nano-Applications Research (2)
- Nano-magnetism: magnetic nano-dots; controlling the arrangement of nano-dots by self-organization
- Nano-devices and quantum transport: quantum dots (metal, organic molecules) and quantum wires on gated Si/Au structures
- Nano-system design

47 NII Center for Grid R&D (Jimbo-cho, Tokyo): Mitsui Office Bldg., 14th floor, 700 m2 of office space (100 m2 machine room); near the Imperial Palace, Tokyo Station and Akihabara.