The PRAGMA Testbed: Building a Multi-Application International Grid
CCGrid 2006, 5/19/2006

San Diego Supercomputer Center / University of California, San Diego, USA: Cindy Zheng, Peter Arzberger, Mason J. Katz, Phil M. Papadopoulos
Monash University, Australia: David Abramson, Shahaan Ayyub, Colin Enticott, Slavisa Garic
National Institute of Advanced Industrial Science and Technology, Japan: Yoshio Tanaka, Yusuke Tanimura, Osamu Tatebe
Kasetsart University, Thailand: Putchong Uthayopas, Sugree Phatanapherom, Somsak Sriprayoonsakul
Nanyang Technological University, Singapore: Bu Sung Lee
Korea Institute of Science and Technology Information, Korea: Jae-Hyuck Kwak

Pacific Rim Application and Grid Middleware Assembly

PRAGMA and Testbed

PRAGMA
– Open international organization
– Grid applications, practical issues
– Builds international scientific collaborations

Resources working group
– Middleware interoperability
– Global grid usability and productivity

Routine-use experiments and testbed
– Grass-roots; PRAGMA membership not necessary; long-term work
– Multiple real science applications run on a routine basis: TDDFT, Savannah, QM-MD, iGAP, Gamess-APBS, Siesta, Amber, FMO, HPM (GEON, Sensor, …)
– Middleware: Ninf-G, Nimrod/G, Mpich-Gx, Gfarm, SCMSWeb, MOGAS
– Issues, solutions, collaborations, interoperation

Grid Interoperation Now (GIN)

PRAGMA, TeraGrid, EGEE, …
Applications/Middleware
1. TDDFT/Ninf-G

Lessons learned
– Software interoperability
– Authentication
– Community Software Area
– Cross-grid monitoring

PRAGMA Grid Testbed

AIST, Japan; CNIC, China; KISTI, Korea; ASCC, Taiwan; NCHC, Taiwan; UoHyd, India; MU, Australia; BII, Singapore; KU, Thailand; USM, Malaysia; NCSA, USA; SDSC, USA; CICESE, Mexico; UNAM, Mexico; UChile, Chile; TITECH, Japan; QUT, Australia; UZurich, Switzerland; JLU, China; NGO, Singapore; MIMOS, Malaysia; OSAKAU, Japan; IOIT-HCM, Vietnam

5 continents, 14 countries, 25 organizations, 28 clusters

Lessons Learned

Heterogeneity
– Funding, policies, environments
Motivation
– Learn, develop, test, interoperate
Communication
– E-mail, VTC, Skype, workshops; time zones, language
Create operation procedures
– Joining the testbed
– Running applications
– Resources, contacts, instructions, monitoring, etc.

Software Layers and Trust

Trust all sites' CAs
Experimental -> production
Grid Interoperation Now
APGrid PMA, IGTF (5 accredited)
PRAGMA CA
Community Software Area

Application Middleware

Ninf-G
– Supports the GridRPC model, which will be a GGF standard
– Integrated into NMI release 8 (first non-US software in NMI)
– A Ninf roll for Rocks 4.x is also available
– On the PRAGMA testbed, the TDDFT and QM/MD applications achieved long executions (runs of 1 week to ~50 days)

Nimrod
– Supports large-scale parameter sweeps on grid infrastructure
– Study the behaviour of some of the output variables against a range of different input scenarios
– Compute parameters that optimize model output
– Computations are uncoupled (file transfer)
– Allows robust analysis and more realistic simulations
– Very wide range of applications, from quantum chemistry to public health policy
– A climate experiment ran some 90 different scenarios of 6 weeks each

GridRPC: A Programming Model Based on RPC

[Diagram: client computer with client component and function handles; information manager; remote executables on servers]

The GridRPC API is a proposed recommendation at the GGF.
Three components
– Information Manager: manages and provides interface information
– Client Component: manages remote executables via function handles
– Remote Executables: dynamically generated on remote servers
Built on top of the Globus Toolkit (MDS, GRAM, GSI)
Simple and easy-to-use programming interface
– Hides the complicated mechanisms of the grid
– Provides RPC semantics
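To make the programming model concrete, below is a minimal client sketch in C against the GridRPC API as implemented by Ninf-G. The server name "server.example.org" and the remote entry point "sample/square" are placeholders, not anything from the talk; real entry points come from the IDL registered on the server side, and the configuration file is Ninf-G specific.

```c
#include <stdio.h>
#include "grpc.h"   /* GridRPC client API header shipped with Ninf-G */

int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    double x = 3.0, y = 0.0;

    /* Read the client configuration file (servers, schedulers, queues) */
    if (grpc_initialize(argv[1]) != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind a function handle to a remote executable on one server.
     * Server name and function name below are placeholders. */
    if (grpc_function_handle_init(&handle, "server.example.org",
                                  "sample/square") != GRPC_NO_ERROR) {
        fprintf(stderr, "function handle init failed\n");
        grpc_finalize();
        return 1;
    }

    /* Synchronous RPC: GRAM starts the remote executable, GSI handles
     * authentication, and arguments are marshalled by the library. */
    if (grpc_call(&handle, x, &y) == GRPC_NO_ERROR)
        printf("square(%g) = %g\n", x, y);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```

The RPC semantics are visible here: apart from the handle setup, the remote call looks like an ordinary function call, with the grid machinery hidden behind the API.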

Nimrod Development Cycle

[Cycle diagram]
– Prepare jobs using the portal
– Jobs scheduled, sent to available machines
– Executed dynamically
– Results displayed & interpreted

Fault-Tolerance Enhanced

Ninf-G monitors each RPC call
– Returns an error code for failures
  Explicit faults: server down, disconnection of network
  Implicit faults: jobs not activated, unknown faults
– Timeout: grpc_wait*()
– Retry/restart

Nimrod/G monitors remote services and restarts failed jobs
– Long jobs are split into many sequentially dependent jobs, which can be restarted using sequential parameters called seqameters

Improvement in the routine-basis experiment
– Developers test code on a heterogeneous global grid
– Results guide developers to improve fault detection and handling
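As an illustration of how a client can act on those error codes, the sketch below retries a failed call on the next server in a list. This is not the TDDFT or QM/MD code itself, just a minimal pattern; it assumes grpc_initialize() has already been called (as in the earlier sketch), and the server list and the "sample/square" entry point are again placeholders.

```c
#include <stdio.h>
#include "grpc.h"

/* Try the same remote call on each server in turn until one succeeds. */
static int call_with_retry(char *servers[], int nservers,
                           double in, double *out)
{
    int i;

    for (i = 0; i < nservers; i++) {
        grpc_function_handle_t h;
        grpc_sessionid_t sid;
        grpc_error_t err;

        /* Explicit faults (server down, network disconnection) surface
         * here or in grpc_wait() as codes other than GRPC_NO_ERROR. */
        err = grpc_function_handle_init(&h, servers[i], "sample/square");
        if (err != GRPC_NO_ERROR)
            continue;                       /* skip this server, try the next */

        err = grpc_call_async(&h, &sid, in, out);   /* non-blocking call */
        if (err == GRPC_NO_ERROR)
            err = grpc_wait(sid);           /* blocks; reports faults/time-outs */

        grpc_function_handle_destruct(&h);

        if (err == GRPC_NO_ERROR)
            return 0;                       /* success */
        fprintf(stderr, "retrying after failure on %s\n", servers[i]);
    }
    return -1;                              /* every server failed */
}
```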

Application Setup and Resource Management

Heterogeneous platforms
– Manual build and deployment of applications, manual resource management
– Labor intensive, time consuming, tedious

Middleware solutions
– For deployment: automatic distribution of executables using staging functions
– For resource management: the Ninf-G client configuration allows description of server attributes
  – Port number of the Globus gatekeeper
  – Local scheduler type
  – Queue name for submitting jobs
  – Protocol for data transfer
  – Library path for dynamic linking
– The Nimrod/G portal allows a user to generate a testbed and helps maintain information about resources, including the use of different certificates

Gfarm in PRAGMA Testbed

High-performance grid file system that federates file systems on multiple cluster nodes
– SDSC (US): 60 GB (10 I/O nodes, local disk)
– NCSA (US): 1444 GB (13 I/O nodes, NFS)
– AIST (Japan): 1512 GB (28 I/O nodes, local disk)
– KISTI (Korea): 570 GB (15 I/O nodes, local disk)
– SINICA (Taiwan): 189 GB (3 I/O nodes, local disk)
– NCHC (Taiwan): 11 GB (1 I/O node, local disk)
Total: 3786 GB, 1527 MB/sec (70 I/O nodes)

Application Benefit

No modification required
– Existing legacy applications can access files in the Gfarm file system without any modification
Easy application deployment
– Install an application in the Gfarm file system, run it everywhere
– Supports binary execution and shared-library loading
– Different kinds of binaries can be stored at the same pathname and are automatically selected depending on the client architecture
Fault tolerance
– Automatic selection of file replicas at access time tolerates disk and network failures
File sharing
– Community Software Area

Performance Enhancements

Performance for small files
– Improved metadata-cache management
– Added a metadata cache server
[Chart: directory listing of 16,393 files; original vs. improved metadata management vs. with metadata cache server]

SCMSWeb

Web-based monitoring system for clusters and grids
– System usage
– Performance metrics
Reliability
– Grid service monitoring
– Spot problems at a glance

PRAGMA-Driven Development

Heterogeneity
– Added platform support: Solaris (CICESE, Mexico), IA64 (CNIC, China)
Software deployment
– NPACI Rocks roll: supports Rocks – 4.1
– Native Linux RPMs for various Linux platforms
Enhancements
– Hierarchical monitoring on a large-scale grid
– Compressed data exchange between grid sites, for sites with slow networks
– Better and cleaner graphical user interfaces
Standardize & more collaboration
– GRMAP (Grid Resource Management & Account Project): collaboration between NTU and TNGC
– GIN (Grid Interoperation Now) monitoring: standardize data exchange between monitoring software

Multi-organisation Grid Accounting System

MOGAS Web Information

Information for grid resource managers/administrators:
– Resource usage based on organization
– Daily, weekly, monthly, yearly records
– Resource usage based on project/individual/organisation
– Individual log of jobs
– Metering and charging tool; can drive a pricing system, e.g. Price = f(hardware specifications, software license, usage measurement)

PRAGMA MOGAS Status (27/3/2006)

[Map of deployment sites with GT2/GT4 markers: AIST, Japan; CNIC, China; KISTI, Korea; ASCC, Taiwan; NCHC, Taiwan; UoHyd, India; MU, Australia; BII, Singapore; KU, Thailand; USM, Malaysia; NCSA, USA; SDSC, USA; CICESE, Mexico; UNAM, Mexico; UChile, Chile; TITECH, Japan; MIMOS; IOIT-HCM; NGO, Singapore; QUT]
(Map: Cindy Zheng, GGF13, 3/14/05; modified by A/Prof. Bu-Sung Lee)

Thank You

Pointers
– PRAGMA:
– PRAGMA Testbed:
– PRAGMA: Example of Grass-Roots Grid Promoting Collaborative e-Science Teams. CTWatch, Vol. 2, No. 1, Feb. 2006
– The PRAGMA Testbed – Building a Multi-Application International Grid, CCGrid 2006
– Deploying Scientific Applications to the PRAGMA Grid Testbed: Strategies and Lessons, CCGrid 2006
– MOGAS: Analysis of Job in a Multi-Organizational Grid Test-bed, CCGrid 2006

Q & A

– PRAGMA testbed: Cindy Zheng
– Middleware (Ninf-G): Yoshio Tanaka
– Grid file system (Gfarm): Osamu Tatebe
– Grid monitoring (SCMSWeb): Somsak Sriprayoonsakul
– Grid accounting (MOGAS): Francis Lee