PRAGMA 19, Sep. 15, Resources Breakout: Migration from Globus-based Grid to Cloud
Mason Katz, Yoshio Tanaka

Let's review the past 8 years

Good things:
– It was exciting to build a real Grid infrastructure across the Pacific Rim.
– Experience in building the PRAGMA Grid gave us many insights.
– Experiments on the PRAGMA Grid gave us a lot of valuable input.

Bad things:
– Who used, and who is now using, the PRAGMA Grid?
– A Globus-based Grid is still troublesome to use: applications need to be checked on every site.
– Resources (e.g. network and computing resources) are not sufficient to motivate users to use the PRAGMA Grid.

Proposed direction: migration to Cloud

Build once, run everywhere!

What should a user do?
– Build and test applications on a local machine.
– Create a VM image that can run on any cloud resource.

What should a resource provider do?
– Build VM hosting servers.
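As a rough illustration of this user-side workflow, here is a minimal Python sketch that packages a locally built and tested disk into a portable qcow2 image and copies it to a VM hosting server. The file names, the hosting server address, and the choice of tools (qemu-img, scp) are assumptions for illustration, not part of the proposal.

#!/usr/bin/env python
# Hypothetical "build once, run everywhere" user step: convert a locally
# tested VM disk into a portable qcow2 image and push it to a hosting server.
# File names and the server address are illustrative assumptions only.
import hashlib
import subprocess

RAW_IMAGE = "afg-app.raw"               # disk built and tested on the user's machine
PORTABLE_IMAGE = "afg-app.qcow2"        # portable copy-on-write format
HOSTING_SERVER = "vmhost.example.org"   # hypothetical VM hosting server

def sha256(path):
    # Checksum the image so each site can verify the copy it receives.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Convert the raw disk into qcow2 so a KVM/Xen hosting server can run it.
subprocess.check_call(["qemu-img", "convert", "-O", "qcow2", RAW_IMAGE, PORTABLE_IMAGE])
print("image checksum:", sha256(PORTABLE_IMAGE))

# Copy the image to the hosting server; the provider registers it from there.
subprocess.check_call(["scp", PORTABLE_IMAGE, HOSTING_SERVER + ":/images/"])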

VM Replication Experiment (diagram)

The Avian Flu Grid (AFG) VM, originally on the SDSC VM hosting server, is replicated as copies to the AIST and NBCR VM hosting servers. Each VM hosting server runs Rocks 5.3 with the Xen roll; the AFG VM itself is a Rocks VM running Globus/SGE and Autodock. Replication updates the hostname and IP address, the compute nodes, the network configuration, the Globus configuration, and the SGE configuration of each copy.
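The slide above lists what replication has to rewrite inside the copied image. The following Python sketch shows one minimal form that re-configuration step could take, assuming the copy's root filesystem is mounted at a known path; all paths, host names, and addresses are hypothetical, and the actual Rocks-based tooling may work differently.

# Sketch of the per-site re-configuration applied to a replicated VM image.
# Assumes the copy's root filesystem is mounted at MOUNT_POINT; every path
# and value below is illustrative, not the actual replication tool.
import os

MOUNT_POINT = "/mnt/afg-vm-copy"          # mounted filesystem of the copied VM
NEW_HOSTNAME = "afg-vm.aist.example.jp"   # hypothetical values for the new site
NEW_IP = "192.0.2.10"

def rewrite(path, transform):
    # Read a config file inside the image, transform it, and write it back.
    full = os.path.join(MOUNT_POINT, path.lstrip("/"))
    with open(full) as f:
        text = f.read()
    with open(full, "w") as f:
        f.write(transform(text))

def replace_key(text, key, value):
    # Replace "KEY=..." lines, leaving the rest of the file untouched.
    lines = [key + "=" + value if line.startswith(key + "=") else line
             for line in text.splitlines()]
    return "\n".join(lines) + "\n"

# Hostname: Rocks/CentOS keeps it in /etc/sysconfig/network.
rewrite("/etc/sysconfig/network", lambda t: replace_key(t, "HOSTNAME", NEW_HOSTNAME))

# IP address of the public interface.
rewrite("/etc/sysconfig/network-scripts/ifcfg-eth0",
        lambda t: replace_key(t, "IPADDR", NEW_IP))

# Globus and SGE configuration also embed the host name; the same kind of
# rewrite would be applied to those files (locations vary by installation).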

GEO Grid System Zones (diagram)

The GEO Grid system is divided into zones, reached through a frontend service (portal, workflow) and network switches connected to the Internet: a Laboratory zone for software and application development; a Cloud Service zone with home directories, an archive, and applications and libraries; and a Data Service zone offering DEM data and services such as GRASS, GDAL, WMS, WCS, WFS, CS-W(?), and WPS, with links out to the Google map service.
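Since the Data Service zone exposes standard OGC interfaces such as WMS, a request to it would look like the sketch below. The endpoint URL and layer name are hypothetical; only the query parameters follow the standard WMS 1.1.1 GetMap interface.

# Illustrative GetMap request to a WMS endpoint such as the one in the
# Data Service zone. Endpoint and layer name are hypothetical assumptions;
# the parameters are the standard OGC WMS 1.1.1 ones.
from urllib.parse import urlencode

ENDPOINT = "https://geogrid.example.org/wms"   # hypothetical Data Service zone WMS
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "ASTER_DEM",                 # hypothetical DEM layer name
    "SRS": "EPSG:4326",
    "BBOX": "138.0,35.0,139.0,36.0",       # lon/lat box, illustrative
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}
print(ENDPOINT + "?" + urlencode(params))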

Agenda of breakouts

– Reach consensus on the migration.
– Identify possible problems in migrating to the Cloud.
  – Does anybody still need a Globus-based Grid, i.e. is Globus still mandatory?
– Identify the missing pieces: what do we need to make the PRAGMA Cloud a practical infrastructure?
  – How can we make it easy to create VM images? Start by creating baseline VM images.
  – Interaction with the data cloud
  – Meta-scheduling / resource brokering
  – Security
  – Interoperation between VM hosting servers
  – …
– Rough schedule and milestones

Detailed discussions will take place this afternoon.

Steps

1. Identify sites
   1. AIST, UCSD, NCHC, JILIN, Osaka, Indiana
2. Identify what baseline images should be published
   1. Publish a baseline Condor image (Phil, UCSD, by PRAGMA 20)
   2. Gfarm client in the baseline image
   3. Independent test is necessary (see the sketch after this list)
3. Automatic replication (solve re-configuration / re-forming problems) (Mason, by PRAGMA 20)
   1. Hopefully by
4. Consider how to share VM images
   1. Build a stable Gfarm filesystem (UCSD, AIST, Tsukuba)
   2. Build and use a Gfarm image on virtual clusters at SDSC and AIST (by PRAGMA 20)
      1. Physical clusters provided by SDSC/AIST; Tatebe will lead the deployment
5. Interoperability testing
   1. Future issues
6. Other research issues to be done (by each institution)
   1. Security, meta-scheduling, data, …
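One hypothetical form the independent test of a baseline image could take is sketched below: boot an instance of the image, then check over SSH that Condor and the Gfarm client respond. The instance address and the exact commands checked are assumptions, not an agreed procedure.

# Hypothetical independent test of a baseline VM image: given a booted
# instance of the image, verify over SSH that Condor and the Gfarm client
# are present and answer. The instance address is an illustrative assumption.
import subprocess

INSTANCE = "baseline-test.example.org"   # hypothetical booted instance of the image

CHECKS = [
    ("Condor installed", ["condor_version"]),
    ("Gfarm client installed", ["gfls", "/"]),   # lists the Gfarm root if configured
]

def run_remote(cmd):
    # Run a command on the test instance via SSH and return its exit status.
    return subprocess.call(["ssh", INSTANCE] + cmd)

failures = [name for name, cmd in CHECKS if run_remote(cmd) != 0]
if failures:
    raise SystemExit("baseline image checks failed: " + ", ".join(failures))
print("baseline image checks passed")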