Virtualization in PRAGMA and Software Collaborations Philip Papadopoulos (University of California San Diego, USA)

Remember the Grid Promise?
"The Grid is an emerging infrastructure that will fundamentally change the way we think about - and use - computing. The word Grid is used by analogy with the electric power grid, which provides pervasive access to electricity and has had a dramatic impact on human capabilities and society."
The Grid: Blueprint for a New Computing Infrastructure, Foster and Kesselman. From the preface of the first edition, Aug 1998.

Some Things that Happened on the Way to Cloud Computing
Web Version 1.0 (1995)
1 cluster on the Top 500 (June 1998)
Dot-com bust (2000)
Clusters > 50% of the Top 500 (June 2004)
Web Version 2.0 (2004)
Cloud computing (EC2 beta, 2006)
Clusters > 80% of the Top 500 (Nov. 2008)

What is Fundamentally Different about Cloud Computing vs. Grid Computing?
Cloud computing – you adapt the infrastructure to your application
– Should be less time consuming
Grid computing – you adapt your application to the infrastructure
– Generally more time consuming
Cloud computing has a financial model that seems to work; the grid never had a financial model
– The Grid "barter" economy was only valid for provider-to-provider trade; pure consumers had no bargaining power

Cloud Hype
– "Others do all the hard work for you"
– "You never have to manage hardware again"
– "It's always more efficient to outsource"
– "You can have a cluster in 8 clicks of the mouse"
– "It's infinitely scalable"
– …

Observations
"Cloud" is now far enough along that we:
– Invest time to understand how to best utilize it
– Fill in gaps in specific technology to make it easier
– Think about scale for parallel scientific applications
Virtual computing has gained enough acceptance that:
– It should be around for a while
– It can be thought of as closer to "electricity"
We are first focusing on IaaS (infrastructure) clouds like EC2, Eucalyptus, OpenNebula, …

One BIG Problem: too many choices

Reality of Collaboration: People and Science are Distributed
PRAGMA – Pacific Rim Application and Grid Middleware Assembly
– Scientists are from different countries
– Data is distributed
Use cyberinfrastructure to enable collaboration.
When scientists are using the same software on the same data:
– Infrastructure is no longer in the way
– It needs to be their software (not my software)

PRAGMA's Distributed Infrastructure: Grids/Clouds
26 institutions in 17 countries/regions, 23 compute sites, 10 VM sites:
UZH (Switzerland); NECTEC, KU (Thailand); UoHyd (India); MIMOS, USM (Malaysia); HKU (Hong Kong); ASGC, NCHC (Taiwan); HCMUT, HUT, IOIT-Hanoi, IOIT-HCM (Vietnam); AIST, OsakaU, UTsukuba (Japan); MU (Australia); KISTI, KMU (Korea); JLU, CNIC, LZU (China); SDSC, IndianaU (USA); UChile (Chile); CeNAT-ITCR (Costa Rica); BESTGrid (New Zealand); ASTI (Philippines); UValle (Colombia)

Can PRAGMA Do the Following?
Enable specialized applications to run easily on distributed resources
Investigate virtualization as a practical mechanism
– Multiple VM infrastructures (Xen, KVM, OpenNebula, Rocks, WebOS, EC2)
Use GeoGrid applications as a first driver of the process

Use GeoGrid Applications as a Driver
I am not part of GeoGrid, but PRAGMA members are!

Deploy Three Different Software Stacks on the PRAGMA Cloud
QuiQuake – simulator of ground-motion maps when an earthquake occurs
– Invoked when a big earthquake occurs
HotSpot – finds high-temperature areas from satellite data
– Runs on a daily basis (when ASTER data arrive from NASA)
WMS server – provides satellite images via the WMS protocol
– Runs on a daily basis, but the number of requests is not stable
Source: Dr. Yoshio Tanaka, AIST, Japan
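Because the WMS server speaks the standard OGC WMS protocol, any client can pull imagery with a plain HTTP GetMap request. A minimal sketch using curl; the host, layer name, and bounding box are placeholders rather than details from the slides:

    $ curl -o map.png "http://wms.example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=hotspot&STYLES=&SRS=EPSG:4326&BBOX=135.0,34.0,136.0,35.0&WIDTH=512&HEIGHT=512&FORMAT=image/png"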

What are the Essential Steps?
1. AIST/GeoGrid creates their VM image
2. Image is made available in "centralized" storage
3. PRAGMA sites copy GeoGrid images to local clouds
   1. Assign IP addresses
   2. What happens if the image is in KVM format and the site runs Xen? (see the sketch below)
4. Modified images are booted
5. GeoGrid infrastructure is now ready to use
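A minimal sketch of step 3 as a site operator might run it by hand; the repository URL and image names are hypothetical, and qemu-img is just one common way to convert between hypervisor disk formats (the slides do not name a specific tool):

    # fetch the published GeoGrid image from the shared repository (URL is a placeholder)
    $ wget http://repository.example.org/images/quiquake.qcow2.gz
    $ gunzip quiquake.qcow2.gz
    # if the image was built for KVM (qcow2) but the local site runs Xen on raw disks,
    # convert the disk format before booting
    $ qemu-img convert -O raw quiquake.qcow2 quiquake.img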

VM Deployment Phase I – Manual (PRAGMA, early 2011)
Deploy the GeoGrid + Bloss image by hand onto a VM hosting server (a frontend plus vm-container-0-0, vm-container-0-1, vm-container-0-2, …), starting from an image published on the VM development server's website:
# rocks add host vm container=…
# rocks set host interface subnet …
# rocks set host interface ip …
# rocks list host interface …
# rocks list host vm … showdisks=yes
# cd /state/partition1/xen/disks
# wget …
# gunzip geobloss.hda.gz
# lomount -diskimage geobloss.hda -partition1 /media
# vi /media/boot/grub/grub.conf
# vi /media/etc/sysconfig/network-scripts/ifc…
# vi /media/etc/sysconfig/network
# vi /media/etc/resolv.conf
# vi /etc/hosts
# vi /etc/auto.home
# vi /media/root/.ssh/authorized_keys
# umount /media
# rocks set host boot action=os …
# rocks start host vm geobloss…

Centralized VM Image Repository
VM image deposit and sharing: a Gfarm cloud (a Gfarm meta-server plus Gfarm file servers, accessed through Gfarm clients) holds the shared images (QuiQuake, Geogrid + Bloss, Nyouga, Fmotif) and the vmdb.txt catalog that describes them.
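Depositing and retrieving images can be done through a locally mounted view of the repository. A minimal sketch, assuming a configured Gfarm client with the filesystem mounted via gfarm2fs; the mount point and directory names are placeholders:

    # mount the shared Gfarm repository locally (assumes gfarm2fs is installed and configured)
    mkdir -p /gfarm && gfarm2fs /gfarm
    # deposit a newly authored image into the shared repository
    cp quiquake.img.gz /gfarm/AIST/
    # on a hosting site, pull a published image back onto local disk
    cp /gfarm/AIST/geobloss.hda.gz /state/partition1/xen/disks/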

VM Deployment Phase II – Automated (PRAGMA, late 2011)
A single command on the hosting site's frontend now pulls the requested image from the Gfarm cloud (through a Gfarm client) and deploys it onto the chosen VM container:
$ vm-deploy quiquake vm-container-0-2
The vmdb.txt catalog maps each image name to its hypervisor format and its location in the repository:
quiquake,xen-kvm,AIST/quiquake.img.gz,…
Fmotif,kvm,NCHC/fmotif.hda.gz,…
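A hypothetical sketch of what such a vm-deploy wrapper could look like; it assumes the Gfarm repository is mounted at /gfarm (for example via gfarm2fs) and simply reuses the Rocks commands from the manual phase. This is not the actual PRAGMA script, and any catalog fields beyond the two sample lines above are assumptions:

    #!/bin/sh
    # Usage: vm-deploy <image-name> <vm-container>
    NAME=$1
    CONTAINER=$2
    DISKDIR=/state/partition1/xen/disks

    # Look up the image entry "name,format,path,..." in the vmdb.txt catalog.
    ENTRY=$(grep -i "^$NAME," /gfarm/vmdb.txt) || exit 1
    IMGPATH=$(echo "$ENTRY" | cut -d, -f3)

    # Copy the compressed image out of the repository and unpack it locally.
    cp "/gfarm/$IMGPATH" "$DISKDIR/"
    gunzip -f "$DISKDIR/$(basename "$IMGPATH")"

    # Register the VM with Rocks on the chosen container and boot it
    # (the interface/IP settings are assigned as in the manual phase and elided here).
    rocks add host vm container="$CONTAINER"
    rocks start host vm "$NAME"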

Condor Pool + EC2
Web interface; 4 different private clusters plus 1 EC2 data center, all controlled from a Condor manager at AIST, Japan.
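A minimal sketch of how a worker VM, whether in a private cluster or on EC2, might be pointed at that central Condor manager; the hostname and allow list are placeholders, and a real deployment across firewalls/NAT would need additional settings not shown on the slide:

    # append to the worker VM's local HTCondor configuration, then restart the service
    cat >> /etc/condor/condor_config.local <<'EOF'
    CONDOR_HOST = condor-manager.example.aist.go.jp
    DAEMON_LIST = MASTER, STARTD
    ALLOW_WRITE = $(ALLOW_WRITE), condor-manager.example.aist.go.jp
    EOF
    service condor restart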

PRAGMA Compute Cloud
Cloud sites integrated in the GeoGrid execution pool: UoHyd (India); MIMOS (Malaysia); NCHC (Taiwan); AIST, OsakaU (Japan); SDSC, IndianaU (USA); CNIC, LZU, JLU (China); ASTI (Philippines)

Roles of Each Site (PRAGMA + GeoGrid)
AIST – application driver with a natural distributed computing/people setup
NCHC – authoring of VMs in a familiar web environment; significant diversity of VM infrastructure
UCSD – lower-level details of automating VM "fixup" and rebundling for EC2
We are all founding members of PRAGMA.

Rolling Forward
At each stage, we learn more. We can deploy scientific VMs across resources in the PRAGMA cloud, but:
– Networking is difficult
– Data is vitally important
This leads to the PRAGMA renewal proposal and enhanced infrastructure.

Proposal to NSF to Support US Researchers in PRAGMA
Themes: shared experimentation; driving development; persistent, transitory; infusing new ideas; building on our successes

Driven by "Scientific Expeditions"
Expedition: focus on putting distributed infrastructure builders and application scientists together
Our proposal described three specific scientific expedition areas for US participation:
– Biodiversity (U. Florida, Reed Beaman)
– Global Lake Ecology (U. Wisconsin, Paul Hanson)
– Computer-Aided Drug Discovery (UCSD, Arzberger +)
IMPORTANT: our proposal could describe only some of the drivers and infrastructure that PRAGMA works on together as a group.

"Infrastructure" Development and Support; significant expansion in # of US …
Data sharing, provenance, data valuation and evolution experiments
– Beth Plale, Indiana U
Overlay networks, experiments with IPv6
– Jose Fortes, Renato Figueiredo, U Florida
VM mechanics: multi-site, multi-environment VM control and monitoring
– Phil Papadopoulos, UCSD
Sensor activities: from expeditions to infrastructure
– Sameer Tilak, UCSD

Building on What We’ve been working on together: VMs + Overlay Networks + Data

Add Overlay Networking
In our proposal:
– Led by U Florida (Jose Fortes, Renato Figueiredo): ViNe and IPOP
– Extend to IPv6 overlays
Not in our proposal, but we are already supporting experiments:
– Open vSwitch work led by Osaka U and AIST (PRAGMA demos, March 2012)
Virtual network architecture based on deployment of user-level virtual routers (VRs): multiple mutually independent virtual networks can be overlaid on top of the Internet; VRs control virtual network traffic and transparently perform firewall traversal.
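For the Open vSwitch experiments, the basic building block is a per-site software bridge linked to peer sites by tunnels over the public Internet. A minimal sketch with hypothetical bridge and port names and a documentation-range peer address; the actual Osaka/AIST environment is more elaborate than this:

    # create an overlay bridge on a VM hosting node
    ovs-vsctl add-br pragma-br0
    # add a GRE tunnel to a peer PRAGMA site across the Internet (peer address is a placeholder)
    ovs-vsctl add-port pragma-br0 gre0 -- set interface gre0 type=gre options:remote_ip=203.0.113.10
    # attach a virtual machine's tap interface to the overlay bridge
    ovs-vsctl add-port pragma-br0 vnet0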

Refine Focus on Data Products and Sensing
Data integration and tracking how data evolves in PRAGMA
– Led by Beth Plale, Indiana University
– "develop analytics and provenance capture techniques that result in data valuation metrics that can be used to make decisions about which data objects should be preserved over the long term and which should not"
Sensor data infrastructure
– Led by Sameer Tilak, UCSD
– Utilize the proposed PRAGMA infrastructure as an ideal resource to evaluate and advance sensor network cyberinfrastructure
– Capitalizes on an established history of working across PRAGMA and GLEON (with NCHC, Thailand, and others)

Other Opportunities: US-China Specific

Workshop Series
Two workshops:
– Sep 2011 (Beijing)
– March 2012 (San Diego)
– Approximately 40 participants at each workshop
Explore how to catalyze collaborative software development between the US and China:
– Exascale software
– Trustworthy software
– Software for emerging hardware architectures

Start of a more formal approach to bilateral collaboration

Thank you!