Ian Gable University of Victoria 1 Deploying HEP Applications Using Xen and Globus Virtual Workspaces
A. Agarwal, A. Charbonneau, R. Desmarais, R. Enge, I. Gable, D. Grundy, A. Norton, D. Penfold-Brown, R. Seuster, R.J. Sobie, D. C. Vanderster
Institute of Particle Physics of Canada; National Research Council, Ottawa, Ontario, Canada; University of Victoria, Victoria, British Columbia, Canada

Ian Gable University of Victoria 2 Overview
Motivation
–Can’t access some resources
Virtual Machines on the Grid
Example Deployment
Results

Ian Gable University of Victoria 3 The Problem In Canada we have computing resources we can’t use. Why?

Ian Gable University of Victoria 4 Virtualization on the Grid
Virtualization is the solution: we can package an application complete with all of its dependencies and move it out to a remote resource.
(Diagram: a real machine hosting a virtual machine.)

Ian Gable University of Victoria 5 Virtualization for HEP Apps on the Grid
Find a virtual machine technology
Need a middleware
Movement of images
Security

Ian Gable University of Victoria 6 VM: Xen is Useful for HEP
Xen is a virtual machine technology that offers negligible performance penalties, unlike more familiar VM systems such as VMware.
Xen uses a technique called “paravirtualization” to allow most instructions to run at their native speed.
–The penalty is that you must run a modified OS kernel.
–Xen support is being merged into the mainline Linux kernel.
“Evaluation of Virtual Machines for HEP Grids”, Proceedings of CHEP 2006, Mumbai, India.
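For concreteness, a paravirtualized Xen guest of this era is described by a small dom0-side configuration file; the sketch below is a minimal example, with all names, kernel versions, and paths chosen for illustration:

```
# /etc/xen/atlas-wn01.cfg -- minimal paravirtualized guest (names/paths are examples)
kernel  = "/boot/vmlinuz-2.6.18-xen"          # Xen-modified guest kernel in dom0
ramdisk = "/boot/initrd-2.6.18-xen.img"
memory  = 1024                                 # MB assigned to the domU
name    = "atlas-wn01"
disk    = ['file:/var/lib/xen/images/sl45-atlas.img,xvda,w']  # file-backed image
vif     = ['bridge=xenbr0']                    # attach to the dom0 bridge
root    = "/dev/xvda ro"
```

The guest would then be started from dom0 with `xm create atlas-wn01.cfg`.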

Ian Gable University of Victoria 7 Middleware: Globus Virtual Workspaces
We first tried developing our own in-house solution:
–A set of simple Perl scripts to boot VMs on demand.
–Not well integrated with the middleware; non-standard interface.
–Had to be rewritten for every cluster.
Globus Virtual Workspaces:
–A Globus project from the Mathematics and Computer Science Division of Argonne National Laboratory.
–Uses the Globus Toolkit Version 4 to present a Web Services interface for the deployment and management of VMs on remote clusters.
–Runs like any other Globus 4 service.
–In the early stages of development; a technology preview release is available.

Ian Gable University of Victoria 8 Movement of Images Allow virtual machines to be deployed and managed on remote GT4 clusters.

Ian Gable University of Victoria 9 Security
Are you giving away root on your clusters?
–root on domU != root on dom0.
Sandboxing
–Globus Virtual Workspaces helps: VMs are booted on BEHALF of users.
–Different network sandboxing strategies are available.
–We experimented successfully with each worker node NATing its virtual worker nodes.
Authentication
–Can you verify the source of your image?
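The NAT sandbox mentioned above can be sketched with standard Linux tools in dom0; this is a minimal illustration, not the deployment's actual configuration, and the subnet and interface names (`eth0`, `xenbr0`, 192.168.122.0/24) are assumptions:

```shell
# Run as root in dom0. Guests sit on the xenbr0 bridge in a private subnet
# and reach the outside world only via the worker node's physical interface.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade outbound guest traffic behind the worker node's address.
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE

# Allow guest-initiated connections out, and only reply traffic back in.
iptables -A FORWARD -i xenbr0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o xenbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Because no inbound connections are forwarded, a compromised domU cannot accept traffic from outside the worker node, which is the point of the sandbox.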

Ian Gable University of Victoria 10 Image Signing: First Steps
We need to verify that the images come from people we trust.
–Signatures using grid certificates.
–For a VM we run a hash algorithm (SHA-1) on the image and sign the hash.
The group allowed to execute VMs doesn’t have to be the same as the group allowed to build them (VM signers vs. VM executors).
Quite simple:
$ openssl x509 -in ~/.globus/usercert.pem -pubkey -noout > pubkey.pem
$ openssl dgst -sha1 -sign ~/.globus/userkey.pem -out vm_image.sha1 vm_image.img
$ openssl dgst -sha1 -verify pubkey.pem -signature vm_image.sha1 vm_image.img
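The sign/verify round trip can be exercised end to end with a throwaway RSA key standing in for a real grid credential; all file names here are examples:

```shell
# Throwaway key pair standing in for a grid user key/certificate.
openssl genrsa -out userkey.pem 2048
openssl rsa -in userkey.pem -pubout -out pubkey.pem

# Stand-in for a VM image (1 MB of random data).
head -c 1048576 /dev/urandom > vm_image.img

# Signer: hash the image with SHA-1 and sign the digest.
openssl dgst -sha1 -sign userkey.pem -out vm_image.sha1 vm_image.img

# Executor: verify before booting; prints "Verified OK" on success.
openssl dgst -sha1 -verify pubkey.pem -signature vm_image.sha1 vm_image.img
```

A tampered image (or a signature from an untrusted key) makes the final command fail, which is exactly the check an executor site would run before booting a fetched image.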

Ian Gable University of Victoria 11 Experiences
Building Images
Test Deployment
Results
Future Work

Ian Gable University of Victoria 12 Test Deployments
Goal: deploy an example HEP application using Globus Virtual Workspaces.
Configuration: deployed Globus Virtual Workspaces on two separate clusters.
–Scientific Linux (SL) 5.0 i686 machines at the University of Victoria
–SuSE 10.0 i686 machines at the National Research Council in Ottawa
Application is the ATLAS Distribution Kit
–Selected because it was familiar to us.

Ian Gable University of Victoria 13 Where do we get the VMs?
To get the additional flexibility of deploying applications within virtual machines, we pay the penalty of having to build the VM first.
Building virtual machines can be a hurdle.
–If it isn’t easy, people won’t do it.
Several possible approaches:
–Give users the tools to easily build their own images.
–Provide users with pre-built images which they can customize.

Ian Gable University of Victoria 14 Building Virtual Machines
There are many new tools for building images.
SL 5.0 now includes the Red Hat tool ‘virt-manager’ for the creation of virtual machines.
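Alongside the virt-manager GUI, the same tooling ships a command-line front end, `virt-install`, which can build a paravirtualized guest from a distribution mirror. The sketch below shows one plausible invocation; the guest name, mirror URL, sizes, and paths are placeholders, and exact flags vary between virt-install versions:

```shell
# Hypothetical build of a paravirtualized SL guest on a Xen host (run as root).
virt-install --name sl45-atlas \
    --ram 1024 \
    --file /var/lib/xen/images/sl45-atlas.img --file-size 8 \
    --paravirt \
    --location http://mirror.example.org/scientific/45/i386/ \
    --nographics
```

This performs a network install into a fresh file-backed image, which can then be customized (e.g. with the ATLAS kit) and signed for distribution.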

Ian Gable University of Victoria 15 Other Sources of Images
Projects like the CERN OS Farm endeavour to create images on the fly at users’ request.
Experiments could release pre-certified VMs complete with the installed application.

Ian Gable University of Victoria 16 Test Deployment
(Diagram: a workspace client stages images from an image repository to GT4 cluster headnodes at the University of Victoria and the National Research Council, Ottawa; on each cluster's worker nodes, dom0 hosts the deployed domU VMs.)

Ian Gable University of Victoria 17 Results
Jet simulation and reconstruction were performed using the ATLAS kit shipped inside an SL 4.5 image to a remote SL 5.0 cluster.
The image was also booted on the SuSE cluster.
Results were verified using the ATLAS Run Time Test (RTT).
As Globus Virtual Workspaces is in the early stages of development, many stability problems were encountered, as expected.

Ian Gable University of Victoria 18 Conclusion
VMs could allow HEP access to resources it couldn’t have accessed before.
Globus Virtual Workspaces is in the early stages of providing a mechanism to deploy VMs using existing Grid technologies.
Security mechanisms for VMs need more research.
Future Work
–LRMS integration.
–Image signing.
–Local image caching.