Predrag Buncic, CERN/PH-SFT The Future of CernVM

CernVM Project: Initial goals
R&D project in the PH Department (WP9), started in 2007 and planned for 4 years. It aims to provide a complete, portable and easy-to-configure user environment for developing and running LHC data analysis locally and on the Grid, independent of the physical software and hardware platform (Linux, Windows, MacOS):
- Code check-out, editing, compilation, small local tests, debugging, …
- Grid submission, data access, …
- Event displays, interactive data analysis, …
- Suspend, resume, …
The project decouples the application lifecycle from the evolution of the system infrastructure and reduces the effort to install, maintain and keep the experiment software up to date.
Web site:

Virtualization: the CernVM Way
1. Minimal Linux OS (SL5)
2. CernVM-FS – an HTTP network file system optimized for just-in-time delivery of experiment software
3. A flexible configuration and contextualization mechanism based on public cloud APIs

What’s next?

CernVM-FS is the success story…
- Scope extended beyond the CernVM environment to (all) grid nodes
- By now most of the service infrastructure has migrated to CERN IT and the repositories are replicated to 3 off-site locations
- Before the end of the year all services should move to CERN IT
[Diagram: installer boxes feed the Stratum 0 web server via a shared filesystem (locations: CERN IT, CERN PH-SFT, other); Stratum 1 replicas at CERN, RAL and BNL serve a site squid at Site X, which in turn serves worker nodes (WN 1, WN 2), with NFS export at the site]
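Since the file-system content is delivered over plain HTTP, the proxy hierarchy above can be exercised with ordinary tools. The following is a minimal sketch only: the Stratum 1 URL, the squid host and the object layout are assumptions for illustration, and a real CernVM-FS client additionally validates signed catalogues before trusting any object.

```python
# Minimal sketch of the HTTP transport used by CernVM-FS clients.
# URLs, proxy host and repository layout below are illustrative assumptions.
import hashlib
import urllib.request

STRATUM1 = "http://cvmfs-stratum-one.example.org/cvmfs/atlas.cern.ch"  # assumed URL
SITE_PROXY = {"http": "http://squid.example-site.org:3128"}            # assumed site squid

def fetch_object(object_hash):
    """Fetch one content-addressed object through the site squid and verify it."""
    # Objects are stored by content hash; the first two hex digits form a fan-out directory.
    url = "%s/data/%s/%s" % (STRATUM1, object_hash[:2], object_hash[2:])
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(SITE_PROXY))
    data = opener.open(url, timeout=30).read()
    # Re-hashing the payload catches corrupt or stale proxy caches.
    if hashlib.sha1(data).hexdigest() != object_hash:
        raise IOError("content hash mismatch for %s" % object_hash)
    return data
```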

Foreseen developments
Requests for new features in the pipeline:
- ATLAS: support for diskless servers, sharing the local cache between repositories, exporting CVMFS via NFS
- LHCb: faster turn-around for publishing updates (<1 hour)
- CMS: Mac OS X support
- Small projects: simplified tool set to create repositories
- Grid sites: monitoring, security, encrypted repositories (Nordic Tier 1), extended documentation (RAL), archival of old software releases (CERN)
Timescale for these developments: 6 months. It is natural for SFT to continue providing software maintenance in the immediate future (next 2 years); the main goal will be to ensure stability and performance.

Server-side improvements
[Diagram: on the server, the release manager's shadow directory and the read-only CernVM-FS repository volume are combined through a union filesystem overlay, replacing the Redir-FS kernel module, which has no future support; the client side is unchanged]
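To make the publishing idea concrete, the sketch below shows the general pattern of harvesting changes from a union-filesystem overlay: files written by the release manager accumulate in a writable layer, and only those files need to be hashed and copied into the read-only, content-addressed store. The paths and the helper itself are hypothetical; the real CernVM-FS server tools do considerably more (catalogues, compression, signing).

```python
# Illustrative sketch of publishing from a union-filesystem overlay.
# Directory names are hypothetical placeholders, not the real cvmfs_server layout.
import hashlib
import os
import shutil

UPPER_LAYER = "/var/spool/repo/upper"   # writable overlay holding captured changes (assumed)
REPO_DATA = "/srv/cvmfs/repo/data"      # content-addressed object store (assumed)

def publish_overlay():
    """Copy every file touched in the overlay into the content-addressed store."""
    for root, _dirs, files in os.walk(UPPER_LAYER):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            # Store by hash: two-character fan-out directory, remainder as file name.
            target_dir = os.path.join(REPO_DATA, digest[:2])
            os.makedirs(target_dir, exist_ok=True)
            shutil.copy2(path, os.path.join(target_dir, digest[2:]))
            print("published", os.path.relpath(path, UPPER_LAYER), "->", digest)

if __name__ == "__main__":
    publish_overlay()
```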

CernVM CoPilot
An extensible toolkit for building scalable cloud computing infrastructure:
- Message (XMPP/Jabber) driven system of agents and adapters
- Uses CernVM as the job execution environment
An example – LHC@home 2.0: a volunteer computing cloud based on BOINC, with ~1500 active clients at any time
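As an illustration of the message-driven agent pattern, the sketch below uses the SleekXMPP library to build a tiny agent that listens for job requests and acknowledges them. The library choice, the JID and the message payload are assumptions for the example; this is not CoPilot's actual protocol.

```python
# Minimal XMPP agent sketch (not the real CoPilot protocol).
# Requires the SleekXMPP library; account and password are placeholders.
import sleekxmpp

class JobAgent(sleekxmpp.ClientXMPP):
    """Listens for plain-text job requests and replies with an acknowledgement."""

    def __init__(self, jid, password):
        super(JobAgent, self).__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    def on_start(self, event):
        # Announce availability so adapters can route work to this agent.
        self.send_presence()
        self.get_roster()

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            # A real agent would fetch the job description, run it inside a
            # CernVM instance and report the result back over XMPP.
            msg.reply("ACK: queued '%s'" % msg["body"]).send()

if __name__ == "__main__":
    agent = JobAgent("agent@xmpp.example.org", "secret")   # hypothetical account
    if agent.connect():
        agent.process(block=True)
```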

Contextualization
Flexible contextualization is essential if we want to avoid a proliferation of VM images.
At present CernVM supports:
- A web/XML-RPC based appliance user interface (for CernVM Basic and Desktop)
- HEPiX-compliant CDROM contextualization (for CernVM Batch and Head Node)
- The EC2 API for deployment on public (EC2) and private (OpenNebula, OpenStack, Eucalyptus, …) clouds
Still work in progress:
- CDROM contextualization is not always obvious and easy to use
- The EC2 API is not completely implemented
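The EC2-style path can be sketched with the boto library: the contextualization payload is simply passed as EC2 user-data when the instance is started. The region, image ID and the user-data keys shown below are placeholders, not the exact CernVM contextualization schema.

```python
# Sketch of EC2-style contextualization: pass a context file as user-data.
# Image ID, region and the user-data format are illustrative assumptions.
import boto.ec2

USER_DATA = """[cernvm]
# hypothetical contextualization directives
organisation = MyExperiment
repositories = myexp.cern.ch
"""

conn = boto.ec2.connect_to_region("us-east-1")   # or a private-cloud EC2 endpoint
reservation = conn.run_instances(
    "ami-00000000",                               # placeholder CernVM image ID
    instance_type="m1.small",
    user_data=USER_DATA,
)
print("started instance", reservation.instances[0].id)
```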

Minimal OS
Based on the SL5 distribution, with the binary RPM packages repackaged into Conary packages.
- New repository label: entirely hosted at CERN, faster updates, kept in sync with the upstream SL5 repository
- The images for all hypervisors are now built using our own tool (iBuilder), fully replacing the commercial product we used so far for that purpose (rBuilder)
For an SL6-compatible CernVM the intention is to use Conary "capsules":
- They allow full encapsulation of a foreign package manager
- It will be possible to install additional packages side by side using the native rpm tool

Long term data preservation: NA49/NA61
Goal: develop a prototype of a virtual cluster environment suitable for running legacy software, in order to support the long term data preservation use case.
- Common services hosted by the front-end node: batch master, NAT gateway, storage and HTTP proxy, monitoring
- Each physical node contributes to the common storage pool, runs a hypervisor and a batch worker, and exports its local storage to the common pool
- Virtual machines are managed by cloud middleware (OpenNebula), require only limited outgoing network connectivity, access data files via a POSIX (file system) layer, and are built from strongly versioned system components
- An end-user API to start VMs and submit jobs
[Diagram: front-end node with HTTP proxy, storage proxy, NAT gateway and batch master; 1..n physical nodes, each running a hypervisor with CernVM batch workers and a storage server; mass storage (MSS); software, TCP/IP and API layers]
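Since the virtual machines are managed by cloud middleware such as OpenNebula, the end-user API can be a thin wrapper around its XML-RPC interface. The sketch below assumes an OpenNebula front end on the head node; endpoint, credentials and template values are placeholders, and the exact one.vm.allocate signature varies between OpenNebula releases.

```python
# Sketch of starting a CernVM batch worker through OpenNebula's XML-RPC API.
# Endpoint, credentials and template values are placeholders.
import xmlrpc.client

ENDPOINT = "http://head-node.example.org:2633/RPC2"   # assumed front-end address
SESSION = "oneadmin:secret"                           # "user:password" session string

TEMPLATE = """
NAME   = "na61-worker"
CPU    = 1
MEMORY = 2048
DISK   = [ IMAGE = "cernvm-batch-node" ]
NIC    = [ NETWORK = "cluster-private" ]
"""

def start_worker():
    server = xmlrpc.client.ServerProxy(ENDPOINT)
    # The response is an array: [success flag, VM id or error message, ...].
    response = server.one.vm.allocate(SESSION, TEMPLATE)
    ok, result = response[0], response[1]
    if not ok:
        raise RuntimeError("OpenNebula error: %s" % result)
    return result

if __name__ == "__main__":
    print("started VM", start_worker())
```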

Roadmap
- (in testing): new repository label, kept in sync with the SL5 repository
- 2.6.0 (end of 2011): final SLC5 release – updated desktop, latest Xfce; EOS/xrootd for data access
- 3.0.0 (mid 2012): CernVM based on SL6, using encapsulated RPM packages

Putting it all together… the Virtual Machine Lifecycle (INTRODUCTION)
[Diagram: lifecycle stages – Define/Update Platform, Build Additional Software, Test Builds, Build Virtual Machine Images, Deploy/Manage, Publish]

ARCHIPEL Features
- Virtual machine and hypervisor manager written in Python and Objective-J
- Uses a distributed agent/client architecture
- Provides a powerful browser-based front-end: requires no server-side scripting; modular design – can be extended
- Uses open-source components
- Based on the XMPP (Jabber) protocol: widely used and tested (Google Chat, iChat, Facebook Chat); can be clustered

Archipel Architecture
[Diagram: hypervisors A and B, each hosting virtual machines with their own agents, connect over XMPP to a central XMPP server (ejabberd); clients include the Archipel GUI (connected via BOSH) and a PicoClient GUI; an image builder and an image tester agent work against an image repository]
PROGRESS

iBuilder Module PROGRESS

CernVM PicoClient (iPhone) PROGRESS

… and iPad

Conclusions
The CernVM 4-year R&D is coming to an end…
- But the project is still very much alive and is going to continue in some form
- Hundreds of physicists use CernVM on their laptops and desktops
- CernVM is being used for cloud/performance tests by ATLAS, CMS and LHCb
- Development of tools to facilitate full virtual machine lifecycle management is in progress
CernVM spinoffs:
- CernVM-FS has definitely found its users, and its support is assured
- CernVM CoPilot is used by LHC@home 2.0
Many thanks to our users (the majority of them are from ATLAS)