Use of HLT farm and Clouds in ALICE

Use of HLT farm and Clouds in ALICE
pre-GDB Meeting, 15/01/2013
Predrag.Buncic@cern.ch

USE OF HLT FARMS AND CLOUDS IN ALICE: Background

- Strong request by the C-RRB: computing capacity equivalent to the US-ALICE share.
- Approach based on CernVM (no need to be smarter than the others).
- Requires centralized software distribution using CVMFS:
  - Universal namespace, easy configuration.
  - Deploy and configure once, run everywhere.
  - No need to ever remove software.
  - A long-term data preservation friendly approach.
- Extra benefit for ALICE users: CernVM on laptop/desktop.
- A quick way for a node to check that it sees the repository is sketched below.
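
As a minimal illustration of the "deploy once, run everywhere" point, a worker node can verify that the software repository is mounted and reachable before pulling jobs. This is a sketch only: it assumes the standard ALICE repository name alice.cern.ch and a stock CVMFS client installation (cvmfs_config).

```python
#!/usr/bin/env python3
"""Minimal sketch: verify that the ALICE CVMFS repository is reachable.
Assumes the standard repository name alice.cern.ch and that the
CVMFS client tool cvmfs_config is installed on the node."""
import os
import subprocess

REPO = "alice.cern.ch"  # standard ALICE repository name (assumption)

def cvmfs_available(repo: str) -> bool:
    # 'cvmfs_config probe' mounts the repository on demand and
    # reports OK/FAILED for the probed repository.
    result = subprocess.run(
        ["cvmfs_config", "probe", repo],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and os.path.isdir(f"/cvmfs/{repo}")

if __name__ == "__main__":
    print(f"/cvmfs/{REPO} reachable:", cvmfs_available(REPO))
```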

BIG PICTURE

[Architecture diagram: AliEn talks to a Cloud Gateway, which dispatches through Cloud Agents to the private CERN cloud, the HLT farm and public clouds; CernVM/CVMFS provides the software; QA and CAF clusters are created on demand.]

CernVM Cloud: Implementation

- A cloud aggregator: provides a simple interface to deploy any number of VM instances on arbitrary cloud infrastructures.
- Composed of three components:
  - CernVM Online portal
  - CernVM Cloud Agents
  - CernVM Cloud Gateway

CernVM Online: Cluster definition

CernVM Cloud Gateway

From CernVM Online to the CernVM Gateway:
- Create a context definition for each service.
- Create a cluster definition:
  - Required offerings (compute, disk, etc.)
  - Minimum number of instances
  - Dependencies on other services
- Instantiate a cluster based on its definition.
- Scale individual services (REST API).

The gateway provides a user-friendly interface for managing the deployed cloud infrastructures, with an adaptive GUI layout that works seamlessly on mobile devices. All the functionality is exposed via a REST API (a scripting sketch follows below).
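
Because everything the GUI does is also exposed over REST, a cluster could in principle be instantiated and scaled from a script. The sketch below is illustrative only: the gateway URL, endpoint paths, payload fields and authentication are assumptions, not the documented CernVM Cloud Gateway API.

```python
#!/usr/bin/env python3
"""Illustrative sketch of driving a cluster through a REST API.
The gateway URL, endpoint paths and payload fields are hypothetical;
consult the actual CernVM Cloud Gateway documentation."""
import requests

GATEWAY = "https://cvm-gateway.example.cern.ch/api"  # hypothetical URL
AUTH = ("user", "secret")                            # placeholder credentials

# A cluster definition: required offerings, minimum number of
# instances, and dependencies between services (mirroring the
# bullet list on the slide above).
cluster = {
    "name": "alien-qa",
    "services": [
        {"name": "head", "offering": {"compute": "large", "disk": "40GB"},
         "min_instances": 1},
        {"name": "batch", "offering": {"compute": "medium", "disk": "20GB"},
         "min_instances": 4, "depends_on": ["head"]},
    ],
}

# Instantiate a cluster based on its definition.
r = requests.post(f"{GATEWAY}/clusters", json=cluster, auth=AUTH)
r.raise_for_status()
cluster_id = r.json()["id"]

# Scale an individual service up to 8 instances.
r = requests.post(f"{GATEWAY}/clusters/{cluster_id}/services/batch/scale",
                  json={"instances": 8}, auth=AUTH)
r.raise_for_status()
```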

Overview

- Cloud agents expose the resources (instances) available in the cloud.
- Users request resources via the gateway interface.
- Site administrators only take care of the installation.

[Diagram: a user's browser or tools talk to the Cloud Gateway, which dispatches to Cloud Agents installed by the site administrator on machines in the public and private clouds.]

CernVM Cloud Agent

Implemented adapters for:
- CloudStack (native API)
- libvirt (Xen, KVM, VirtualBox, ...)
- EC2 tools (wrapper around the command-line tools)
- OCCI (API adapter in development)

The CloudStack adapter has been tested with our infrastructure; the libvirt adapter has been tested on the ALICE HLT (see the sketch below).
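
To give a flavour of what a libvirt adapter boils down to, here is a minimal sketch using the libvirt Python bindings. The domain XML is a bare-bones KVM example with a placeholder image path and name, not the agent's actual template.

```python
#!/usr/bin/env python3
"""Minimal sketch of a libvirt-based agent starting one VM.
Requires the libvirt Python bindings (libvirt-python); the domain
XML is a bare-bones KVM example with placeholder values."""
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>cernvm-batch-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cernvm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>  <!-- NATed libvirt network -->
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    # createXML defines and starts a transient domain in one call.
    dom = conn.createXML(DOMAIN_XML, 0)
    print("started:", dom.name(), "id:", dom.ID())
finally:
    conn.close()
```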

Testing deployment on ALICE HLT

Deployment:
- Batch nodes: default setup, behind NAT.
- Head nodes: bridged network configuration, with 4 preallocated MACs/IPs.

Why VMs? The native OS on the machines is incompatible.

[Network diagram: virtual batch nodes on 192.168.1.x/24 behind NAT; virtual head nodes bridged at 10.162.41.10/21; XMPP server at 137.138.198.215; GW and the host machines fepdev10/fepdev11 on the 10.162.40.x/21 network (10.162.40.27/21, 10.162.40.28/21). A sketch of the two interface configurations follows below.]
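
The NAT/bridge split above can be expressed directly in the libvirt domain XML by swapping the <interface> element of the previous sketch. The fragments below are illustrative: the bridge name and MAC address are placeholders, not the actual HLT values.

```python
# Sketch: the two interface flavours used above, as libvirt XML
# fragments kept as Python strings. Bridge name and MAC address
# are placeholders, not the real HLT values.

# Batch nodes: default NATed libvirt network (private 192.168.x.x range).
NAT_INTERFACE = """
<interface type='network'>
  <source network='default'/>
</interface>
"""

# Head nodes: bridged configuration with a preallocated MAC, so the
# VM appears on the site network with a known, reserved IP address.
BRIDGED_INTERFACE = """
<interface type='bridge'>
  <source bridge='br0'/>              <!-- placeholder bridge name -->
  <mac address='52:54:00:aa:bb:01'/>  <!-- one of the preallocated MACs -->
  <model type='virtio'/>
</interface>
"""
```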

Conclusions

- We managed to successfully start a cluster and control its instances.
- We have set up a CVMFS repository; AliEn still needs to be adapted to use CVMFS (simulation, reconstruction, calibration…).
- A similar solution can be applied to the badly needed on-demand QA cluster(s) for release testing, CAF on demand, and "T3 in the box".
- We are willing to work with other experiments and the CernVM team towards a common solution.