Virtualization and Clouds: ATLAS position

Virtualization and Clouds: ATLAS position
Simone Campana (CERN-IT/ES), on behalf of ATLAS Computing

Clouds and Virtualization

ATLAS R&D project on Cloud and Virtualization:
- Evaluate the available cloud technologies in relation to the use cases presented by ATLAS data management, processing and analysis.
- Design a model for transparently integrating cloud computing resources with the ADC software and service stack.
- Implement the ATLAS cloud computing model in DDM, PanDA and related tools and services.

Dedicated activities on CVMFS and multicores (see later).

R&D coordinators: daniel.colin.vanderster@cern.ch, fernando.harald.barreiro.megino@cern.ch, kaushik@uta.edu, rodney.walker@physik.uni-muenchen.de

Clouds R&D use cases

ATLAS Distributed Computing use cases to be explored:
- Monte Carlo simulation on the cloud, with stage-out to traditional grid storage or long-term storage on the cloud.
- Data reprocessing in the cloud (with a strong caveat related to cost).
- Distributed analysis on the cloud, using data accessed remotely from the grid sites, or analysis of data located in the cloud.
- Resource capacity bursting, managed centrally (e.g. to handle urgent reprocessing tasks) or regionally (e.g. to handle urgent local analysis requests).

Clouds R&D plans and milestones

Initial steps:
- Get access to existing academic clouds.
  - Work ongoing at Magellan@Argonne and Magellan@LBNL.
  - Many other flavors at various locations are being considered: StratusLab@EU, OpenNebula@CERN, Eucalyptus@Edinburgh, Nimbus@CA, ...
- Some work is also going on with commercial clouds.
  - BNL is investigating how to expand capacity into Amazon to absorb peaks.
- A cloud-like environment is already in production for ATLAS at CNAF: WNoDeS integrated with PanDA for simulation jobs.
- Understand the protocols and working model, for both workload and data management.
- Define the scenario for adapting existing ATLAS services (PanDA, DDM, AGIS, ...) to clouds.

GOAL: prototype by end of 2011 for Simulation, Reprocessing and Analysis.

Kickstart workshop: https://indico.cern.ch/conferenceDisplay.py?confId=136751 (talks from ATLAS, CMS, BaBar, CERN IT; open attendance).

Clouds and Virtualization

The R&D project is also responsible within ATLAS for agreeing, together with WLCG, on a model for defining and using Virtual Machines at sites. Still to be defined:
- Who builds the VM and what goes inside it.
- How the VM is instantiated at the site, and by whom (see the sketch after this slide).
- Requirements on the VM (cores, memory, disk cache, ...).
- (Many other bits to be sorted out, once you think about it.)

It would be good to have one WLCG proposal and discuss it with the VOs.
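For illustration only, a minimal sketch of what "instantiating a VM at the site" could look like through a generic cloud library (Apache Libcloud), assuming an OpenStack-style endpoint. The credentials, endpoint, image name and flavour selection are hypothetical placeholders, not an agreed ATLAS/WLCG procedure.

    # Hypothetical illustration: instantiate an ATLAS worker-node VM via Apache Libcloud.
    # Credentials, endpoint, image name and flavour choice are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.OPENSTACK)   # the same code pattern works for EC2, Nimbus, ...
    conn = Driver('vo-service-account', 'secret',
                  ex_force_auth_version='2.0_password',
                  ex_force_auth_url='https://cloud.example.org:5000/v2.0/tokens',
                  ex_force_service_region='RegionOne')

    # Express the "requirements on the VM" (cores, memory, disk) by choosing a flavour,
    # and pick the VO-approved image by name.
    size = next(s for s in conn.list_sizes() if s.ram >= 2048)
    image = next(i for i in conn.list_images() if 'atlas-worker' in i.name)

    node = conn.create_node(name='atlas-wn-001', size=size, image=image)
    print(node.id, node.state)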

(Virtualization) and Multicores

Usage of multicores requires a dedicated discussion, and in ATLAS there is a dedicated Task Force. There is only a "soft" correlation with virtualization:
- Some tests on 8-core VMs.
- Pushed back until AthenaMP is in production "on the bare metal" (P. Calafiura).

Status:
- AthenaMP is ready, with caveats, to run in a multicore environment (see the sketch after this slide).
- Adaptations of the ATLAS framework (PanDA) are at a good stage.
- Successful tests in a dedicated queue at CERN_8CORE ("all node") and at other sites.
- ATLAS might be ready to utilize this in production very soon: https://indico.cern.ch/getFile.py/access?contribId=7&sessionId=4&resId=0&materialId=slides&confId=119169
- Based on the outcome, ATLAS might decide to push for the "all node" approach; too early to discuss this now.

ATLAS contacts: douglas@cern.ch, lblcalaf@gmail.com
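As an illustration of how a job could adapt to a multicore (or "all node") slot, here is a minimal sketch of a wrapper that passes the granted core count to AthenaMP through the ATHENA_PROC_NUMBER environment variable. The GRANTED_CORES variable and the job options file name are hypothetical placeholders for whatever the pilot or batch system actually provides.

    # Hypothetical wrapper: run AthenaMP with as many worker processes as the slot provides.
    import multiprocessing
    import os
    import subprocess

    # Prefer a core count advertised by the pilot/batch system (placeholder variable);
    # otherwise fall back to the cores visible on the (possibly virtual) machine.
    ncores = int(os.environ.get('GRANTED_CORES', multiprocessing.cpu_count()))

    env = dict(os.environ, ATHENA_PROC_NUMBER=str(ncores))
    subprocess.check_call(['athena.py', 'MyJobOptions.py'], env=env)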

CVMFS

ATLAS would like to use CVMFS for:
1. Distribution (on and off Grid) of the official ATLAS software (Athena).
2. Distribution of the computing client tools (DDM, AGIS, AMI, ...) and their dependencies, if needed.
3. Distribution of the software nightlies.
4. Distribution of the condition data flat files.

CVMFS status

- ATLAS is ready to use CVMFS for items 1 and 2:
  - The installation system has been adapted; only configuration changes are needed for the PanDA queues (see the sketch after this slide).
  - The repository is deployed and supported today by PH-SFT.
- ATLAS is ready to test CVMFS for item 3:
  - A repository is running on the ATLAS VOBOX at CERN.
- ATLAS can test access to condition data via CVMFS (item 4) "today":
  - The repository currently contains the relevant condition files.
  - Work is being done to automate the addition of new conditions.
  - This now needs to be evaluated against the "standard" mechanism ("HOTFILES" in the SE): performance, operational impact, impact on the infrastructure.
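As a rough illustration of the "just configuration changes" point, here is a minimal sketch of a job-side check that prefers the software area on CVMFS when the repository is available and falls back to the site-installed area otherwise. The fallback path is a placeholder, and this is not the actual PanDA queue configuration.

    # Hypothetical job-side check: use the ATLAS software area on CVMFS when available.
    import os

    CVMFS_SW_DIR = '/cvmfs/atlas.cern.ch/repo/sw'

    def atlas_sw_dir():
        """Return the ATLAS software area this job should use."""
        # On a correctly configured client, looking up a path below /cvmfs
        # triggers the autofs mount of the repository.
        if os.path.isdir(CVMFS_SW_DIR):
            return CVMFS_SW_DIR
        # Fall back to the locally installed software area (placeholder default).
        return os.environ.get('VO_ATLAS_SW_DIR', '/opt/atlas/software')

    os.environ['VO_ATLAS_SW_DIR'] = atlas_sw_dir()
    print('Using ATLAS software from', os.environ['VO_ATLAS_SW_DIR'])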

CVMFS deployment

- ATLAS has very positive experience with CVMFS, both in the testing phase and in production.
- ATLAS would like to ask for a broad deployment of CVMFS, starting now. Rollout plan?
- Ultimately, ATLAS understands that the CVMFS service will be run by CERN IT. Timescale?

ATLAS contacts:
- douglas.benjamin@cern.ch, desilva@triumf.ca (overall coordination)
- undrus@bnl.gov (nightlies)
- Alessandro.DeSalvo@roma1.infn.it (Grid SW installation)
- Mikhail.Borodin@cern.ch (condition data)