Catalin Condurache, STFC RAL Tier-1 – GridPP OPS meeting, 10 March 2015

Short history
- September 2012 – non-LHC Stratum-0 at RAL; gridpp.ac.uk CVMFS domain, initially for UK VOs, then extended
- August 2013 – CVMFS Task Force kick-off meeting, to establish a CVMFS infrastructure that allows EGI VOs to use it as a standard method of distributing their software at grid sites

Recent developments
- Requests to create, host and publish repositories for various EGI VOs (not only UK ones) made the gridpp.ac.uk domain less likely to continue
- Following the CVMFS WG meeting (March 2014) it was agreed that egi.eu would be the new CVMFS domain for new requests
- Work started to migrate gridpp.ac.uk -> egi.eu; existing gridpp.ac.uk repositories duplicated as egi.eu

Current status
- 15 'established' repositories (only 5 at kick-off) hosted and published at RAL – ~500 GB – egi.eu
- 6 'emerging' repos, incl. UK regional VOs (gridpp.ac.uk)
- VOs supported (egi.eu and gridpp.ac.uk): biomed, cernatschool.org, glast.org, hone, hyperk.org, km3net.org, mice, na62.vo.gridpp.ac.uk, pheno, phys.vo.ibergrid.eu, snoplus.snolab.ca, t2k.org, enmr.eu
- VOs and research groups supported (egi.eu only): auger, comet.j-parc.jp, ligo, supernemo
- VOs supported (gridpp.ac.uk only): vo.londongrid.ac.uk, vo.scotgrid.ac.uk, vo.southgrid.ac.uk, vo.northgrid.ac.uk

RAL infrastructure
- Stratum-0 – VM, 20 GB RAM, 1.2 TB HDD
- CVMFS uploader – VM, 8 GB RAM, 1 TB HDD
- Stratum-1 ('cvmfs-egi') – HA 2-node cluster, bare-metal boxes, 32 GB RAM, 12 TB; replicates egi.eu, opensciencegrid.org, desy.de, nikhef.nl; plan to integrate with the WLCG Stratum-1 (cernvmfs)
- Squid machines are shared between Frontier and LHC/non-LHC CVMFS access

Extended EGI infrastructure
- Stratum-0 at DESY and NIKHEF (not under the egi.eu domain)
- 31 repos replicated at RAL (non-LHC, non-OSG)
- Other Stratum-1 replicas for egi.eu at NIKHEF, ASGC, TRIUMF
- Big help from cvmfs-keys v1.5 – egi.eu configured by default (see the client-setup sketch below)
- More help from newer cvmfs client releases – no longer a CERN-centric configuration; new puppet module available
- gridpp.ac.uk to be phased out outside the UK soon; will remain in use for geographically UK VOs only
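
With the egi.eu keys and domain configuration shipped by cvmfs-keys v1.5, enabling an egi.eu repository on a worker node reduces to a few lines. A minimal sketch, assuming a generic site: the repository chosen and the squid hostname are placeholders, not RAL's actual values.

    # Minimal client setup for an egi.eu repository (hypothetical site values)
    cat >> /etc/cvmfs/default.local <<'EOF'
    CVMFS_REPOSITORIES=mice.egi.eu
    CVMFS_HTTP_PROXY="http://squid.example.ac.uk:3128"
    EOF
    cvmfs_config setup               # (re)generate the autofs/fuse configuration
    cvmfs_config probe mice.egi.eu   # check that the repository mounts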

EGI + OSG CVMFS infrastructure
[Diagram: Stratum-0 servers at RAL (egi.eu), DESY (desy.de), NIKHEF (nikhef.nl) and OSG (opensciencegrid.org), replicated to Stratum-1 servers at RAL, DESY, NIKHEF, CERN, ASGC and TRIUMF, each Stratum-1 fronted by a proxy hierarchy]

Software installation mechanism at RAL
[Diagram: the VO Software Grid Manager authenticates with DN and VOMS Role credentials over GSIssh/scp to a GSI interface on the CVMFS uploader (per-VO areas /home/augersgm, /home/biomedsgm, ..., /home/t2ksgm); content then flows through the Stratum-0 into /cvmfs/auger.egi.eu, /cvmfs/biomed.egi.eu, ..., /cvmfs/t2k.egi.eu] An example upload session is sketched below.
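
To illustrate the mechanism above, a VO software manager session might look roughly like this. The uploader hostname, tarball name and VOMS role are invented for the example; only the per-VO /home/t2ksgm area and the GSIssh/scp transport come from the slide.

    # Hypothetical VO software manager session (hostname, role and file names are examples)
    voms-proxy-init --voms t2k.org:/t2k.org/Role=lcgadmin   # obtain the VOMS role credential
    gsiscp new-release.tar.gz cvmfs-uploader.example.ac.uk:/home/t2ksgm/
    gsissh cvmfs-uploader.example.ac.uk \
        'mkdir -p /home/t2ksgm/sw && tar -C /home/t2ksgm/sw -xzf /home/t2ksgm/new-release.tar.gz'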

HOWTO: new repo at RAL Stratum-0
- Request at …
- For registered VOs: access on the CVMFS uploader based on DNs and/or a VOMS role
- For research groups: membership of a UK regional VO is recommended if grid access is required; the repository is created within the gridpp.ac.uk space and can be moved later into egi.eu (once the VO is registered)

Proposal for rsync of repos at RAL Stratum-0
- Currently the CVMFS uploader holds the master copies of all repositories – used space slowly increasing
- Could VOs keep their own master copies locally? The CVMFS uploader (or Stratum-0) would then just check and rsync when necessary (see the sketch below)
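
A minimal sketch of the proposed pull model, assuming the VO exposes its master copy over ssh; every hostname, account, path and repository name here is a placeholder. The cvmfs_server transaction/publish pair is the standard 2.1.x publication cycle.

    # Pull the VO's master copy, then publish it (all names are placeholders)
    rsync -aHn --delete vo-admin@vo-master.example.org:/srv/t2k-software/ /home/t2ksgm/sw/   # dry run: anything changed?
    rsync -aH  --delete vo-admin@vo-master.example.org:/srv/t2k-software/ /home/t2ksgm/sw/   # real sync
    cvmfs_server transaction t2k.egi.eu                     # open the repository for writing
    rsync -aH --delete /home/t2ksgm/sw/ /cvmfs/t2k.egi.eu/  # stage the new content
    cvmfs_server publish t2k.egi.eu                         # sign and publish the new revision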

Catalin's requests to all CVMFS sites
- Please check that you are running an up-to-date cvmfs client
- Please, please have cvmfs-keys v1.5-1 installed
- Please update to the latest cvmfs-puppet module, v0.3.3
- Be ready to update to cvmfs v2.1.20
Quick ways to check are sketched below.
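
Commands a site admin might use to verify these requests on an RPM-based worker node; the puppet check assumes the module was installed with the puppet module tool, which may not match every site's setup.

    # Verify installed versions and configuration sanity
    rpm -q cvmfs cvmfs-keys          # client and key package versions
    cvmfs_config chksetup            # sanity-check the local cvmfs configuration
    puppet module list | grep cvmfs  # puppet module version, if managed this way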

CernVM Users Workshop, 5-6 March 2015, CERN

Status and roadmap of CernVM
- CernVM virtual appliance – complete and portable environment for developing and running HEP data-processing tasks
- Use cases:
  - IaaS clouds: AWS community image, CERN OpenStack
  - Development environment: Xfce graphical UI
  - Volunteer computing: Test4Theory
  - Long-term analysis preservation: ALEPH software
  - Outreach and education: CERN OpenData portal
- Future plans:
  - Lightweight virtualization: containers in CernVM (Docker and lxc)

Status and roadmap of CernVM (cont.)
- Future plans:
  - Lightweight virtualization: containers in CernVM (Docker and lxc)
  - Security updates
  - Contextualization
  - SL7 support

Status and roadmap of CernVM-FS
- CernVM-FS – critical WLCG service, with an increasing number of non-LHC repos
- Repository server migration to 2.1.x
- Consolidation of the CernVM-FS configuration:
  - Disentangle CernVM-FS from CERN-specific configuration
  - Simplify CernVM-FS client configuration
  - Allow for 3rd-party configuration packages
  - Facilitate support for non-HEP VOs

Status and roadmap of CernVM-FS (cont.)
- New configuration methods in CernVM-FS:
  - Introduction of cvmfs-config packages
  - Ability to use configuration repositories
  - Automatic location-aware ordering of Stratum-1 servers
- Push replication:
  - Stratum-0 will actively announce updates to Stratum-1s
  - Will significantly lower the update-dissemination latency to CVMFS clients

Feedback from the LHC experiments and the user community
- OSG and EGI – similar interface for users maintaining the repositories
- Asia – requests for ACLs
- Notes from the GridPP user engagement programme (TomW)
- CVMFS as a high-speed filesystem for auxiliary data
- Volunteer computing projects at CERN

Technological trends
- Docker containers in distributed applications – Sebastien Goasguen (Citrix)
- Opportunistic computing for CMS at large scale – Douglas Thain (Univ. of Notre Dame)
- Hybrid cloud environments and networking on AWS – Giulio Soro (AWS)
- Big data in the Cloud – processing and performance – Anthony Voellm (Google)