CERN-IT Update
Ian Bird, on behalf of CERN IT
Multi-core and Virtualisation Workshop, 21-22 June 2010

Multi-Core Update

Multi-core jobs at CERN

Multi-core jobs can be run on the CERN batch service (a submission sketch follows below):
- Dedicated queue "sftcms"
- Dedicated resources: 2 nodes with 8 cores, one job slot each
- Access restricted to pre-defined users; currently 12 registered users are defined to use the queue
- The service has been in production since February 2010

Usage statistics:
- Total number of jobs ever run in this queue: 92 (last access: 19 May 2010)
- Number of active users ever: 3, with ~80% of use by one user: the Gaudi framework for LHCb simulation and reconstruction jobs. This gives the ability to submit one job rather than several (as needed in the past); all parallelism happens within the 8 cores of a single worker node.
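A minimal submission sketch, assuming LSF as the batch system behind the CERN batch service; the wrapper script name is hypothetical and the real payload depends on the experiment framework:

    # Submit one multi-core job to the dedicated queue; each of the two 8-core
    # nodes exposes a single job slot, so one submission claims a whole node and
    # the application parallelises across its 8 cores.
    bsub -q sftcms -o gaudi_%J.out ./run_gaudi_multicore.sh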

Multi-core jobs Grid-wide

EGEE-III MPI WG recommendations:
- Current draft available; final version to appear soon

Next steps (an illustrative JDL example follows below):
1. Validate the new attributes in the CREAM-CE:
   - Using a temporary JDL attribute (CERequirements), e.g. CERequirements = "WholeNodes = \"True\"";
   - Using direct job submission to CREAM
   - Validation starting with some user communities
2. Modify CREAM and BLAH so that the new attributes can be used as first-level JDL attributes, e.g. WholeNodes = True; coming with CREAM 1.7
3. Support for the new attributes in the WMS, coming with WMS 3.3, Q3 2010
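For illustration only, here is a sketch of a whole-node request submitted directly to a CREAM CE. The CE endpoint, queue and file names are made up, and the exact attributes accepted depend on the CREAM/BLAH version deployed:

    # Whole-node request using the temporary CERequirements JDL attribute
    # (the first-level form "WholeNodes = True;" arrives with CREAM 1.7 / WMS 3.3).
    cat > wholenode.jdl <<'EOF'
    [
      Executable     = "run_multicore.sh";
      InputSandbox   = { "run_multicore.sh" };
      StdOutput      = "stdout.log";
      StdError       = "stderr.log";
      OutputSandbox  = { "stdout.log", "stderr.log" };
      CERequirements = "WholeNodes = \"True\"";
    ]
    EOF
    # Direct submission to a CREAM CE (-a requests automatic proxy delegation)
    glite-ce-job-submit -a -r cream-ce.example.org:8443/cream-lsf-queue wholenode.jdl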

Virtualisation: Directions in CERN-IT

Virtualisation Activities

Activities in many areas, not all directly relevant for LHC computing. Three main areas:
- Service consolidation: "VOBoxes", VMs on demand, LCG certification testbed
- "LX" services: worker nodes, pilots, etc. (the issue of "bare" WN tba)
- Cloud interfaces

Rationale:
- Better use of resources: optimise cost, power, efficiency
- Reduce dependencies, especially between OS and applications (e.g. the SL4 → SL5 migration) and between grid software components
- Long-term sustainability/maintainability: can we move to something that is more "industry-standard"?
- Don't forget the WLCG issue of how to expand to other sites, which may have many other constraints (e.g. they may require virtualised WNs); the trust issue must be addressed from the outset

Service consolidation

VO Boxes (in the general sense of all user-managed services):
- IT runs the OS and hypervisor; the user runs the service and application
- Clarifies the distinction in responsibilities
- Simplifies management for the VOC: no need to understand system configuration tools
- Allows optimisation between heavily used and lightly used services
- (Eventually) transparent migration between hardware, improving service availability

VMs "on demand" (like requesting a web server today):
- Requested through a web interface
- General service for relatively long-lived needs
- The user can request a VM from among a set of standard images
- Examples: ETICS multi-platform build and automated testing; the LCG certification test bed, which today uses a different technology but will migrate once live checkpointing of images is provided

CVI: CERN Virtualisation Infrastructure

Based on Microsoft's Virtual Machine Manager:
- Multiple interfaces available: a 'Self-Service' web interface, a SOAP interface, and the Virtual Machine Manager console for Windows clients
- Integrated with the LANdb network database

100 hosts running the Hyper-V hypervisor:
- 10 distinct host groups, with delegated administration privileges
- 'Quick' migration of VMs between hosts: ~1 minute, and sessions survive the migration

Images for all supported Windows and Linux versions, plus PXE boot images

CVI: some usage statistics

Today, CVI provides 340 VMs (70% Windows, 30% Linux) for different communities:
- Self-service portal: 230 VMs from 99 distinct users, a mixture of development and production machines
- Host groups for specific communities: Engineering Services (85 VMs), media streaming (12 VMs), 6 print servers, 8 Exchange servers, etc.

CVI: work in progress - I

Microsoft Integration Components for Linux guests:
- A.k.a. paravirtualized drivers
- Supported on RHEL5 (i386 and x86_64)
- Enhanced performance and functionality: boosts I/O performance for VMs on iSCSI storage, support for up to 4 virtual CPUs, better timekeeping, support for a synthetic mouse
- Required RPMs available in the "slc5-testing" repositories: the "{kmod-,}mshvic" RPMs, based on the v2.1 Release Candidate (an installation sketch follows below)
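A minimal sketch of how a guest might pick these up, assuming an SLC5 VM with the slc5-testing repository configured; package names are as quoted above, and exact names or the need for a reboot may differ:

    # Install the Hyper-V integration components (paravirtualized drivers)
    # from the testing repository, then reboot so the new kernel modules load.
    yum install --enablerepo=slc5-testing kmod-mshvic mshvic
    reboot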

CVI: work in progress - II

Consolidation of physics servers (IT/PES/PS):
- ~300 Quattor-managed Linux VMs: IT servers and VOBoxes
- 3 enclosures of 16 blades, each with iSCSI shared storage

Failover cluster setup:
- Allows transparent live migration between hypervisors
- Need to gain operational experience: start with some IT services first (e.g. CAProxy, …) and verify that procedures work as planned
- Target date for the first VOBoxes is ~July 2010, gaining experience with non-critical VOBox services first
- Next step: small virtual disk servers (~5 TB of storage), accessing iSCSI targets from the VMs (a sketch follows below)
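Purely as an illustration of the last point, a guest could attach such an iSCSI target with the standard open-iscsi tools; the portal address and target name below are placeholders:

    # Discover the targets exported by the (hypothetical) iSCSI portal...
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    # ...and log in to one of them; it then appears as a local block device in the VM
    iscsiadm -m node -T iqn.2010-06.ch.cern:diskserver01 -p 192.0.2.10 --login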

"LX" Services

LXCloud:
- See the presentation by Sebastien Goasguen and Ulrich Schwickerath
- Management of a virtual infrastructure with cloud interfaces
- Includes the capability to run CernVM images
- Scalability tests are ongoing
- Tests ongoing with both OpenNebula and Platform ISF as potential solutions

LXBatch:
- Standard OS worker node: as a VM, this addresses the dependency problem
- WN with a full experiment software stack: the user could choose from among a standard/certified set; these images could, e.g., be built using the experiment build servers
- As the previous case, but with the pilot framework embedded
- CernVM images

Eventually: LXBatch → LXCloud

Batch and Cloud computing

HEPiX working group on virtualisation:
- Establishing procedures for trusting VM images, which helps sites accept running VM images
- Details will be presented in a later talk

IP address space

Issue of IP address space:
- Sufficient addresses are available if they are allocated carefully, with a good understanding of the deployment plan of virtualised services

Longer term – connect to clouds

Appear as a cloud

Summary

Ongoing work in several areas:
- Benefits will be seen immediately in some (e.g. VOBoxes)
- Experience will be gained in a test environment in others (e.g. LXCloud)
- Should be able to satisfy a wide range of resource requests, no longer limited to the model of a single job per CPU

Broader scope of WLCG:
- Address issues of trust and VM management; integration with the AA framework, etc.
- Interoperability through cloud interfaces (in both directions) with other "grid" sites as well as public/commercial providers
- Can be implemented in parallel with existing grid interfaces