Ceph vs Local Storage for Virtual Machines
26th March 2015, HEPiX Spring 2015, Oxford
Alexander Dibbo
George Ryall, Ian Collier, Andrew Lahiff, Frazer Barnsley

Background
As presented earlier, we have developed a cloud based on OpenNebula and backed by Ceph storage. We have:
– 28 hypervisors (32 threads, 128GB RAM, 2TB disk, 10Gb networking)
– 30 storage nodes (8 threads, 8GB RAM, 8 x 4TB disks, 10Gb front-end and back-end networks)
– The OpenNebula head node is virtual
– The Ceph monitors are virtual

What Are We Testing?
– The performance of virtual machines on local storage (hypervisor local disk) vs Ceph RBD storage
– How quickly machines deploy and become usable in the different configurations (a timing sketch follows this slide)
– How quickly management tasks (e.g. live migration) can be performed
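The deploy and boot timings described above can be captured with a simple polling loop. The snippet below is a minimal sketch, assuming the OpenNebula `onevm` CLI is available and treating "usable" as "the VM answers on its SSH port"; the VM id, hostname, and the naive parsing of the `onevm show` output are illustrative assumptions, not the tooling actually used for these measurements.

```python
import socket
import subprocess
import time

def wait_for_state(vm_id, state, timeout=600):
    """Poll 'onevm show' until the requested state string appears in its output.

    The output parsing is deliberately naive and is an assumption."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(["onevm", "show", str(vm_id)], text=True)
        if state in out:
            return time.time()
        time.sleep(1)
    raise TimeoutError(f"VM {vm_id} did not reach state {state}")

def wait_for_ssh(host, port=22, timeout=600):
    """Treat the VM as usable once its SSH port accepts connections."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return time.time()
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"{host}:{port} never became reachable")

# The VM id and hostname below are placeholders.
start = time.time()                                  # VM submitted (pending)
running_at = wait_for_state(42, "RUNNING")           # pending -> running
usable_at = wait_for_ssh("vm42.example.org")         # running -> usable
print(f"pending->running: {running_at - start:.1f}s, "
      f"running->usable: {usable_at - running_at:.1f}s, "
      f"pending->usable: {usable_at - start:.1f}s")
```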

What Are We Trying to Find Out?
– The performance characteristics of virtual machines on each type of storage
– How agile we can be with machines on each type of storage

Test Setup
Virtual machine: our SL6 image, 1 CPU, 4GB of RAM, a 10GB OS disk and a 50GB sparse disk for data
Four different configurations (the resulting test matrix is sketched below):
– OS on Ceph, data on Ceph
– OS local, data local
– OS on Ceph, data local
– OS local, data on Ceph
3 VMs of each configuration spread across the cloud, for a total of 12 VMs
The cloud is very lightly used as it is still being commissioned
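Purely as an illustration of the test matrix, the 12 VMs are every combination of OS-disk and data-disk backend, three times over; the VM names below are assumptions, not the names used in the actual tests.

```python
from itertools import product

BACKENDS = ("ceph", "local")
VMS_PER_CONFIG = 3

# Every combination of OS-disk backend and data-disk backend (4 configurations).
configs = [{"os_disk": os_b, "data_disk": data_b}
           for os_b, data_b in product(BACKENDS, BACKENDS)]

# Three VMs per configuration -> 12 VMs in total.
test_vms = [{"name": f"bench-os-{c['os_disk']}-data-{c['data_disk']}-{i}", **c}
            for c in configs
            for i in range(VMS_PER_CONFIG)]

assert len(test_vms) == 12
for vm in test_vms:
    print(vm["name"])
```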

How Are We Testing?
Pending to running (time to deploy to the hypervisor)
Running to usable (how long to boot)
Pending to usable (the total of the above)
– This is what users care about
Live migration time
IOZone single-thread tests (Read, ReRead, Write, ReWrite), invoked roughly as sketched below:
– 6GB file on the OS disk
– 24GB file on the data disk
– 3 VMs of each configuration throughout our cloud, 20 instances of each test per VM
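A minimal sketch of how the single-thread IOZone runs could be driven from inside each VM. The `-i 0` (write/rewrite), `-i 1` (read/reread), `-s` (file size) and `-f` (test file) options are standard IOZone flags; the test-file paths and the 1MB record size are assumptions, since the slides only specify the file sizes.

```python
import subprocess

# File sizes from the slides: 6GB on the OS disk, 24GB on the data disk.
# Both paths are assumptions about where the disks are mounted.
TARGETS = {
    "/root/iozone.tmp":     "6g",   # OS disk
    "/mnt/data/iozone.tmp": "24g",  # data disk
}

for path, size in TARGETS.items():
    # -i 0: write/rewrite, -i 1: read/reread, -s: file size,
    # -r: record size (1MB here is an assumption), -f: test file location.
    cmd = ["iozone", "-i", "0", "-i", "1", "-s", size, "-r", "1m", "-f", path]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```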

How Are We Testing?
IOZone aggregate test: 12 threads, an equal split of mixed reads and writes (Read, ReRead, Write, ReWrite); a throughput-mode invocation is sketched below
– 0.5GB per thread on the OS disk (6GB total)
– 2GB per thread on the data disk (24GB total)
– 3 VMs of each configuration throughout our cloud, 20 instances of each test per VM (240 data points)
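A corresponding sketch for the aggregate test using IOZone's throughput mode, where `-t` sets the number of threads, `-s` the per-thread file size, and `-F` lists one test file per thread. As above, the paths and record size are assumptions; only the thread count and per-thread sizes come from the slides.

```python
import subprocess

def aggregate_test(directory, per_thread_size, threads=12):
    """Run an IOZone throughput-mode test with one file per thread."""
    files = [f"{directory}/iozone.thread{i}" for i in range(threads)]
    # -t: number of threads (throughput mode), -s: size per thread,
    # -i 0 / -i 1: write+rewrite and read+reread phases,
    # -r: record size (assumed), -F: one test file per thread.
    cmd = (["iozone", "-t", str(threads), "-s", per_thread_size,
            "-i", "0", "-i", "1", "-r", "1m", "-F"] + files)
    subprocess.run(cmd, check=True)

# 0.5GB per thread on the OS disk (6GB total) and 2GB per thread on the
# data disk (24GB total); both directories are assumptions.
aggregate_test("/root", "512m")
aggregate_test("/mnt/data", "2g")
```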

Results: Launch Tests

Results: Launch Tests (Log Scaled)

Results: IOZone Single Thread Tests, Read/ReRead

Results: IOZone Single Thread Tests, Read/ReRead (Log Scaled)

Results: IOZone Single Thread Tests, Write/ReWrite

Results: IOZone Single Thread Tests, Write/ReWrite (Log Scaled)

Results: IOZone Multi Thread Tests, Read/ReRead

Results: IOZone Multi Thread Tests, Read/ReRead (Log Scaled)

Results: IOZone Multi Thread Tests, Write/ReWrite

Results: IOZone Multi Thread Tests, Write/ReWrite (Log Scaled)

Conclusions
– Local disk wins for single-threaded read operations (such as booting the virtual machine)
– Ceph wins for single-threaded write operations (large sequential writes)
– Ceph wins for both reads and writes in multi-threaded operations

Why Is This?
A single local disk has a very limited maximum throughput.
Because RBD stripes data across the whole Ceph cluster, the bottleneck becomes the NIC on the hypervisor rather than any individual disk.
– In this case the NICs are 10Gb, so matching that throughput locally would require a large RAID set in each hypervisor (a rough back-of-the-envelope calculation follows below).
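A back-of-the-envelope comparison of the two limits, assuming roughly 150 MB/s sequential throughput for a single spinning SATA disk (that per-disk figure is an assumption, not from the slides):

```python
# 10Gb/s NIC vs a single local spinning disk.
nic_gbit_per_s = 10
nic_bytes_per_s = nic_gbit_per_s * 1e9 / 8   # ~1.25 GB/s across the NIC
disk_bytes_per_s = 150e6                     # ~150 MB/s per disk (assumed)

disks_needed = nic_bytes_per_s / disk_bytes_per_s
print(f"NIC:  {nic_bytes_per_s / 1e6:.0f} MB/s")
print(f"Disk: {disk_bytes_per_s / 1e6:.0f} MB/s")
print(f"Roughly {disks_needed:.0f} striped local disks to match the 10Gb NIC")
```

On those numbers a hypervisor would need on the order of eight local disks striped together to match what RBD can deliver over the 10Gb network.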

Further Work
– Test when the cloud is under more load
– Test using micro-kernel VMs such as µCernVM
– Test larger data sets

A Minor Issue
During the testing run we noticed that one of the storage nodes had dropped out of use. After some investigation we found this ->
The testing, and the cloud as a whole, didn't skip a beat.

Any Questions?