Seamless and Secure Application Consolidation for Optimizing Instance Deployment in Clouds
Kenichi Kourai and Kouta Sannomiya, Kyushu Institute of Technology, Japan

I'm Kenichi Kourai from Kyushu Institute of Technology, Japan. I'm going to talk about seamless and secure application consolidation for optimizing instance deployment in clouds. This is joint work with my student, who has since graduated.

Cost Reduction in IaaS Clouds
- IaaS clouds provide users with instances (VMs)
- The cost depends on the type and number of instances
- Need to optimize instance deployment
  - The loads of instances continue to change
  - Used instances should always be sufficient but minimal

Infrastructure-as-a-Service clouds provide users with instances, which are usually virtual machines. Users can run their applications in their instances. In this example, two application servers and a database are running in three instances. Users can deploy instances of their favorite types, in whatever number they need. The cost depends on the type and number of instances. To reduce the cost, users need to keep instance deployment optimized at all times, because the loads of instances continue to change. So the instances in use should always be sufficient but minimal. In IaaS clouds, the optimization of instance deployment is performed according to the resource utilization of instances.

[Figure: an IaaS cloud running two app servers and a database in three instances]

Most Popular Optimization
- Scale-in/out adjusts the number of instances
  - Scale-out increases it against over-utilization
  - Scale-in decreases it to reduce costs
- Scale-in cannot further reduce the number when only one instance is deployed
  - At least one instance is required

The most popular optimization is scale-in/out, which adjusts the number of instances. When an application becomes over-utilized, the user can increase the number of instances by scale-out. In contrast, when the application becomes under-utilized, the user can decrease the number by scale-in to reduce costs. However, when only one instance is deployed for an under-utilized application, the user cannot further reduce the number of instances by scale-in. For example, the load of an intranet server may become almost zero during weekends, but at least one instance is still required if the server cannot be stopped.

[Figure: scale-out from one instance to two under high load, and scale-in back under low load]
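
To make the scale-in limit concrete, here is a minimal sketch of threshold-based scale-in/out in Python. The metric source, the cloud-API callables, and the thresholds are all hypothetical and only illustrate the decision logic; they are not part of FlexCapsule or any particular cloud.

```python
def scale(instances, get_cpu_util, launch_instance, terminate_instance,
          high=0.8, low=0.2, min_instances=1):
    """Adjust the number of instances to the current average load."""
    avg = sum(get_cpu_util(i) for i in instances) / len(instances)
    if avg > high:
        # Scale-out: add an instance against over-utilization.
        instances.append(launch_instance())
    elif avg < low and len(instances) > min_instances:
        # Scale-in: remove an instance to reduce costs. Note the floor of
        # one instance -- scale-in alone cannot go below it, which is the
        # limitation discussed on this slide.
        terminate_instance(instances.pop())
    return instances
```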

Optimization for One Instance
- Scale-up/down adjusts the amount of resources
  - Switch instance types offline in most existing clouds
- Downtime occurs during the switch
  - Users have to restart applications in new instances
- Cost reduction is limited
  - By the cost of the minimum instance type

The optimization for one instance is scale-up/down, which adjusts the amount of resources assigned to each instance. When an application becomes over-utilized, the user can increase the number of virtual CPUs by scale-up. In contrast, when the application becomes under-utilized, the user can decrease the amount of resources by scale-down to reduce costs. Most existing clouds achieve scale-up/down by switching instance types offline. When users switch instance types, they have to stop applications in the current instance, move their data to a new instance, and restart them in the new instance. Downtime occurs during this switch. In addition, cost reduction is limited by the cost of the minimum instance type.

[Figure: scale-up and scale-down of a single instance between low and high load]

Optimization by Consolidation
- Consolidate applications into one instance
  - De-consolidate them under over-utilization
- Causes downtime during (de-)consolidation
  - When applications are moved between instances
- Isolation among applications becomes weaker
  - Consolidation across users is impossible

For further optimization, users can consolidate multiple applications into one instance. This is called application consolidation. For example, consider a multi-tier application, which consists of an application server and a database. When both are under-utilized, the user can run them in one instance to reduce costs. When that instance becomes over-utilized, the user can de-consolidate these applications into multiple instances again. However, this optimization also causes downtime when applications are moved between instances for consolidation and de-consolidation. In addition, isolation among applications becomes weaker than when using one instance per application, because VMs no longer provide isolation between them. Therefore, application consolidation across multiple users is impossible.

[Figure: consolidating two applications into one instance under low load and de-consolidating them under high load]

FlexCapsule
- Enables seamless and secure application consolidation
- Runs each application in an app VM inside an instance
  - Using nested virtualization
- Migrates applications between instances with negligible downtime
- Guarantees security between applications by VM-level isolation

To enable seamless and secure application consolidation even across different users, we propose FlexCapsule. FlexCapsule runs each application in a lightweight VM called an app VM inside an existing instance. This is achieved using a technology called nested virtualization, which enables VMs to run inside a VM. Nested virtualization imposes extra overhead, but the overhead is reported to be 6 to 8% for common workloads. By introducing app VMs, FlexCapsule can migrate applications between instances with negligible downtime. Thanks to the strong isolation between app VMs, FlexCapsule can guarantee security between applications consolidated into one instance.

[Figure: two instances, each hosting app VMs that run one application each]

System Architecture
- Each app VM runs only one process and a library OS
  - An OS server in each instance supports multi-process functions
- Each app VM is assigned a private IP address
  - NAPT translates it into the public IP address of an instance
  - A VPN is constructed across app VMs in multiple instances

This figure shows the system architecture of FlexCapsule. Each app VM runs only one application process and a library OS. A library OS is a library that provides OS functions and is more lightweight than general-purpose OSes. To support multi-process functions, FlexCapsule runs an OS server in each instance. Each app VM is assigned a private IP address to reduce the cost of using public IP addresses. To provide services to the outside, FlexCapsule uses network address port translation (NAPT). NAPT translates the private IP address of an app VM into the public IP address of an instance, and vice versa. Thanks to this NAPT, multiple app VMs running different applications, for example, a web server and a mail server, can use the same public IP address. Also, FlexCapsule constructs a virtual private network (VPN) across app VMs in multiple instances to enable seamless VM migration.

[Figure: app VMs with private addresses 192.168.0.1-192.168.0.3 and an OS server in each of two instances, connected by a VPN and sharing the public address 131.206.203.1]
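
As an illustration of the NAPT idea, the following sketch programs a DNAT rule that forwards traffic arriving at the instance's public address to an app VM's private address (the addresses come from the figure). The use of iptables here is an assumption made for illustration; the talk does not say how the NAPT gateway is implemented.

```python
import subprocess

def add_napt_rule(public_ip, public_port, private_ip, private_port):
    """Forward public_ip:public_port to an app VM's private address."""
    subprocess.run([
        "iptables", "-t", "nat", "-A", "PREROUTING",
        "-d", public_ip, "-p", "tcp", "--dport", str(public_port),
        "-j", "DNAT", "--to-destination", f"{private_ip}:{private_port}",
    ], check=True)

# For example, a web server and a mail server can share one public address:
#   add_napt_rule("131.206.203.1", 80, "192.168.0.2", 80)
#   add_napt_rule("131.206.203.1", 25, "192.168.0.3", 25)
```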

Optimization Using App VMs (1/2)
- Application consolidation
  - Migrate under-utilized app VMs to one instance
  - Stop the source instances & re-assign their IP addresses
- Application de-consolidation
  - Deploy new instances & migrate over-utilized app VMs
  - Re-assign IP addresses

When performing application consolidation, FlexCapsule migrates under-utilized app VMs to one instance. If a source instance of the migration has no app VMs left, FlexCapsule stops it and re-assigns its public IP address to the destination instance. In this example, since instance 2 has no app VM left, it is stopped and its IP address is re-assigned to instance 1. Thus, migrated app VMs can be reached using the same public IP addresses as before application consolidation. In contrast, when performing application de-consolidation, FlexCapsule deploys new instances and migrates over-utilized app VMs to them. At the same time, it re-assigns one of the public IP addresses of the source instance to each new instance.

[Figure: app VMs 1 and 2 consolidated from instances 1 and 2 (131.206.0.1 and 131.206.0.2) into instance 1, which takes over both public IP addresses]
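
The consolidation procedure can be summarized with the following sketch. Instance, migrate_app_vm, stop_instance, and reassign_public_ip are hypothetical stand-ins for FlexCapsule's migration mechanism and the cloud API; only the ordering of the steps reflects the slide.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    public_ips: list
    app_vms: list = field(default_factory=list)

def consolidate(placements, dest, migrate_app_vm, stop_instance, reassign_public_ip):
    """Gather under-utilized app VMs (given as (app_vm, source) pairs) into dest."""
    for vm, src in placements:
        migrate_app_vm(vm, src, dest)       # live-migrate the app VM
        src.app_vms.remove(vm)
        dest.app_vms.append(vm)
        if not src.app_vms:                 # the source has no app VMs left
            stop_instance(src)              # stop it to cut costs
            for ip in src.public_ips:       # keep the old public addresses reachable
                reassign_public_ip(ip, src, dest)
                dest.public_ips.append(ip)
```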

Optimization Using App VMs (2/2)
- Scale-up/down
  - Deploy a new instance & migrate app VMs
  - Stop the original instance & re-assign its public IP address
- Scale-out
  - Clone app VMs & migrate them to new instances
  - Allow using the original public IP address via NAPT and the VPN

To scale an application up or down, as shown in the left figure, FlexCapsule first deploys a new instance of an appropriate type. Then, it migrates the app VMs inside the original instance to the new one. Finally, it stops the original instance and re-assigns its public IP address to the new one. To scale out an application, as shown in the right figure, FlexCapsule clones app VMs inside the original instance, creates new instances, and migrates the cloned app VMs to them. The new instances are assigned different public IP addresses, but the app VMs inside them can still be reached via the original public IP address using NAPT and the VPN.

[Figure: left, scale-up by migrating an app VM to a new instance that takes over 131.206.0.1; right, scale-out by cloning an app VM into a new instance with public IP 131.206.0.2]
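
Scale-up/down follows the same migrate-and-re-assign pattern as consolidation above; the sketch below only shows scale-out with cloned app VMs. launch_instance, clone_app_vm, and migrate_app_vm are again hypothetical helpers, not names from the paper.

```python
def scale_out(app_vm, original_instance, n_new,
              launch_instance, clone_app_vm, migrate_app_vm):
    """Clone an app VM n_new times and spread the clones over new instances."""
    clones = []
    for _ in range(n_new):
        new_instance = launch_instance()          # a freshly deployed instance
        clone = clone_app_vm(app_vm)              # fork of the original app VM
        migrate_app_vm(clone, original_instance, new_instance)
        clones.append((clone, new_instance))
    # The clones keep serving under the original public IP address, because
    # the NAPT rules at the original instance and the VPN forward traffic
    # to them wherever they run.
    return clones
```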

FlexCapsule OS
- A library OS running in an app VM
- Reduces the memory footprint of an app VM
  - Enables faster VM migration
- Reduces the overhead of nested virtualization
  - Simplifies virtualization using para-virtualization (PV)
  - Needs support for VM migration

A library OS running in an app VM is called the FlexCapsule OS. Since only the necessary functions are linked to an application, the FlexCapsule OS can reduce the memory footprint of an app VM. So, running an app VM per application process doesn't require much extra memory compared with using a general-purpose OS. The small memory footprint of an app VM also enables faster VM migration, because the migration time depends on the memory size of a VM. The FlexCapsule OS reduces the overhead of nested virtualization by using para-virtualization. Since a para-virtualized OS simplifies virtualization, the performance of app VMs can be improved. In exchange for this performance gain, a para-virtualized OS needs to support VM migration by itself.

[Figure: a traditional VM running a full OS, services, and the application vs. an app VM running only the application and a library OS]

FlexCapsule Server
- An OS server running in each instance
- Used for cooperation between app VMs
  - E.g., clone an app VM when it invokes fork
- Manages NAPT rules to forward packets to app VMs
  - Registers a NAPT rule when an app VM invokes listen
  - E.g., 131.206.203.1:80 => 192.168.0.2:80

An OS server running in each instance is called the FlexCapsule server. Since an app VM runs only one application process, it cannot implement multi-process functions by itself. The FlexCapsule server is used for cooperation between app VMs. For example, when an application invokes the fork function, the FlexCapsule server clones the whole app VM. It also manages NAPT rules to forward packets to app VMs. When an app VM invokes the listen function, the FlexCapsule server registers a NAPT rule between the public and private IP addresses, such as 131.206.203.1:80 => 192.168.0.2:80.

[Figure: the FlexCapsule server cloning an app VM on fork and registering a NAPT rule at the NAPT gateway when an app VM (192.168.0.2) invokes listen]
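
A possible shape of the FlexCapsule server's request handling is sketched below: app VMs notify the server of listen and fork events, and the server reacts by registering a NAPT rule or cloning the VM. The JSON-over-UDP transport, the port number, and the helper names are assumptions made for this sketch only.

```python
import json
import socket

def serve(public_ip, add_napt_rule, clone_app_vm, port=9000):
    """Handle listen/fork requests arriving from app VMs over the private network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(4096)
        req = json.loads(data)
        if req["op"] == "listen":
            # e.g., 131.206.203.1:80 => 192.168.0.2:80
            add_napt_rule(public_ip, req["port"], req["private_ip"], req["port"])
        elif req["op"] == "fork":
            # Clone the whole app VM on behalf of the single-process library OS.
            clone_app_vm(req["vm_name"])
```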

Migration of App VMs
- Send a suspend request to an app VM
  - The FlexCapsule OS suspends PV devices
- Transfer the states of the app VM to the destination
  - Memory contents and virtual CPU/device states
- The FlexCapsule OS resumes PV devices

When an app VM is migrated, the FlexCapsule server sends a suspend request to the FlexCapsule OS in the app VM. The FlexCapsule OS first suspends para-virtual devices such as network devices; specifically, it safely disconnects the device drivers from those virtual devices. After that, the FlexCapsule server transfers the states of the app VM to the destination instance. These states include the memory contents and virtual CPU and device states. When the migration of the app VM is complete, the FlexCapsule OS resumes the para-virtual devices.

[Figure: the OS server in the source instance suspending an app VM and transferring its VM states to the OS server in the destination instance]
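
The source-side migration steps can be written down as the following sketch. send_suspend_request, transfer_vm_state, and send_resume_request are hypothetical names for the interactions between the FlexCapsule server, the FlexCapsule OS, and the hypervisor.

```python
def migrate_app_vm(vm, dest_instance,
                   send_suspend_request, transfer_vm_state, send_resume_request):
    """Migrate one app VM from the current instance to dest_instance."""
    # 1. Ask the FlexCapsule OS in the app VM to suspend its para-virtual
    #    devices; it safely disconnects the drivers from the virtual devices.
    send_suspend_request(vm)
    # 2. Transfer the VM states: memory contents and virtual CPU/device states.
    transfer_vm_state(vm, dest_instance)
    # 3. Once the transfer completes, the FlexCapsule OS resumes its
    #    para-virtual devices at the destination.
    send_resume_request(vm, dest_instance)
```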

Migration-transparent Networking
- Apply the same NAPT rules after VM migration
  - Use the NAPT rules in the source instance
  - No NAPT rules need to be transferred to the destination
- Deliver packets inside a VPN across instances
  - Seamlessly forward packets to migrated app VMs

FlexCapsule provides migration-transparent networking by using a VPN across instances. Even after an app VM has been migrated, FlexCapsule can apply the same NAPT rules to that app VM. Since it continues to use the NAPT rules in the source instance, it doesn't need to transfer NAPT rules to the destination instance. When the source instance receives packets destined for the migrated app VM, FlexCapsule delivers them inside the VPN, which includes the destination instance. As a result, it can seamlessly forward packets to the migrated app VM.

[Figure: packets arriving at the NAPT gateway for 131.206.203.1 are forwarded over the VPN to the migrated app VM at 192.168.0.1 in the destination instance]

Fork of App VMs
- Duplicate the states of a parent app VM
  - Efficiently copy memory contents in the hypervisor
  - Copy virtual CPU/device states
- Share the disk between parent and child app VMs
  - Seamlessly insert copy-on-write disks into both app VMs

When an application invokes the fork function, the FlexCapsule OS sends a fork request to the FlexCapsule server. Alternatively, the user can send the request to the server. The FlexCapsule server creates a child app VM and duplicates the states of the parent app VM into the child. To perform this duplication efficiently, it invokes the underlying hypervisor to copy the memory contents. The hypervisor also copies the virtual CPU and device states. In addition, the FlexCapsule server shares the disk between the parent and child app VMs. This sharing is done in a copy-on-write manner, so modifications are reflected only in the app VM that made them. To achieve this, the FlexCapsule server seamlessly inserts copy-on-write disks into both app VMs.

[Figure: the OS server forking a parent app VM into a child app VM via the hypervisor, with both sharing the base disk through COW disks]
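
For the disk part of fork, one way to share a base disk in a copy-on-write manner is with qemu-img overlays, as sketched below. Whether FlexCapsule actually uses qemu-img overlays for its COW disks is an assumption, and copy_memory_and_state stands in for the hypervisor-level duplication.

```python
import subprocess

def fork_app_vm_disks(base_disk, parent_overlay, child_overlay,
                      copy_memory_and_state, backing_fmt="raw"):
    """Give parent and child app VMs COW overlays on a shared base disk."""
    for overlay in (parent_overlay, child_overlay):
        # Writes go to the overlay, so modifications are reflected
        # only in the app VM that made them.
        subprocess.run(["qemu-img", "create", "-f", "qcow2",
                        "-b", base_disk, "-F", backing_fmt, overlay], check=True)
    # Duplicate memory contents and virtual CPU/device states in the hypervisor.
    copy_memory_and_state()
```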

Load-balancing with Fork
- Create a pool of app VMs
  - Clone an app VM in advance or on demand
  - All the app VMs listen on the same port
- Forward packets to one app VM in round-robin
  - The pool can span instances by using the VPN

FlexCapsule enables load balancing with fork. For this purpose, it creates a pool of app VMs by repeatedly cloning an app VM, either in advance or on demand. If the original app VM listens on a port, all the cloned app VMs also listen on the same port; in this example, all the app VMs listen on port 80. When packets are sent to the listening port, FlexCapsule forwards them to one app VM in the pool, selected in a round-robin fashion. For example, the selection is done in the order of app VM, app VM', and app VM''. If some of the app VMs are migrated to other instances, the pool can span instances by using the VPN.

[Figure: the NAPT gateway for 131.206.203.1 distributing packets on port 80 over the VPN to app VMs at 192.168.0.1-192.168.0.3 in two instances]
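
Round-robin selection over the pool can be as simple as the sketch below; the pool contents are the private addresses from the figure, and forward_packet is a hypothetical hook at the NAPT gateway.

```python
import itertools

# The pool of app VMs that all listen on port 80 (addresses from the figure).
pool = ["192.168.0.1", "192.168.0.2", "192.168.0.3"]
_next_backend = itertools.cycle(pool)

def pick_backend():
    """Return the private address of the next app VM in round-robin order."""
    return next(_next_backend)

def handle_packet(packet, forward_packet):
    """Forward one packet arriving on the listening port to the chosen app VM."""
    forward_packet(packet, pick_backend())
```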

Implementation
- We have implemented FlexCapsule in Xen 4.2
  - VMs run Xen using nested virtualization
  - A DomU is an app VM
  - Dom0 runs the FlexCapsule server
- The FlexCapsule OS is based on OSv 0.21
  - Fully virtualized but uses para-virtual devices
  - Enables running many existing applications
    - With no or slight modifications

We have implemented FlexCapsule in Xen 4.2. Each VM runs Xen inside it using nested virtualization: its DomU is an app VM, and its Dom0 runs the FlexCapsule server. The FlexCapsule OS is based on OSv. OSv is fully virtualized but uses para-virtual devices, and it can run many existing applications with no or only slight modifications.

Experiments
- We conducted experiments to examine
  - Effectiveness of optimizing instance deployment using FlexCapsule
    - Application de-consolidation
    - Scale-out and scale-up
  - Performance of FlexCapsule
    - Migration of an app VM
    - Fork of an app VM

Experimental setup:
- Host: Intel Xeon E3-1290 v2 CPU, 8 GB memory
- Instance (VM): 1-4 vCPUs, 2 GB memory
- App VM: 1 vCPU, 4-256 MB memory

To confirm the effectiveness of FlexCapsule, we conducted several experiments. One of our aims is to show the effectiveness of optimizing instance deployment using FlexCapsule; we examined the performance improvement by application de-consolidation, scale-out, and scale-up. The other aim is to examine the performance of FlexCapsule itself; we measured the performance of migration and fork of an app VM.

Application De-consolidation
- We compared application performance under consolidation and de-consolidation
  - Ran three app VMs in one instance and in three instances
- De-consolidation could improve application performance
  - 1.9x - 2.7x

First, we compared application performance under consolidation and de-consolidation. We used three app VMs, which ran the lighttpd web server, the memcached caching system, and the Redis in-memory database. When consolidating these three app VMs, we ran them inside one instance; when de-consolidating them, we used three instances. The figure shows performance relative to the throughput of each application when the app VMs were consolidated. De-consolidation improved application performance by a factor of 1.9 to 2.7, which shows the effectiveness of application de-consolidation using app VMs.

Scale-out and Scale-up
- We increased the number of instances for scale-out
  - The total throughput of lighttpd increased
- We increased the number of vCPUs for scale-up
  - The throughput of lighttpd increased
  - The performance of network processing was improved

Next, we investigated the effectiveness of scale-out and scale-up using app VMs. For scale-out, we ran one app VM inside one instance and increased the number of instances. The left figure shows the total throughput of the lighttpd web servers: the total throughput with two instances was twice that with one instance. For scale-up, we increased the number of virtual CPUs assigned to the instance. As shown in the right figure, the throughput of lighttpd increased. lighttpd is single-threaded, so using multiple virtual CPUs doesn't increase its own performance, but the performance of network processing in the virtual network device was improved by increasing the number of virtual CPUs.

Migration Performance
- The downtime of an app VM was shorter
  - The FlexCapsule OS suspended fewer virtual devices
- The migration time of an app VM was 1.5x shorter
  - At the minimum memory size

We migrated an app VM and measured the downtime and migration time. For comparison, we also migrated a regular VM running Linux. The left figure shows the downtime, which hardly depended on the memory size of the VM. The downtime of an app VM was shorter than that of the regular VM because the FlexCapsule OS suspended fewer virtual devices. The right figure shows the migration time, which was proportional to the memory size of the VM. The app VM could run with a smaller amount of memory, thanks to the smaller memory footprint of the library OS. So, at the minimum memory size, the migration time was 1.5 times shorter.

Fork Performance
- We measured the execution time of fork
  - It increased slightly with the memory size
  - The bottleneck was restoring device states
- We compared with fork using standard commands
  - Our implementation was 18x faster

Finally, we measured the execution time of the fork function in an app VM. The left figure shows the fork time in FlexCapsule. The time increased only slightly with the memory size of the app VM; the bottleneck was restoring the virtual device states. We also compared our implementation with a fork implementation using Xen's standard save and restore commands. Our implementation was 18 times faster, thanks to native support in the hypervisor.

Related Work
- Picocenter [Zhang+ EuroSys'16]
  - Runs applications in containers inside instances
  - Designed for long-lived, mostly idle applications
- Resource as a Service [Ben-Yehuda+ HotCloud'12]
  - Dynamically changes the amount of resources of instances
  - Cannot decrease the number of vCPUs to less than one
- Graphene [Tsai+ EuroSys'14]
  - A Linux-compatible library OS supporting multi-process
  - Process-level isolation is weaker than VM-level isolation

Picocenter runs applications in containers inside instances. To optimize instance deployment, it swaps applications to and from cloud storage. Unlike FlexCapsule, it is designed for long-lived, mostly idle applications that can be stopped. Resource-as-a-Service clouds can dynamically change the amount of resources of instances; for example, VMware vCloud Air supports seamless and flexible scale-up and scale-down. However, it cannot decrease the number of virtual CPUs to less than one. Graphene is a Linux-compatible library OS supporting multi-process functions, and it achieves process fork and migration. However, its process-level isolation is weaker than the VM-level isolation provided by FlexCapsule.

Conclusion
- FlexCapsule
  - Optimizes instance deployment
  - Runs each application in an app VM inside an instance
    - Using nested virtualization and a library OS
  - Enables seamless and secure application consolidation
    - Using VM migration and strong isolation among VMs
- Future Work
  - Enable various applications to run in app VMs
  - Reduce the overhead of nested virtualization

In conclusion, we proposed FlexCapsule for optimizing instance deployment. FlexCapsule runs each application in an app VM inside an instance; it uses nested virtualization to run app VMs and a library OS to reduce the overhead. Using VM migration and strong isolation among VMs, FlexCapsule enables seamless and secure application consolidation across users. One piece of future work is to enable various applications to run in app VMs; for that, we need to advance multi-process support such as inter-process communication. Another direction is to reduce the overhead of nested virtualization; for example, we need to develop a para-virtualized OSv.