1 The Continuity of Out-of-band Remote Management Across Virtual Machine Migration in Clouds
Sho Kawahara and Kenichi Kourai, Kyushu Institute of Technology, Japan

I'm Sho Kawahara from Kyushu Institute of Technology. Today, I'd like to talk about "The Continuity of Out-of-band Remote Management Across Virtual Machine Migration in Clouds".

2 Remote VM Management in IaaS
- In-band remote management is usually used
  - A VNC server runs in a user VM
  - A user connects to the server via the network
- However, a user cannot access his VM when
  - He misconfigures the firewall or network
  - The system in the VM does not work normally

Infrastructure as a Service provides virtual machines hosted in data centers. Its users can set up the systems in the provided VMs, called user VMs. They manage the VMs through remote management software such as VNC. Usually, a user connects to a VNC server running in a user VM using a VNC client. This is called in-band remote management because the user accesses a user VM using the functionalities provided inside the VM. However, in-band remote management is not powerful enough. When the user misconfigures the network or firewall in a user VM, he cannot access the VM at all. Additionally, when the system inside a VM does not work normally, the user cannot obtain any information via VNC.

3 Out-of-band Remote Management
- Access a user VM via a VNC server in the management VM
  - Directly access virtual devices for a user VM
  - Virtual keyboard, mouse, and video card
- Does not rely on the network or system in the user VM

To allow the user to access his system even on failures inside a user VM, IaaS often provides out-of-band remote management via a special VM called the management VM. Unlike in-band remote management, a VNC server runs in the management VM, not in a user VM, and directly accesses virtual devices created for a user VM. This out-of-band remote management does not rely on the network or system in a user VM. The user can access a user VM as if he had locally logged in to the VM. Even if the network of a user VM becomes unreachable, or if the system inside a user VM does not work normally, the user can continue to manage the VM.

4 Discontinued on VM Migration
- VM management is discontinued on VM migration
  - A VNC server at a source host is terminated
  - It loses the access to the removed virtual devices
- A user has to manually reconnect to the destination
  - After identifying the reason and looking for the destination host

However, out-of-band remote management is discontinued on VM migration. VM migration is a technique for moving a VM to another host, used for load balancing and power saving. When a user VM is migrated, its virtual devices in the management VM are removed. At the same time, the VNC server in the management VM is terminated because it loses the access to the removed virtual devices. As a result, the VNC client is disconnected from the VNC server. To restart remote management, a big burden is imposed on the user. First, the user has to identify the reason why the VNC client was disconnected; the possible cause is not only VM migration but also network failures or system failures in the user VM or the management VM. If the disconnection is due to VM migration, the user has to look for the destination host and then reconnect to a VNC server at that host.

5 Data Loss on VM Migration
- Keyboard and mouse inputs can be lost on VM migration
  - In-flight data is dropped and is not retransmitted
  - Pending data in a VNC server is lost by its termination
  - Pending data in virtual devices is lost by their removal

Worse than that, keyboard and mouse inputs can be lost when a user VM is migrated. If input data has been sent from a VNC client but has not yet been received by a VNC server, such in-flight network packets are dropped. The TCP connection between the VNC client and server is terminated, so the dropped packets are not retransmitted at the network level. If input data received by the VNC server has not yet been sent to a virtual device, it is lost by the termination of the VNC server. If input data received by a virtual device has not been sent to the user VM, it is also lost by the removal of the virtual device. When keyboard inputs are lost, the user has to type them again after reconnection, but it may be difficult to even notice the loss of inputs.

6 D-MORE
- Continue out-of-band remote management across VM migration
- Provide a privileged and migratable VM (DomR)
  - A VNC server and virtual devices run in DomR
- Synchronously co-migrate DomR and its target VM
  - Maintain the connections between a VNC client, DomR, and its target VM

To solve these problems, we propose D-MORE for continuing out-of-band remote management across the migration of user VMs. D-MORE provides a privileged and migratable VM called DomR for remote management of a user VM. DomR runs only a VNC server and virtual devices for its target VM. A VNC client connects to the VNC server in DomR and accesses the user VM through the virtual devices in DomR. When a user VM is migrated, D-MORE synchronously co-migrates the corresponding DomR to the same destination host. Across the migration, D-MORE transparently maintains all the connections between the VNC client, DomR, and its target VM: the connection between the VNC client and DomR is maintained at the network level, and the connection between DomR and the target VM is maintained by D-MORE.

7 Data Loss Prevention
- No input data is lost during co-migration
- Pending data in a VNC server and virtual devices
  - Migrated as a part of DomR
- In-flight data sent by a VNC client
  - Retransmitted because the TCP connection to DomR is maintained

Using D-MORE, no input data for out-of-band remote management is lost during co-migration. First, pending data in the VNC server and virtual devices is preserved. The VNC server and virtual devices are migrated as a part of DomR, so they can continue to process such pending data at the destination host. Second, in-flight network packets from the VNC client to the VNC server are retransmitted by TCP, although they may be dropped temporarily while DomR is migrated. D-MORE preserves the TCP connection between the VNC client and server by running the VNC server in DomR.

8 DomR
- A VM with privileges for accessing only its target VM
- Establish shared memory
  - By mapping target VM's memory
  - Exchange input and output data
- Establish interrupt channels
  - By interception
  - Notify each other of new data in the shared memory

DomR has the privileges necessary for running virtual devices. Traditionally, virtual devices could run only in the management VM because they need to access a user VM. First, DomR has a privilege for establishing shared memory with its target VM. Specifically, DomR maps the target VM's memory and uses it as shared memory. Using the shared memory, the virtual devices in DomR exchange data with the target VM. Second, DomR has a privilege for establishing interrupt channels with its target VM. DomR intercepts the interrupt channels that the target VM is establishing with the management VM. Via the interrupt channels, the virtual devices in DomR and the target VM notify each other of the existence of new data in the shared memory. The pattern is sketched below.
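As a rough, self-contained Python sketch of this pattern, the snippet below uses a shared buffer and an event as stand-ins for a mapped guest page and an interrupt channel. The names and mechanism are hypothetical illustrations of the idea, not D-MORE's actual Xen code.

```python
# Hypothetical illustration of the shared-memory + interrupt-channel pattern.
# In D-MORE, DomR maps the target VM's pages and uses interrupt channels;
# here a multiprocessing Array and Event merely play those roles.
import multiprocessing as mp

def virtual_device(shared, new_data):
    """DomR side: write input data into shared memory, then notify the guest."""
    keys = b"ABCD"
    shared[0:len(keys)] = keys     # place the input data in the shared buffer
    new_data.set()                 # "send an interrupt": new data is available

def target_vm(shared, new_data):
    """Target VM side: wait for the notification, then read the data."""
    new_data.wait()                # "receive the interrupt"
    print("guest read:", shared[0:4].decode())

if __name__ == "__main__":
    shared = mp.Array('c', 16)     # stands in for a page shared by both VMs
    new_data = mp.Event()          # stands in for an interrupt channel
    vm = mp.Process(target=target_vm, args=(shared, new_data))
    dev = mp.Process(target=virtual_device, args=(shared, new_data))
    vm.start(); dev.start()
    dev.join(); vm.join()
```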

9 Migration Process of a Single VM
- Transfer the VM's memory to a destination host with the VM running
  - Transfer the entire memory
  - Repeat transferring dirty memory until it is small enough
- Stop the VM and transfer the remains
  - Still-dirty memory and CPU state
- Restart the VM at the destination

Before explaining the co-migration of DomR and its target VM, I'll explain the normal migration process of a single VM. After VM migration is started, the migration manager transfers the VM's memory from a source host to a destination host with the VM running. At first, it creates a new, empty VM at the destination host. Then it transfers the entire memory of the original VM to the new one. The VM's memory is modified during migration, so the migration manager repeatedly transfers only the dirty memory until its size becomes small enough. At the final stage of the migration process, the migration manager stops the original VM and transfers the remaining dirty memory and the CPU state. Then it terminates the original VM, and at the destination host the migration manager restarts the new VM. The loop is sketched below.
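To make the control flow concrete, here is a minimal, runnable Python sketch of this pre-copy loop, assuming a hypothetical ToyVM with dirty-page logging. Only the structure of the loop mirrors the description above; a real migration manager transfers pages over the network and reads the hypervisor's dirty log.

```python
# Toy sketch of pre-copy migration: iterative dirty-memory rounds, then a
# final stop-and-copy stage. All classes and thresholds here are hypothetical.
import random

class ToyVM:
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.dirty = set(range(num_pages))   # round 1 transfers the entire memory
        self.running = True

    def run_a_bit(self):
        """Simulate the guest dirtying a few pages while migration proceeds."""
        if self.running:
            for _ in range(8):
                self.dirty.add(random.randrange(self.num_pages))

    def collect_and_clear_dirty(self):
        d, self.dirty = self.dirty, set()
        return d

def migrate(vm, threshold=16):
    stream = []                               # stands in for the network stream
    dirty = vm.collect_and_clear_dirty()      # first round: all pages
    while len(dirty) > threshold:             # repeat until the rest is small enough
        stream.append(("pages", sorted(dirty)))
        vm.run_a_bit()                        # the VM keeps running (and dirtying)
        dirty = vm.collect_and_clear_dirty()
    vm.running = False                        # final stage: stop the VM
    stream.append(("pages", sorted(dirty)))   # remaining dirty memory
    stream.append(("cpu_state", {}))          # plus the CPU state
    return stream                             # the destination restarts the new VM

if __name__ == "__main__":
    rounds = migrate(ToyVM(num_pages=256))
    print(len(rounds) - 1, "page rounds; last round had",
          len(rounds[-2][1]), "pages")
```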

10 Co-migration
- Synchronize the two migration processes of DomR and its target VM
  - Stop the two VMs simultaneously to reduce downtime
- Restore shared memory after reconstructing the target VM's memory
- Save and restore interrupt channels while the VMs are stopped

Co-migration, on the other hand, synchronizes the two migration processes of DomR and its target VM. First, the two migration managers for DomR and its target VM transfer the VMs' memory at the same time. Then each migration manager repeatedly transfers dirty memory until the other can enter the final stage. Thanks to this synchronization, the downtime is reduced. After the target VM's memory has been reconstructed, the migration manager for DomR restores the shared memory with the target VM. After both VMs have stopped, the migration manager for DomR saves the state of the interrupt channels, and these interrupt channels are restored before both VMs are restarted. These details are explained on the next slides; the synchronization point itself is sketched below.
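The synchronization point can be illustrated with two threads and a barrier. This is only a sketch of the idea, with hypothetical names; D-MORE's actual migration managers coordinate inside Xen rather than in Python.

```python
# Toy sketch of co-migration: both migration managers keep doing pre-copy
# rounds and only enter the final stop-and-copy stage together, so DomR and
# its target VM are stopped at (almost) the same time.
import threading

final_stage = threading.Barrier(2)   # both managers must be ready to stop

def migration_manager(name, precopy_rounds):
    for i in range(precopy_rounds):
        print(f"{name}: pre-copy round {i + 1}")
    # In the real system, extra dirty-memory rounds continue while waiting
    # for the other manager; here we simply block at the synchronization point.
    final_stage.wait()
    print(f"{name}: stop VM, transfer remaining dirty memory and CPU state")

if __name__ == "__main__":
    threads = [threading.Thread(target=migration_manager, args=("DomR", 2)),
               threading.Thread(target=migration_manager, args=("Target VM", 4))]
    for t in threads: t.start()
    for t in threads: t.join()
```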

11 Restoring Shared Memory
- Save the mapping state of the target VM's memory
  - Inspect the page tables of DomR
  - Set a monitor bit in a PTE if a target VM's page is mapped, meaning shared memory
- Remap the target VM's memory at the destination
  - Inspect the received page tables
  - Restore the mapping state if the monitor bit is set in a PTE

Traditionally, migrating a VM that maps another VM's memory is not considered. To restore the shared memory at the destination host, the migration manager for DomR saves the mapping state of the target VM's memory at the source host. First, it inspects the page tables of DomR. If a memory page of the target VM is mapped to DomR, the migration manager sets a monitor bit in the corresponding page table entry. At the destination host, the migration manager inspects the received page tables. If the monitor bit is set in a page table entry, the migration manager remaps the corresponding memory page of the target VM to DomR, as sketched below.
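The save-and-restore of the mapping state can be sketched as follows. The bit position and page-table layout are hypothetical simplifications of real x86 PTEs; only the monitor-bit idea follows the slide.

```python
# Toy sketch of the monitor bit: mark shared mappings in DomR's page table
# entries at the source, and remap them at the destination. The 4KB-page
# layout and the chosen bit are assumptions for illustration only.
MONITOR_BIT = 1 << 11        # an otherwise unused (software-available) PTE bit

def save_mapping_state(domr_ptes, target_vm_frames):
    """Source host: set the monitor bit in every PTE that maps a target VM page."""
    for i, pte in enumerate(domr_ptes):
        if (pte >> 12) in target_vm_frames:      # frame number held by this PTE
            domr_ptes[i] = pte | MONITOR_BIT     # remember: this was shared memory
    return domr_ptes

def restore_mapping_state(received_ptes, remap):
    """Destination host: remap a target VM page wherever the monitor bit is set."""
    for i, pte in enumerate(received_ptes):
        if pte & MONITOR_BIT:
            remap(i, pte >> 12)                  # re-establish the shared mapping

if __name__ == "__main__":
    ptes = [(1 << 12) | 1, (2 << 12) | 1, (3 << 12) | 1]   # frames 1-3, present
    shared_frames = {2}                      # frame 2 belongs to the target VM
    sent = save_mapping_state(ptes, shared_frames)
    restore_mapping_state(sent, lambda idx, f: print(f"remap frame {f} at PTE {idx}"))
```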

12 Restoring Interrupt Channels
- Save the state of the interrupt channels
  - Obtain a list of the interrupt channels, only between DomR and its target VM
  - Transfer the pairs of ports used for the interrupt channels
- Re-establish the interrupt channels at the destination
  - Guarantee that the same port numbers are used
  - We have modified the resume operation in the OS

Traditionally, all of the interrupt channels are closed on VM migration. To restore the interrupt channels at the destination host, the migration manager for DomR obtains a list of the interrupt channels established between DomR and its target VM. Then it transfers the pairs of ports used for the interrupt channels, for example ports 2 and 10. At the destination host, the migration manager re-establishes the interrupt channels between DomR and its target VM so that these VMs use the same pairs of ports as at the source host. To reuse the re-established interrupt channels in the operating systems of these VMs, we have modified the resume operation for virtual interrupts. A sketch of this save-and-restore flow follows.
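Here is a minimal sketch of the save-and-restore flow, with hypothetical data structures; Xen's interrupt channels are managed by the hypervisor, not by Python code like this.

```python
# Toy sketch: save only the interrupt channels between DomR and its target VM
# as pairs of ports, then re-bind the same port numbers at the destination.
def save_channels(all_channels, domr, target_vm):
    """Source host: keep the port pairs of channels between DomR and the target VM."""
    return [(local, remote)
            for (end_a, end_b, local, remote) in all_channels
            if {end_a, end_b} == {domr, target_vm}]

def restore_channels(port_pairs, bind):
    """Destination host: re-establish the channels with the same port numbers."""
    for local_port, remote_port in port_pairs:
        bind(local_port, remote_port)   # the modified resume operation then reuses them

if __name__ == "__main__":
    channels = [("DomR", "TargetVM", 2, 10),     # co-migrated, must be restored
                ("DomR", "Dom0", 3, 7)]          # not between DomR and the target VM
    saved = save_channels(channels, "DomR", "TargetVM")
    restore_channels(saved, lambda l, r: print(f"re-bind ports {l} <-> {r}"))
```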

13 Data Transfer in Shared Memory
- DomR's writes to shared memory are not detected
  - Only the target VM's writes are detectable
  - Data written by DomR may not be transferred
- D-MORE always considers shared memory as dirty
  - Guarantees that updated shared memory is transferred

During VM migration, the migration manager repeatedly transfers only the modified memory to the destination host. To do this, it obtains information on dirty memory, but DomR's writes to the shared memory are not detectable; only the memory owner's writes are detectable, and the owner of the shared memory is the target VM. So after DomR writes input data into the shared memory, the data may not be transferred, which could cause data loss in D-MORE. To prevent such data loss, D-MORE always considers the shared memory as dirty. Thereby it is guaranteed that the migration manager for the target VM transfers the shared memory modified by DomR, as illustrated below.
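The fix amounts to forcing the shared pages into every dirty-memory round, as in this small sketch. The names are hypothetical; in D-MORE this happens inside the migration manager for the target VM.

```python
# Toy sketch: the shared pages are unconditionally added to the dirty set,
# because DomR's writes to them do not show up in the target VM's dirty log.
def dirty_pages_for_round(detected_dirty, shared_pages):
    return set(detected_dirty) | set(shared_pages)

if __name__ == "__main__":
    detected = {5, 9}        # pages the target VM itself wrote during this round
    shared = {42, 43}        # pages DomR may have written input data into
    print(sorted(dirty_pages_for_round(detected, shared)))   # [5, 9, 42, 43]
```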

14 Experiments
- We have implemented D-MORE in Xen and Linux
- We conducted several experiments for D-MORE
  - Data loss on co-migration
  - Performance of remote management and co-migration

Experimental setup:
- Server: CPU Intel Xeon E GHz, memory 128MB (DomR) / 2GB (user VM), NIC Gigabit Ethernet, VMM Xen 4.3.2, OS Linux
- Client: CPU Intel Xeon E GHz, memory 8GB, NIC Gigabit Ethernet, VNC client TigerVNC, OS Windows 7

We have implemented D-MORE in Xen and Linux. Then we conducted several experiments to show the effectiveness of D-MORE. One aim is to confirm the data loss prevention on co-migration. The other aim is to examine the performance of remote management and co-migration. We ran one DomR and one target VM in a server PC and TigerVNC in a client PC.

15 Data Loss on Co-migration
- We examined whether D-MORE could prevent data loss during VM migration
  - Sent a key every 50 ms from a VNC client to a VNC server
  - Counted the number of lost keys in a virtual keyboard
- Original Xen: 1.4 keys were lost
- D-MORE: no keys were lost across the co-migration
  - The number of TCP retransmissions was 8.5

We examined whether D-MORE could prevent data loss in out-of-band remote management during VM migration. We sent a key every 50 ms from a VNC client to a VNC server and monitored the data received by a virtual keyboard. Then we counted the number of lost keys. As a result, in the original Xen, 1.4 keys were lost in the virtual keyboard of the management VM on average. On the other hand, in D-MORE, no keys were lost in the virtual keyboard of DomR across the co-migration. During the migration of DomR, the number of TCP retransmissions was 8.5 on average. These results show that D-MORE prevented data loss successfully.

16 Overhead of Using DomR
- We measured the response time in out-of-band remote management
  - Keyboard input: we sent a keyboard event and received its remote echo
  - Full-screen update: we ran a screen saver that redrew the full screen
- The overhead was negligible
  - Keyboard input: increased by 221 μs
  - Screen update: increased by 9 μs

Next, we examined the overhead of out-of-band remote management using DomR. First, we measured the response time of a keyboard input, that is, the time from when the VNC client sent a keyboard event until it received the screen update caused by its remote echo. For comparison, we measured the response time in the original Xen, where the VNC server ran in the management VM. The increase in the response time was 221 μs, which is negligible. Next, we examined the response time of a full-screen update of the target VM; for this, we ran a screen saver that redrew the full screen frequently in the target VM. The difference in the response times between the original Xen and D-MORE was 9 μs and negligible.

17 Co-migration Time
- We measured the time needed for co-migration of DomR and its target VM
  - With and without remote management
  - Baseline: independent migration of two normal VMs
- The co-migration time was proportional to the memory size
  - Without inputs: increased by 1.7 s
  - With inputs: increased by 15 s, due to dirty memory

Next, we measured the time needed for the co-migration of DomR and its target VM. To examine the impact of out-of-band remote management, we measured the co-migration time both when a VNC client did not connect to the VNC server and when it sent a key every 50 ms. For comparison, we migrated two normal VMs in parallel without synchronization, using the original Xen. For various memory sizes of the target VM, the co-migration time was proportional to the memory size. Compared with independent migration, the co-migration time in D-MORE increased by only 1.7 seconds at maximum when we did not perform remote management. When we performed remote management during co-migration, the co-migration time increased by 15 seconds at maximum. The reason is that a larger amount of memory became dirty.

18 Downtime during Co-migration
- The downtime of DomR and its target VM
  - We measured the time in which a VM was not running
  - The downtime of DomR was 3x longer
- The user-perceived downtime at a VNC client
  - We sent a key every 50 ms and measured the response time
  - The downtime was acceptable

We measured the downtime of DomR and its target VM during co-migration, that is, the time in which each VM was not running. The downtime of DomR was 3 times longer than that of the target VM due to the synchronization in co-migration. Next, we measured the user-perceived downtime at a VNC client. We sent a key every 50 ms and measured the response time, and we regarded a long response time at the final stage of co-migration as the user-perceived downtime. The variance was very large, but the average user-perceived downtime was 827 ms at maximum. We believe that this downtime is acceptable for the purpose of remote management.

19 Performance Degradation by Co-migration
- We examined the impact of co-migration
  - Response time (keyboard): increased by 5.4 ms, lasting for 30 s
  - Frame rate (screen): decreased by 0.4 fps

Finally, we examined the performance degradation caused by co-migration. To examine the impact on the response time, we sent a key every 50 ms during co-migration. After we started co-migration, the response time increased by 5.4 ms on average, and this performance degradation lasted for 30 seconds. Next, to examine the impact on the frame rate of screen updates, we ran the screen saver in the target VM. The frame rate decreased by about 0.4 frames per second on average after the co-migration was started, and the degradation of the frame rate also lasted for 30 seconds.

20 Related Work
- VNC Proxy [CloudStack, etc.]
  - Transparently switches a VNC server on VM migration
  - Pending data in a VNC server is lost
- SPICE [Red Hat, Inc. '09]
  - Supports VM migration at the protocol level
  - VM management depends on specific software
- VMCoupler [Kourai et al. '13]
  - Runs intrusion detection systems in a dedicated VM (DomM)
  - Enables co-migration of two VMs for secure monitoring
  - The synchronization is different from D-MORE

There are two approaches, different from D-MORE, to continuing out-of-band remote management across VM migration. The first approach is to use a VNC proxy. To manage a user VM, a VNC client accesses a VNC server in the management VM via a VNC proxy. When a user VM is migrated, the VNC proxy can transparently switch the connection, but pending data in the VNC server and virtual devices is lost at the source host. The second approach is to use SPICE, which is remote management software developed for KVM. SPICE supports VM migration at the protocol level: when a user VM is migrated, a SPICE client automatically switches the connection and prevents data loss. One disadvantage is that VM management depends on specific remote management software, whereas D-MORE can use any remote management software, such as SSH. Finally, VMCoupler runs intrusion detection systems in a dedicated VM, named DomM, and monitors the target VM. Similar to D-MORE, VMCoupler can synchronously co-migrate DomM and its target VM, but unlike D-MORE, it synchronizes the two migration processes for secure monitoring. Therefore, the synchronization in co-migration is largely different between VMCoupler and D-MORE.

21 Conclusion
- We proposed D-MORE for continuing out-of-band remote management across VM migration
  - Run a VNC server and virtual devices in DomR
  - Synchronously co-migrate DomR with its target VM
  - Prevent data loss and reduce downtime
- Future work
  - Apply D-MORE to other remote management software such as SSH
  - Support fully virtualized guest OSes such as Windows

In conclusion, we proposed D-MORE for continuing out-of-band remote management across VM migration. D-MORE runs a VNC server and virtual devices in DomR and synchronously co-migrates DomR with its target VM to prevent data loss and reduce downtime. One piece of future work is to apply D-MORE to other remote management software such as SSH; we are working on this now. Another direction is to support fully virtualized guest operating systems such as Windows in D-MORE. That's all for my presentation. Thank you.

