Virtual Cloud
R98922135 陳昌毅, R98944033 顏昭恩, R98922150 黃伯淳
2010/06/03
Outline: Goal; Introduction (full virtualization, para-virtualization); VMware; Xen; System Diagram; Current Progress; Future Work; Q&A
Goal
Construct a cloud platform that provides VMs as a service.
Motivation: a private cloud platform. Common cloud platforms (Eucalyptus, OpenNebula) do not provide migration.
Challenge: a convenient environment for migrating VMs. Accessing images through NFS is extremely slow.
Related Work
Xen: a hypervisor (virtualization).
Eucalyptus: a controller system with no migration mechanism.
OpenNebula.
Innovations
We use a distributed shared file system to host VM images, so VMs can migrate without a central storage server.
We construct VMs as multiple prototypes according to their functionality.
Why Virtualization
To increase the utilization of costly hardware resources; ease of management; portability.
Xen
Para-virtualization; full virtualization (hardware-assisted); Xend, the management daemon running in domain-0.
System Diagram
Shared File Systems
NFS: easy to manage, but an I/O bottleneck.
Lustre: a distributed file system, but it requires kernel modification.
Gluster: a user-space distributed (P2P) file system.
The modified Lustre kernel is unstable when working with Xen.
Migration
Why do we need live migration? To move a VM without interrupting its service.
Products: Xen live migration; VMware VMotion.
Two important considerations: 1. downtime; 2. total migration time.
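The two metrics can be related with a simple back-of-the-envelope model of pre-copy migration (an illustrative sketch, not Xen's actual policy; the function name and parameters are invented for this example). Each round re-sends the pages dirtied during the previous round, and the final round runs with the VM paused, so its duration is the downtime.

```python
def precopy_estimate(mem_bytes, dirty_rate, bandwidth, stop_threshold, max_rounds=30):
    """Estimate total migration time and downtime for pre-copy migration.

    dirty_rate and bandwidth are in bytes/second. Each round re-sends the
    bytes dirtied while the previous round was being transferred; once the
    remaining dirty set is below stop_threshold, the VM is paused and the
    last transfer becomes the downtime (stop-and-copy).
    """
    remaining = mem_bytes
    total = 0.0
    for _ in range(max_rounds):
        t = remaining / bandwidth        # time to send the current dirty set
        total += t
        remaining = dirty_rate * t       # bytes dirtied during that transfer
        if remaining <= stop_threshold:
            break
    downtime = remaining / bandwidth     # VM paused for the final copy
    total += downtime
    return total, downtime

# Example: 1 GB VM, 10 MB/s dirty rate, 100 MB/s link, stop below 1 MB
total, downtime = precopy_estimate(1e9, 1e7, 1e8, 1e6)
# total ≈ 11.11 s, downtime ≈ 0.01 s
```

With a dirty rate well below the link bandwidth the rounds shrink geometrically, which is why pre-copy keeps downtime small at the cost of a longer total migration time.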
Pre-copy migration
Managed Migration (Xen)
1. 1st round: copy all memory pages to the destination machine; replace the shadow page table with the original one and mark all pages read-only; create a dirty bitmap for the VM.
2. 2nd to (n-1)th rounds: during the page transfer, if the VM tries to modify a page, it traps into Xen, which sets the corresponding bit in the dirty bitmap; dirty pages are re-sent in the next round; the dirty bitmap is reset for each round.
3. nth round: when the dirty rate exceeds the upper bound, perform stop-and-copy.
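The rounds above can be sketched as a simplified simulation (this is a model of the scheme, not the actual Xen implementation; for simplicity the stop condition here fires when the remaining dirty set is small, whereas Xen also stops once the dirty rate exceeds its bound):

```python
def managed_migration(total_pages, writes_per_round, stop_bound=1):
    """Simulate Xen-style managed pre-copy rounds with a dirty bitmap.

    writes_per_round lists, for each round, the page numbers the guest
    writes while that round's transfer is in flight (an assumption for
    this sketch). Returns (rounds before stop-and-copy, pages sent).
    """
    dirty = set(range(total_pages))    # round 1: every page must be sent
    pages_sent = 0
    rounds = 0
    for writes in writes_per_round:
        pages_sent += len(dirty)       # transfer the current dirty set
        dirty = set(writes)            # bitmap reset; writes during the
        rounds += 1                    # transfer trap into Xen and set bits
        if len(dirty) <= stop_bound:   # dirty set small enough:
            break                      # pause the VM for stop-and-copy
    pages_sent += len(dirty)           # final stop-and-copy transfer
    return rounds, pages_sent

# 100-page VM; the guest keeps rewriting a shrinking working set
rounds, sent = managed_migration(100, [[1, 2, 3], [1, 2], [1]])
# rounds = 3, sent = 106 (100 + 3 + 2 re-sent, plus 1 in stop-and-copy)
```

The re-sent total (106 pages for a 100-page VM) illustrates the trade-off from the previous slide: pre-copy transfers some pages more than once to keep the stop-and-copy phase, and hence the downtime, short.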
User Interface
Demo: Migration Source
Demo: Migration Destination
Hadoop Benchmark
13 nodes plus a master sort 8.6 GB of data.
Local: all images are on local disks.
SFS_local: all images are on local disks and are accessed via the Gluster file system.
SFS_remote: all images are on remote disks and are accessed over the Gluster file system.
Performance Comparison (seconds)
local: 87.50    SFS_local: 103.90    SFS_remote: 160.70
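From the measured times, the relative overhead of each configuration versus the local baseline works out as follows (plain arithmetic on the numbers in the table above):

```python
# Measured Hadoop sort times from the benchmark slide (seconds).
times = {"local": 87.50, "SFS_local": 103.90, "SFS_remote": 160.70}

base = times["local"]
overhead_pct = {k: round((v - base) / base * 100, 1) for k, v in times.items()}
print(overhead_pct)
# SFS_local is about 18.7% slower than local; SFS_remote about 83.7% slower
```

So routing images through Gluster on local disks costs under 20%, while going to remote disks nearly doubles the sort time, consistent with the slide's point that image placement dominates.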
Q&A