Container-based OS Virtualization: A Scalable, High-performance Alternative to Hypervisors
Stephen Soltesz, Herbert Pötzl, Marc Fiuczynski, Andy Bavier, and Larry Peterson
Slide 1: PlanetLab Usage
- Typical node: 2.4 GHz CPU, 1 GB memory, 100-200 GB disk
- ~250-300 configured VM file systems on disk
- 40-90 resident VMs (with ≥ 1 process)
- 5-20 active VMs using the CPU
[Figure: per-node histograms of the number of resident VMs (0-100) and the number of active VMs (0-30)]
Slide 2: What Is the Trade-Off?
Slide 3: Usage Scenarios
- Efficiency as performance: IT data centers; grid and HPC clusters
- Efficiency as low overhead: Linux-based phones; OLPC laptops; enhanced Wi-Fi routers
- Efficiency as scalability: web hosting; Amazon EC2; PlanetLab, VINI, and network research
Slide 4: Presentation Outline
- Why container-based OS virtualization?
- High-level design: hypervisor vs. container-based OS
- Guest VM environment: Xen vs. VServer
- Evaluation
Slide 5: Hypervisor Design
[Figure: hypervisor architecture with a separate driver domain]
Slide 6: Container Design
[Figure: container architecture with VM1, VM2, ..., VMn sharing one kernel]
Slide 7: Feature Comparison

  Feature                  Hypervisor   Container
  Multiple kernels         yes          no
  Load arbitrary modules   yes          no
  Local administration     all          all
  Live migration           yes          OpenVZ
  Live system update       no           Zap
Slide 8: Presentation Outline
- Why container-based OS virtualization?
- High-level design: hypervisor vs. container-based OS
- Guest VM environment: Xen vs. VServer
- Evaluation
Slide 9: Xen 3.0 Guest VM
- I/O path: process to guest OS, then guest OS to the isolated driver domain (IDD)
- Resource control (driver domain): maps virtual devices; CFQ for disk, HTB for network
- Security isolation (hypervisor): enforced at the physical level (PCI addresses, virtual memory)
- Resource control (hypervisor): allocates resources and schedules all VMs, both guest VMs and the IDD
- Result: two levels of scheduling for a guest (hypervisor schedules VMs, guest OS schedules processes)
Slide 10: VServer 2.0 Guest VM
- Security isolation: access to logical objects; a context-ID filter over user IDs, SHM and IPC addresses; file system barriers
- Resource control: maps each container to HTB for network and CFQ for disk; logical limits on processes, open file descriptors, memory, and locks
- Optimizations: file-level copy-on-write
- I/O path: process directly to the container-based OS (COS)
- Scheduler: single level; a token bucket filter preserves the O(1) scheduler
Slide 11: VServer Implementation
- 8,700 lines of code across 350+ files
- Leverages existing kernel implementations
- Applied to logical resources, so it is not architecture-specific (MIPS, ARM, SPARC, etc.)
- Low overhead
Slide 12: Guest Comparison

  Property                 Xen 3.0                  VServer 2.0
  Level of virtualization  physical                 logical
  Resource control         HTB, CFQ, etc.           HTB, CFQ, etc.
  Scheduler                2 levels (hyp + guest)   1 level
  I/O path                 3 transfers              2 transfers
Slide 13: Configuration

  System software:
    Kernel        Linux       VServer 2.0   Xen 3.0.4
    Version       2.6.16.33 (all)
    Distribution  Fedora Core 5 (all)
    File system   independent LVM partitions (all)
    Scheduler     O(1)        O(1) + TBF    Credit

  Hardware (HP DL360 G4p):
    CPU      2 x single-core Xeon, 2 MB L2 cache
    Network  2-port GbE
    Memory   4 GB
Slide 14: Network I/O: TCP Receive
Slide 15: Disk I/O: Write
Slide 16: CPU & Memory Performance
Slide 17: Performance at Scale: UP (uniprocessor)
Slide 18: Performance at Scale: SMP
Slide 19: Conclusion
- Virtualization for manageability: a variety of current implementations
- No one-size-fits-all solution: hypervisors offer compelling features; containers are built on well-understood technology
- Isolation and efficiency trade off against each other
- When the trade-off is acceptable, VServer is an alternative: native efficiency for I/O, a low-overhead implementation, and greater scalability
Slide 20: Questions
Thank you.
Slide 22: Speculation on Future Trends
- Future improvements to both platforms
- COS-Linux and Linux as hypervisor (KVM)
Slide 23: Conclusion
- Performance, lower overhead, scalability