1
VDN: Virtual Machine Image Distribution Network for Cloud Data Centers
Chunyi Peng1, Minkyong Kim2, Zhe Zhang2, Hui Lei2. 1University of California, Los Angeles; 2IBM T.J. Watson Research Center. IEEE INFOCOM 2012, Orlando, Florida, USA
2
Cloud Computing: the delivery of computing as a service
Infocom 2012, C. Peng (UCLA)
3
Service Access in Virtual Machine Instances
Cloud clients: web browser, mobile app, thin client, terminal emulator, …
Software as a Service (SaaS): CRM, email, virtual desktop, communications, games, …
Platform as a Service (PaaS): execution runtime, database, web server, development tools, …
Infrastructure as a Service (IaaS): virtual machines, servers, storage, load balancers, networks, …
[Figure: client service requests (e.g., HTTP) flow to VM instances in the cloud stack.]
Problem: on-demand VM provisioning
4
Time for VM Image Provisioning
Timeline: user request → request processing → VM image transfer → VM boot-up
Our focus: transfer time. In reality, the response takes several to tens of minutes!
5
Why Slow? VM image files are large (several or tens of GB)
Centralized image storage becomes a bottleneck
[Figure: data-center tree (core, aggregation, access/ToR switches); a single image server handles every RH5.6 request.]
6
Roadmap
Basic VDN idea: enable collaborative sharing
VDN solution for efficient sharing: basic sharing units; metadata management
Performance evaluation
Conclusion
7
VDN: Speedup VM Image Distribution
Enable collaborative sharing
Utilize the “free” VM images already present on hosts
Exploit source diversity and make full use of network bandwidth
[Figure: data-center tree; hosts holding RH5.5, RH5.6, and RH6.0 images serve new RH5.6 requests alongside the image server.]
8
How to Enable Collaborative Sharing?
What is the basic data unit for sharing?
File-based sharing: allows sharing only among identical files
Chunk-based sharing: allows sharing of common chunks across different files
How is content location information managed?
Centralized solution: directory service, etc.
Distributed solution: P2P overlay, etc.
9
What is the Appropriate Sharing Unit?
Two factors:
The number of identical, live VM image instances
The similarity between different VM images
We conducted real-trace analysis and cross-image similarity measurements:
VM traces from six operational data centers over 4 months
VM images covering different Linux/Windows versions, IBM services (DB2, Rational, WebSphere), etc.
10
VM Instance Popularity
The distribution of image popularity is highly skewed:
A few popular images account for a large portion of VM instances
Many unpopular images have only a small number of instances (< 5)
Few peers can participate in file-based sharing of unpopular VM images
11
VM Instance Lifetime: the lifetime of VM instances varies
40% of instances (mostly of the more popular VM images) live less than 13 minutes
Unpopular VM images tend to have longer lifetimes
A VM image distribution network must cope with instances of widely varying lifetimes
12
VM Image Structure: tree-based VM image hierarchy
[Figure: image tree with usage shares — Linux (60%: Red Hat (53%), SUSE, …), Windows (25%), Services (11%), Misc (4%); Red Hat Enterprise Linux v5.5 32-bit (26.6%), v5.5 64-bit (18.7%), v5.4 32-bit (4%), v5.6 32-bit (0.2%), …; service products (database, IDE, web application server) each at 1% or less.]
13
VM Image Similarity: high similarity across VM images
Chunking schemes: fixed-size and Rabin fingerprinting
Similarity: Sim(A,B) = |A's chunks that appear in B| / |A|
Chunk-based sharing can exploit this cross-image similarity
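As a rough sketch (not the paper's implementation), fixed-size chunking and the similarity metric above can be expressed as follows; the chunk size is an illustrative assumption:

```python
import hashlib

# Illustrative chunk size, not the value used in the paper.
CHUNK_SIZE = 256 * 1024

def chunk_hashes(data: bytes, size: int = CHUNK_SIZE) -> list:
    """Split an image into fixed-size chunks, each identified by its SHA-1 digest."""
    return [hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def similarity(a: bytes, b: bytes) -> float:
    """Sim(A, B) = |A's chunks that appear in B| / |A|."""
    ha, hb = chunk_hashes(a), set(chunk_hashes(b))
    return sum(1 for h in ha if h in hb) / len(ha)
```

Rabin fingerprinting would instead place chunk boundaries by content, so insertions in one image do not shift every later chunk.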
14
Enable Chunk-based Sharing
Decouple VM images into VM chunks
Exploit similarity across VM images
Provide higher source diversity and more sharing opportunities
[Figure: hosts holding RH5.5, RH5.6, and RH6.0 images all serve chunks for a new RH5.6 request.]
Questions:
How to maintain chunk location information (metadata)?
How to stay scalable while enabling fast data transmission?
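The chunk-location metadata in question can be pictured as a map from chunk hash to the hosts currently holding that chunk; this toy, centralized version (names and layout are hypothetical) just shows the publish/lookup contract:

```python
from collections import defaultdict

class ChunkIndex:
    """Toy chunk-location metadata: chunk hash -> set of hosts holding it."""

    def __init__(self):
        self.locations = defaultdict(set)

    def publish(self, host: str, chunk_hash: str) -> None:
        """A host announces that it holds a chunk."""
        self.locations[chunk_hash].add(host)

    def lookup(self, chunk_hash: str) -> set:
        """Return all known holders of a chunk (empty set if none)."""
        return self.locations.get(chunk_hash, set())

# A host that cached an RH5.5 chunk can serve the same chunk to an RH5.6 request.
index = ChunkIndex()
index.publish("host-a", "chunk-0042")
peers = index.lookup("chunk-0042")
```

The two questions above are exactly about where this map lives (centralized vs. distributed) and how expensive publish/lookup become at scale.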
15
How to Manage Location Information?
Solution I: centralized metadata server
Pros: simple
Cons: bottleneck at the metadata server
Solution II: P2P overlay network, e.g., DHT
Pros: distributed operations
Cons: unaware of data-center topology, and may introduce high network overhead
16
Issues in Conventional P2P Practice
One logical operation (lookup/publish) maps to multiple physical hops, and the cost of each hop (e.g., time) can be high!
Solution:
Reduce the number of hops
Reduce the cost of each physical hop
Keep traffic local, or with close neighbors
17
Topology-aware Metadata Management
Divide all hosts into a multi-level hierarchy and manage chunks within each level
Utilize the static/quasi-static (controlled) topology
Exploit high-bandwidth local links in the hierarchical structure
[Figure: hierarchy levels L1/L2/L3 of hosts under the image server.]
18
VDN: Encourage Local Communication
Local chunk metadata storage
Index nodes maintain only metadata within their own hierarchy level; no need for a global view at every index node
Local chunk metadata operations (e.g., lookup/publish)
Ask the closest index nodes first, lowering operation overhead
Local chunk data delivery
Enables high-bandwidth transmission between nearby hosts (e.g., within the same rack)
19
VDN Operation Flows: recursive operation from lower-hierarchy to higher-hierarchy levels
[Figure: numbered flows across L1/L2/L3, the local cache, and the image server for (A) metadata update, (B) metadata lookup, and (C) data transmission.]
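The recursive lookup can be sketched as follows: try the closest index level first and escalate only on a miss, falling back to the image server last. The per-level dict layout and names here are hypothetical, not the paper's data structures:

```python
def vdn_lookup(chunk_hash, indexes, image_server="image-server"):
    """Recursive lookup: try index levels in order of closeness (L3 -> L2 -> L1).

    `indexes` is an ordered list of per-level dicts mapping chunk hash -> host.
    """
    for level in indexes:              # lower hierarchy = closer, cheaper hops
        if chunk_hash in level:
            return level[chunk_hash]   # serve the chunk from a nearby peer
    return image_server                # miss at every level: central fallback

l3 = {"c1": "rack-peer"}               # rack-level index
l2 = {"c2": "pod-peer"}                # aggregation-level index
l1 = {}                                # data-center-level index
source = vdn_lookup("c1", [l3, l2, l1])   # resolved within the rack
```

After a successful fetch, the new holder would publish the chunk back into its local index, so later requests in the same rack stop short of the image server.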
20
Performance Evaluation
Setting
One-month real-trace-driven simulation
VM image sizes: 128 MB to 8 GB
Tree topology: 4 x 4 x 8 (128 nodes)
Network bandwidth: static throughput per physical link; queue-based simulation for multiple transmissions sharing one link
Schemes
Baseline: centralized operation
Local: fetch VM chunks from the local host when possible
VDN: collaborative sharing
[Figure: simulated topology with image-server disk I/O at 1 Gbps and link bandwidths of 1 Gbps / 2 Gbps / 500 Mbps / 200 Mbps at different levels.]
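A queue-based link model of the kind described might look like the sketch below, where concurrent transfers fair-share one link's bandwidth; this is a simplifying assumption for illustration, not the paper's simulator:

```python
def transfer_times(sizes_gb, link_gbps):
    """Simulate concurrent transfers fair-sharing a single link.

    All transfers start together; link bandwidth is split evenly among
    active flows. Returns each transfer's completion time in seconds
    (1 GB = 8 Gb).
    """
    remaining = [s * 8 for s in sizes_gb]      # gigabits left per flow
    done = [0.0] * len(sizes_gb)
    active = set(range(len(sizes_gb)))
    t = 0.0
    while active:
        share = link_gbps / len(active)                  # even split
        step = min(remaining[i] / share for i in active)  # next completion
        t += step
        for i in list(active):
            remaining[i] -= share * step
            if remaining[i] <= 1e-9:
                done[i] = t
                active.remove(i)
    return done
```

For example, two 1 GB transfers on a 1 Gbps link each see 500 Mbps and both finish after 16 s, which is why a congested central image server dominates provisioning time.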
21
Great Speedup on Image Distribution
[Figure: provisioning-time speedup at data centers S1 and S6; at S6, VM image size = 4 GB.]
22
Scalable to Heavy Traffic Loads
Arrival times are compressed by a factor of 1-60 to emulate heavier loads.
[Figure: median and 90th-percentile provisioning times at S6.]
23
Low Metadata Management Overhead
Compared against three metadata management schemes:
Naive: on-demand topology-aware broadcast
Flat: metadata managed in a ring (e.g., DHT, P2P)
Topo: topology-aware design (VDN)
Communication costs across levels are assumed to be 1:4:10 (inversely proportional to bandwidth).
[Figure: (a) number of messages; (b) communication cost.]
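The 1:4:10 weighting can be made concrete with a small sketch (the level names and example hop sequences are hypothetical): a lookup resolved within the rack pays the cheapest weight once, while a topology-oblivious lookup whose hops criss-cross the data center accumulates the expensive weights.

```python
# Hypothetical hop weights, inversely proportional to bandwidth (1:4:10).
HOP_COST = {"rack": 1, "aggregation": 4, "core": 10}

def communication_cost(hops):
    """Total cost of one metadata operation, given the level each hop crosses."""
    return sum(HOP_COST[h] for h in hops)

# Topology-aware lookup resolved within the rack:
topo = communication_cost(["rack"])
# Flat-DHT lookup whose hops happen to cross the core twice:
flat = communication_cost(["core", "core", "aggregation"])
```

Under these weights the flat lookup costs 24 versus 1 for the local one, which is the effect panel (b) measures.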
24
Conclusion
VDN is a network-aware P2P paradigm for VM image distribution: it reduces image provisioning time while keeping overhead reasonable
Chunk-based sharing exploits inherent cross-image similarity
Network-aware operations optimize performance in the data-center context
25
Thanks