Presentation transcript:

1 Public Clouds (EC2, Azure, Rackspace, …) VM multi-tenancy: different customers’ virtual machines (VMs) share the same physical server. Provider: why multi-tenancy? Improved resource utilization and the benefits of economies of scale. Tenant: why cloud? Pay-as-you-go pricing, seemingly infinite resources, cheaper resources.

2 Implications of Multi-tenancy VMs share many resources: CPU, cache, memory, disk, network, etc. Virtual machine managers (VMMs) aim to provide isolation, but deployed VMMs don’t perfectly isolate VMs; side channels exist [Ristenpart et al. ’09, Zhang et al. ’12].

3 Lies Told by the Cloud: infinite resources; all VMs are created equal; perfect isolation.

4 This Talk Taking control of where your instances run: Are all VMs created equal? How much variation exists, and why? Can we take advantage of the variation to improve performance? Gaining performance at any cost: Can users impact each other’s performance? Is there a way to maliciously steal another user’s resources?

5 Heterogeneity in EC2 Causes of heterogeneity: contention for resources (you are sharing!); CPU variation from hardware upgrades over time and replacement of failed machines; network variation from different path lengths and different levels of oversubscription.

6 Are All VMs Created Equal? Inter-architecture: Are there differences between architectures? Can they be used to predict performance a priori? Intra-architecture: How much variation exists within an architecture? If it is large, you can’t predict performance from the architecture alone. Temporal: How much does performance vary on the same VM over time? If a lot, there is no hope of prediction.

7 Benchmark Suite & Methodology: 6 workloads; 20 VMs (small instances) run for 1 week; each VM runs the micro-benchmarks every hour (a minimal sketch of the measurement loop follows below).
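A minimal sketch of the per-VM measurement loop described on this slide, assuming generic command-line benchmarks and a local CSV log; the specific benchmark commands and log path are illustrative placeholders, not the authors' actual suite.

```python
# Sketch: each VM runs a set of micro-benchmarks once an hour and logs the results.
# Benchmark commands and the log path are placeholders for illustration only.
import csv, subprocess, time

BENCHMARKS = {
    "cpu": ["sysbench", "cpu", "run"],   # placeholder CPU benchmark
    "disk": ["dd", "if=/dev/zero", "of=/tmp/testfile", "bs=1M", "count=256"],
    # network/memory benchmarks would be added similarly
}

def run_once(cmd):
    start = time.time()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.time() - start           # wall-clock seconds for this benchmark

def main(hours=24 * 7, log_path="/tmp/benchmarks.csv"):
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(hours):
            for name, cmd in BENCHMARKS.items():
                writer.writerow([time.time(), name, run_once(cmd)])
            f.flush()
            time.sleep(3600)             # wait an hour between rounds

if __name__ == "__main__":
    main()
```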

8 Inter-Architecture (results figure).

9 Intra-Architecture CPU performance is predictable: variation is less than 15%. Storage performance is unpredictable: variation is as high as 250%.
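A minimal sketch of how one might quantify this intra-architecture spread from the hourly measurements: for each benchmark, compare per-instance mean performance across instances. The (max − min) / min definition of "spread" is an assumed metric for illustration, not necessarily the exact one used in the work.

```python
# Sketch: quantify per-benchmark spread across instances of the same architecture.
# The (max - min) / min spread metric is an illustrative assumption.
from collections import defaultdict
from statistics import mean

def spread_by_benchmark(samples):
    """samples: iterable of (instance_id, benchmark_name, value)."""
    per_instance = defaultdict(lambda: defaultdict(list))
    for instance, bench, value in samples:
        per_instance[bench][instance].append(value)

    spreads = {}
    for bench, by_instance in per_instance.items():
        means = [mean(vals) for vals in by_instance.values()]
        spreads[bench] = (max(means) - min(means)) / min(means)
    return spreads

# Example: two instances whose disk throughput differs by roughly 2.5x.
samples = [
    ("i-a", "disk_MBps", 30), ("i-a", "disk_MBps", 32),
    ("i-b", "disk_MBps", 105), ("i-b", "disk_MBps", 110),
]
print(spread_by_benchmark(samples))   # {'disk_MBps': ~2.47, i.e. ~250% spread}
```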

10 Temporal (results figure).

11 Overall CPU type can only be used to predict CPU performance. For memory- or I/O-bound jobs, you need to empirically learn how good an instance is.

12 What Can We Do About It? Goal: run VMs on the best instances. Constraints: we can choose when to start and stop instances, but we can’t control which physical machine the cloud gives us, and we can’t migrate running VMs. Placement gaming: try to find the best instances simply by starting and stopping VMs.

13 Measurement Methodology Deploy on Amazon EC2: A = 10 instances, run for 12 hours. Compare against no strategy: run the initial machines with no strategy as a baseline (the baseline varies for each run), then re-use the same machines for the strategy.

14 EC2 Results (figures: Apache throughput in MB/sec and NER throughput in records/sec across runs; 16 migrations).

15 Placement Gaming Approach: start a number of extra instances, rank them based on measured performance, kill the underperforming instances (those performing worse than average), and start new instances in their place (a minimal sketch of this loop follows below). Interesting questions: How many instances should be killed in each round? How frequently should you evaluate instance performance?
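A minimal sketch of the placement-gaming loop described above, assuming generic launch/terminate/benchmark hooks onto the cloud API and the tenant's own benchmark; the hook names and the "kill everything below average" policy follow the slide but are illustrative, not the paper's exact strategy.

```python
# Sketch of placement gaming: keep a fleet of instances, periodically benchmark
# them, terminate those performing worse than average, and launch replacements.
# launch_instance/terminate_instance/benchmark are assumed user-supplied hooks.
import time
from statistics import mean

def placement_gaming(launch_instance, terminate_instance, benchmark,
                     fleet_size=10, rounds=12, interval_s=3600):
    instances = [launch_instance() for _ in range(fleet_size)]
    for _ in range(rounds):
        time.sleep(interval_s)                       # let instances do real work
        scores = {inst: benchmark(inst) for inst in instances}
        avg = mean(scores.values())
        losers = [inst for inst, s in scores.items() if s < avg]
        for inst in losers:                          # replace underperformers
            terminate_instance(inst)
            instances.remove(inst)
            instances.append(launch_instance())
    return instances
```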

16 Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor's Expense) Venkatanathan (Venkat) Varadarajan, Thawan Kooburat, Benjamin Farley, Thomas Ristenpart, and Michael Swift. Department of Computer Sciences.

17 Contention in Xen Same core: VMs share the same core, the same caches, and the same memory. Same package: VMs run on different cores but share the last-level cache and memory. Different package: VMs run on different cores with different caches but still share memory.
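To make the same-core / same-package / different-package distinction concrete, here is a minimal sketch, assuming a Linux system exposing the standard sysfs CPU-topology files, that reports each logical CPU's physical package and which CPUs share its last-level cache. This is the generic Linux interface, not anything specific to Xen or to the paper.

```python
# Sketch: read Linux sysfs to see which logical CPUs share a package and a
# last-level cache (the cache index with the highest "level").
import glob, os

def read(path):
    with open(path) as f:
        return f.read().strip()

def cpu_topology():
    info = {}
    for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        cpu = os.path.basename(cpu_dir)
        package = read(os.path.join(cpu_dir, "topology", "physical_package_id"))
        best_level, llc_shared = -1, ""
        for idx in glob.glob(os.path.join(cpu_dir, "cache", "index*")):
            level = int(read(os.path.join(idx, "level")))
            if level > best_level:
                best_level = level
                llc_shared = read(os.path.join(idx, "shared_cpu_list"))
        info[cpu] = {"package": package, "llc_shared_with": llc_shared}
    return info

if __name__ == "__main__":
    for cpu, data in cpu_topology().items():
        print(cpu, data)
```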

18 I/O Contends with Self VMs contend for the same resource: network with network (more VMs means each fair share is smaller); disk I/O with disk I/O (more disk accesses mean longer seek times). Xen batches network I/O to improve performance, but this adds jitter and delay, and it also means a VM can get more than its fair share because of the batching.

20 Everyone Contends for Cache No contention on the same core: VMs run serially, so cache accesses are serialized. No contention on different packages: VMs use different caches. Lots of contention on the same package: VMs run in parallel but share the same cache.

21 Contention in Xen Local Xen testbed: Machine: Intel Xeon E5430, 2.66 GHz. CPU: 2 packages, each with 2 cores. Cache size: 6 MB per package. Results (figure) for non-work-conserving and work-conserving CPU scheduling: 3x-6x performance loss, and therefore higher cost.

22 What can a tenant do? Pack up the VM and move (see our SOCC 2012 paper), but not all workloads are cheap to move. Ask the provider for better isolation, but that requires an overhaul of the cloud. This work: a greedy customer can recover performance by interfering with other tenants, a resource-freeing attack.

23 Resource-Freeing Attacks (RFAs) What is an RFA? RFA case studies: (1) two highly loaded web server VMs; (2) a last-level cache (LLC) bound VM and a highly loaded web server VM. Demonstration on Amazon EC2.

24 The Setting Victim: one or more VMs with a public interface (e.g., HTTP). Beneficiary: the VM whose performance we want to improve. Helper: the machine that mounts the attack. The beneficiary and the victim are fighting over a target resource.

25 Example: Network Contention (figure: on the local Xen testbed, the victim and beneficiary VMs share the network while clients send requests to the victim). What can you do?

26 Recipe for a Successful RFA Shift the victim's resource usage away from the target resource and toward a bottleneck resource, via its public interface: for example, requesting CPU-intensive dynamic pages instead of static pages raises the victim's proportion of CPU usage and lowers its proportion of network usage, pushing it toward its CPU limit and reducing its use of the target resource.

27 Ways to Reduce Contention? Option 1: break into the victim VM and disable it. The good: it frees up the resources used by the victim. But: it requires knowledge of a vulnerability, it is drastic, and it is easy to detect.

28 Ways to Reduce Contention? Option 2: mount a simple DoS attack (e.g., a SYN flood from the helper). This may NOT free up the target resource and can backfire by increasing contention.

29 An RFA in Our Example The helper sends CPU-intensive CGI requests to the victim, driving up its CPU utilization. Result in our testbed: the beneficiary goes from 1800 page requests/sec without the RFA to 3026 page requests/sec with it, increasing its share of bandwidth from 50% to 85%.
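A minimal sketch of the helper's role in this example, assuming the victim exposes a CPU-intensive CGI page; the URL is a hypothetical placeholder, and the "intensity" knob (milliseconds of requests issued per second) mirrors the RFA intensities mentioned on slide 35 but is an illustrative assumption, not the authors' code.

```python
# Sketch: helper that keeps the victim busy on CPU by requesting a
# CPU-intensive CGI page for `intensity_ms` milliseconds out of every second.
# The URL is a hypothetical placeholder for the victim's public interface.
import time
import urllib.request

VICTIM_CGI_URL = "http://victim.example.com/cgi-bin/expensive"  # placeholder

def rfa_helper(intensity_ms=500, duration_s=60):
    """Spend roughly intensity_ms of each second issuing CGI requests."""
    end = time.time() + duration_s
    while time.time() < end:
        second_start = time.time()
        budget = intensity_ms / 1000.0
        while time.time() - second_start < budget:
            try:
                urllib.request.urlopen(VICTIM_CGI_URL, timeout=1).read()
            except OSError:
                pass                      # ignore timeouts/connection errors
        remaining = 1.0 - (time.time() - second_start)
        if remaining > 0:
            time.sleep(remaining)         # idle for the rest of the second

if __name__ == "__main__":
    rfa_helper()
```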

30 Shared CPU cache is ubiquitous (almost all workloads need cache), hardware controlled (not easily isolated via software), and performance sensitive (contention carries a high performance cost). Resource-freeing attacks: (1) send targeted requests to the victim; (2) shift its resource use from the target to a bottleneck. Can we mount RFAs when the target resource is the CPU cache?

31 Cache Contention (figure illustrating the RFA goal).

32 Case Study: Cache vs. Network Victim: Apache web server hosting static and dynamic (CGI) web pages. Beneficiary: a synthetic cache-bound workload (LLCProbe). Target resource: the cache. With no cache isolation, the beneficiary runs ~3x slower when sharing the cache with the web server.

33 Cache vs. Network The victim web server frequently interrupts the beneficiary and pollutes the cache; the reason is that Xen gives higher priority to the VM consuming less CPU time. (Figure: cache-state timeline showing the beneficiary starting to run, the web server receiving a request, and the resulting decrease in cache efficiency under a heavily loaded web server.)

34 Cache vs. Network with RFA The RFA helps in two ways: (1) the web server loses its scheduling priority; (2) the web server's capacity to serve requests is reduced. (Figure: cache-state timeline under the RFA, with the helper's CGI requests keeping the heavily loaded web server off the cache while the beneficiary runs.)

35 RFA: Performance Improvement (figure: beneficiary slowdown at different RFA intensities, measured as time in ms per second; slowdown falls from 196% to 86%, a 60% performance improvement).

36 Discussion: Practical Aspects The RFA case studies used CPU-intensive CGI requests; an alternative is to exploit DoS vulnerabilities (e.g., hash-collision attacks). Identifying co-resident victims is easy on most clouds, since co-resident VMs have predictable internal IP addresses. If the victim has no public interface, the paper discusses other possibilities for RFAs.

37 Limitations Experimental setup: only one VM of each kind in each experiment, and the number of each type of job is not varied.
