
1 Eric Burgener, VP, Product Management
Alternative Approaches to Meeting VDI Storage Performance Req’ts
July 2012

2 Agenda
- The storage challenge in VDI environments
- Profiling VDI workloads
- Focus on SSD
- An alternative approach: the storage hypervisor
- Customer case studies

3 The VM I/O Blender: Hidden Storage Costs of Virtualization
(Diagram: many virtual machines funnel their I/O through the hypervisor into shared storage, the "VM I/O blender.")
❶ Poor performance
- Very random, write-intensive workload
- Spinning disks generate fewer IOPS
- Storage provisioning trade-offs
❷ Poor capacity utilization
- Over-provisioning to ensure performance
- Performance trade-offs
❸ Complex management
- Requires storage expertise
- Imposes SLA limitations
- Limits granularity of storage operations

4 The VM I/O Blender: Hidden Storage Costs of Virtualization (cont.)
Same slide as above, with one addition: the I/O blender can decrease storage performance by 30% - 50%.

5 VDI Environments Are Even Worse
- Windows desktops generate a lot of small block writes (IOPS vs. throughput needs)
- Even more write-intensive due to many more VMs/host
- Much wider variability between peak and average IOPS (boot, login, application, and logout storms)
- Additional storage provisioning and capacity consumption issues

6 As If It’s Not Already Hard Enough…
Hypervisor storage options force suboptimal choices:
- Thick VMDKs / fixed VHDs (fully provisioned): high performance, but slow provisioning and poor space utilization
- Thin VMDKs / dynamic VHDs (thin provisioned): space-efficient and rapid provisioning, but poor performance
- Linked clones / differencing VHDs (writable clones): rapid provisioning and space-efficient, but poor performance

7 Thick, Thin, and Snapshot Performance
(Performance comparison chart.)

8 Sizing VDI Storage Configurations: The Basics
1. PERFORMANCE (latency, IOPS)
- Steady state I/O
- Peak I/O
- Read/write ratios
- Sequential vs. random I/O
2. AVAILABILITY (RAID)
- RAID reduces usable capacity
- RAID increases the "actual" IOPS the back end must deliver
- Appropriate RAID levels
3. CAPACITY
- Logical virtual disk capacities
- Snapshot/clone creation/usage
- Secondary storage considerations
- Capacity optimization technology
(A sizing sketch follows this list.)
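To make the performance/availability interplay concrete, here is a minimal sizing sketch in Python. The per-drive IOPS figure, the RAID 5 write penalty of 4, and the workload numbers are illustrative assumptions for this sketch, not figures from the deck:

```python
# Minimal VDI storage sizing sketch (illustrative figures, not vendor data).
import math

def size_backend(desktops, iops_per_desktop, write_ratio,
                 raid_write_penalty=4,   # RAID 5: ~4 back-end I/Os per host write (assumption)
                 iops_per_drive=130):    # typical 10K RPM SAS drive (assumption)
    front_end = desktops * iops_per_desktop
    # Reads pass through 1:1; writes are amplified by the RAID write penalty.
    back_end = front_end * (1 - write_ratio) + front_end * write_ratio * raid_write_penalty
    return front_end, back_end, math.ceil(back_end / iops_per_drive)

# 1000 Windows 7 desktops bursting at 30 IOPS each, 80% writes:
fe, be, drives = size_backend(1000, 30, 0.80)
print(f"front-end: {fe} IOPS, back-end: {be:.0f} IOPS, ~{drives} drives")
```

The point of the sketch is the coupling the slide describes: the RAID level chosen for availability directly multiplies the IOPS the back end must deliver.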

9 Single Image Management
- How can we take advantage of many common images?
- vSphere: parent VM -> snapshot -> replica -> linked clones
  - Reads come from the replica; changes go to each clone's delta disk
  - Space efficient, but poor performance
- Compose, re-compose and refresh workflows

10 Reads vs Writes in VDI Environments
- Understand read/write ratios for both steady state and burst scenarios
- VDI is VERY write intensive (80%+ writes): read caches do not help here; logs or write-back caches do
- VMware recommendations, May 2012:
  - Steady state XP desktops: 7 - 15 IOPS
  - Steady state W7 desktops: 15 - 30 IOPS
  - Burst IOPS: 30 - 300
- READ INTENSIVE: boot, login, and application storms; AV scans
- WRITE INTENSIVE: steady state VDI IOPS, logout storms, backups*
- GOLDEN MASTERS: read only, a great place to use "fast" storage
* Depending on how backups are done
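A quick back-of-the-envelope check (assumed numbers, consistent with the ratios above) shows why a read cache barely dents a workload this write-heavy:

```python
# Why read caching can't fix an 80%-write workload (illustrative numbers).
total_iops = 30_000        # e.g., 1000 W7 desktops bursting at 30 IOPS each
write_ratio = 0.80         # "VDI is VERY write intensive (80%+)"

reads = total_iops * (1 - write_ratio)
writes = total_iops * write_ratio

# Even a perfect read cache (100% hit rate) absorbs only the read fraction:
print(f"reads: {reads:.0f} IOPS, writes: {writes:.0f} IOPS")
print(f"disks still see {writes:.0f} IOPS ({writes / total_iops:.0%} of the load)")
```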

11 What Are My Options?
BUY MORE STORAGE
- Adding spindles adds IOPS
- Tends to waste storage capacity
- Drives up energy, backup costs
SOLID STATE DISK
- Add to host or SAN
- Promises tremendously lower I/O latencies
- Easy to add
- Focus on $/IOPS
BUY FASTER STORAGE
- Add higher performance drives (if available)
- Upgrade to a higher performance array
- Increased storage complexity

12 Focus On SSD
- Extremely high read performance with very low power consumption
- Generally deployed as a cache, where you'll need 5% - 10% of total back-end capacity
- Deploy in host or in SAN; the deployment option may limit HA support
- 3 classes of SSD: SLC, enterprise MLC, MLC
- SSD is expensive ($60 - $65/GB), so you'll want to deploy it efficiently

13 Understanding SSD Performance
- 100% read max IOPS: 115,000
- 100% write max IOPS: 70,000
- 100% random read max IOPS: 50,000
- 100% random write max IOPS: 32,000

14 What They Don’t Tell You About SSD
- The VDI storage performance problem is mostly about writes
- SSD is mostly about read performance, though there are VDI issues where read performance helps
- Write performance is not predictable, and can be MUCH slower than HDDs for certain I/Os
- Amdahl's Law problem: SSD won't deliver its promised performance; it just removes storage as the bottleneck
- Using SSD efficiently is mostly about the software it's packaged with
- Sizing is performance, availability AND capacity

15 Good Places To Use SSD
- Cache
  - In host: lowest latencies, but doesn't support failover
  - In SAN: still good performance, and CAN support failover
- Golden masters: where high read performance is needed for various "storms"
- Tier 0: primarily to boost read performance, if you don't use SSD as a cache
- Keep write performance trade-offs in mind when deploying SSD

16 The Storage Hypervisor Concept
- Server hypervisors virtualize server resources, increasing server utilization and improving flexibility
- Storage hypervisors virtualize storage resources, increasing storage utilization and improving flexibility
- Both have performance, capacity and management implications

17 Introduction of a Dedicated Write Log Per Host
- A dedicated write log per host turns random writes into a sequential stream; storage devices can perform up to 10x faster
- Writes are acknowledged from the log, then asynchronously de-staged to tiered storage (tier 1 … tier n)
- Optimized de-staging allows data to be laid out for optimum read performance and minimizes fragmentation issues
- Requires no additional hardware to achieve large performance gains
- The more write intensive the workload, the better the speedup
- Excellent recovery model for shared storage environments
(A sketch of the log-then-de-stage write path follows.)
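Here is a deliberately simplified, hypothetical sketch of that write path, not Virsto's actual implementation: random writes are appended to a sequential log and acknowledged immediately, and a background de-stage pass later lays the data out in address order:

```python
# Hypothetical log-then-de-stage write path (a sketch, not the Virsto design).
class WriteLog:
    def __init__(self):
        self.log = []        # append-only sequential log of (block_addr, data)
        self.backing = {}    # tiered backing storage: block_addr -> data

    def write(self, block_addr, data):
        # A random write becomes a sequential append; acknowledge immediately.
        self.log.append((block_addr, data))
        return "ack"

    def destage(self):
        # Background pass: keep the latest write per block, then lay blocks
        # out on backing storage in address order for better read locality.
        latest = {}
        for addr, data in self.log:
            latest[addr] = data
        for addr in sorted(latest):
            self.backing[addr] = latest[addr]
        self.log.clear()

    def read(self, block_addr):
        # Reads check the log first, since it may hold data not yet de-staged.
        for addr, data in reversed(self.log):
            if addr == block_addr:
                return data
        return self.backing.get(block_addr)

log = WriteLog()
for addr in (9317, 42, 77000, 5):    # a "random" write pattern from many VMs
    log.write(addr, f"data@{addr}")
log.destage()                        # de-staged in address order: 5, 42, 9317, 77000
print(log.read(42))                  # -> data@42
```

The design choice the slide is making: the latency-sensitive path (acknowledging the VM's write) touches only sequential log I/O, while the expensive random placement work happens off the critical path.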

18 Log De-Couples High Latency Storage Operations
(Diagram: per-host write logs sit between the hypervisor hosts and the shared storage pool.)
- Thin provisioning, zero-impact snapshots, high performance clones, instant provisioning
- These operations no longer impact VM performance

19 The Virsto Storage Hypervisor
- Fundamentally changes the way hypervisors handle storage I/O: improves performance of existing storage by up to 10x
- Thin provisions ALL storage with NO performance degradation: reduces storage capacity consumption by up to 90%
- Enables almost instant provisioning of high performance storage: reduces storage provisioning times by up to 99%
- Allows VM-centric storage management on top of block-based storage: enables safe provisioning and de-provisioning of VMs by anyone

20 Virsto Architecture
- Integrates the log architecture transparently into the hypervisor
- Speeds ALL writes ALL the time
- Read performance speedups via storage tiering and optimized layouts
- Instant provisioning of space-efficient, high performance storage
- Scalable snapshots open up significant new use cases
- Software-only solution that requires NO new hardware
(Diagram: slow, random I/O from the server host becomes sequential I/O through the Virsto vLog in the Virsto VSA, with optimized de-staging into the Virsto vSpace on block storage capacity (RAID).)

21 Multi-Node Architecture For Scalability
(Diagram: hosts 1 through N each run a Virsto VSA writing sequential I/O to its own Virsto vLog; all hosts share a Virsto vSpace spanning block storage capacity (RAID) on multiple different arrays.)

22 Integrated Virsto Management
- Install and configure through the Virsto Console; provision Virsto ONCE up front
- Uses standard native workflows (vSphere, Hyper-V) that transparently use Virsto storage
- Higher performance, faster provisioning, lower capacity consumption, cluster-aware
- Works with native tools, so minimal training

23 Virsto And SSD
- Virsto achieves 10x performance speedups WITHOUT SSD, with what you already own
- But Virsto logs and storage tier 0 are great places to use SSD
- Easily uses 50% less SSD than caching approaches to get comparable speedups:
  - Logs are only 10GB in size per host
  - Virsto makes random writes perform 2x+ faster on most SSD
  - A very small tier 0 gets the read performance (for golden masters, certain VMs)
- If you want to use SSD, you spend a lot less money to implement it

24 Proof Point: Higher Education
Virsto for vSphere, December 2011 results:
- Baseline environment: 341 IOPS (native VMDKs)
- Performance with Virsto: 3,318 IOPS (Virsto vDisks)
- 10x more IOPS, 24% lower latency, 9x CPU cycle reduction

25 Proof Point: State Government
- Native VMDKs: 165 IOPS
- Virsto vDisks: 2,926 IOPS
- 18x more IOPS, 1758% better throughput, 94% lower response time

26 Proof Point: Desktop Density
With Virsto, each host supports over 2x the number of VDI sessions, assuming the same storage configuration.
(Chart: 401 sessions/host without Virsto vs. 830 with Virsto.)

27 Case Study 1: Manufacturing
REQUIREMENTS
- 1200 Windows 7 desktops, common profile: steady state 12 IOPS, read/write ratio 10/90, peak load 30 IOPS, 25GB allocated/desktop
- Need vMotion support now; HA as a possible future
- Windows updates 4x/year
- Already own an EMC VNX: 40U enclosure w/4 trays, 10K RPM 900GB SAS, 100 drives = 90TB
ADDITIONAL CONSIDERATIONS
- Would like to maximize desktop density to minimize host count; target is 125 - 150 desktops/host
- Will be using vSphere 5.1
- Spindle minimization could accommodate other projects
- Open to using SSD in the VNX (400GB EFDs)
- Asked about phasing to minimize peak load requirements
- Asked about VFCache usage

28 Comparing Options Without SSD
IOPS
- Without Virsto: needed 36,000 IOPS; delivered 36,010 IOPS; drive count 277 (at 130 IOPS/drive)
- With Virsto: needed 36,000 IOPS; delivered 39,000 IOPS; drive count 30 (at 1,300 IOPS/drive)
- Virsto savings: 247 drives
CAPACITY
- Without Virsto: 249TB raw, 206.7TB w/RAID 5 (34TB needed)
- With Virsto: 27TB raw, 22.4TB w/RAID 5; thin provisioned, easily looks like 80TB+
- Virsto savings: 222TB raw, 184.3TB w/RAID 5
PROVISIONING TIME
- Without Virsto: 83 hrs per compose (1 min VM creation, 100MB/sec network)
- With Virsto: 17 hrs per compose (1 min VM creation)
- Virsto savings: 66 hrs per compose operation; 4 composes x 66 hrs = 264 hrs/yr
INCREMENTAL COST (LIST)
- Without Virsto: $309,750 (177 add'l drives) plus a VNX 5700 upgrade
- With Virsto: $0, with 70 drives freed up
- Virsto savings: $309,750 plus savings on other projects
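The drive counts in the table follow directly from the per-drive IOPS figures; a quick check using the slide's own numbers:

```python
# Check the slide's drive-count arithmetic (figures from the slide).
import math

needed = 36_000            # 1200 desktops x 30 IOPS peak
native = 130               # IOPS per 10K RPM SAS drive, random I/O
virsto = 1_300             # same drive with sequentialized I/O (10x)

print(math.ceil(needed / native))   # 277 drives without Virsto
print(math.ceil(needed / virsto))   # 28; the slide rounds up to a 30-drive
                                    # config (30 x 1,300 = 39,000 IOPS delivered)
```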

29 Virsto Single Image Management
- vSnap of golden master: 25GB logical, 12GB actually consumed (Windows)
- 1000 vClones (vClone 0 through vClone 999), each stabilizing at ~2GB*
- Actual space consumed with Virsto: 12GB + (1000 x 2GB) = ~2TB
- With EZT VMDKs, native consumption was 25GB x 1000 = 25TB
- With thin VMDKs, native consumption would be 25GB + (14GB x 1000) = ~14TB
  - And would require 5x as many spindles for IOPS; not workable, too many drives/arrays, etc.
- With View Composer linked clones, space consumption would be the same as Virsto, but you'd need 5x the spindle count
- Virsto provides better-than-EZT-VMDK performance with the space savings of linked clones
* Based on 8 different LoginVSI runs with 1000-2000 desktops
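Working through the slide's own capacity arithmetic (the 2GB/clone and 14GB/thin-VMDK figures are the slide's LoginVSI-derived numbers):

```python
# Capacity comparison using the slide's own figures.
desktops = 1000
allocated_gb = 25       # logical allocation per desktop
golden_gb = 12          # Windows golden master
clone_delta_gb = 2      # per-vClone growth (slide's LoginVSI figure)
thin_delta_gb = 14      # per-desktop consumption with thin VMDKs

ezt_tb    = desktops * allocated_gb / 1000                     # 25.0 TB
thin_tb   = (allocated_gb + desktops * thin_delta_gb) / 1000   # ~14.0 TB
virsto_tb = (golden_gb + desktops * clone_delta_gb) / 1000     # ~2.0 TB

print(f"EZT {ezt_tb:.1f}TB, thin {thin_tb:.1f}TB, Virsto {virsto_tb:.1f}TB")
print(f"savings vs thick: {1 - virsto_tb / ezt_tb:.0%}, vs thin: {1 - virsto_tb / thin_tb:.0%}")
```

This reproduces the 92% (vs. thick) and 86% (vs. thin) savings called out on the next slide.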

30 Virsto Single Image Management (cont.)
Same chart as the previous slide, with the savings called out:
- Virsto 92% better than thick VMDKs
- Virsto 86% better even than thin VMDKs

31 Assumptions
- EMC VNX SSD performance: 12K read IOPS, 3K write IOPS per SSD
  - With 2 SPs, can max out a tray w/o limiting performance
- VM creation time depends on how busy the vCenter Server is
  - Observed 30 sec - 1 min baseline across both Virsto and non-Virsto configs
  - 5 min VM+storage creation time w/o Virsto, 1 min w/Virsto
- Customer had chosen thick VMDKs for performance/spindle minimization
  - Provisioning comparisons were EZT VMDKs against Virsto vDisks (which outperform EZTs handily)
- Customer RAID 5 overhead was 17%: (5+1) RAID, one parity drive per six (1/6 ≈ 17%)
- Pricing for EMC VNX 5300: 200GB EFD $12,950; 900GB SAS 10K RPM $1,750

32 Case Study 1: Other Observations
- Virsto vClones do not have the 8-host limit (View Composer linked clones in VMFS datastores are limited to 8 hosts)
- Performance + capacity considerations limit the applicability of SSD to this environment
- Using RAID 5, minimum capacity required is 33.9TB
  - Customer could not have met the storage requirement with the VNX 5300; would have had to upgrade to a VNX 5700 or buy extra cabinets
  - Thin provisioned Virsto vDisks provide a significant capacity cushion
- Virsto vClones expected to save 66 hours of provisioning time for high performance storage on each re-compose
  - That's up to 264 hours of clock time per year for provisioning (4 Windows updates)

33 Case Study 2: Financial Services
REQUIREMENTS
- 1000 Windows 7 desktops, common profile: steady state 20 IOPS, read/write ratio 10/90, peak load 60 IOPS, 30GB allocated/desktop
- Need vMotion support now; HA as a possible future
- Windows updates 6x/year
- Would be buying new SAN storage
ADDITIONAL CONSIDERATIONS
- Would like to maximize desktop density to minimize host count; target is 125 - 150 desktops/host
- Will be using vSphere 5.1
- Spindle minimization could accommodate other projects
- Wants to use a SAN and open to using SSDs
- Asked about phasing to minimize peak load requirements

34 Comparing Options
IOPS
- Without Virsto: needed 60,000 IOPS; 18 x 200GB EFD (54K IOPS) + 62 x 600GB 10K SAS (8,060 IOPS) = 62,060 IOPS delivered
- With Virsto: needed 60,000 IOPS; 8 x 200GB EFD (48K IOPS) + 16 x 600GB 10K SAS (20,800 IOPS) = 68,800 IOPS delivered
- Virsto savings: 10 x 200GB EFD and 46 x 600GB 10K SAS
CAPACITY
- Without Virsto: 37.2TB raw, 30.9TB w/RAID 5 (30TB needed)
- With Virsto: 10.4TB raw, 8.6TB w/RAID 5; easily looks like 35TB+
- Virsto savings: 26.8TB raw, 22.3TB w/RAID 5
PROVISIONING TIME
- Without Virsto: 100 hrs/compose (1 min VM creation, 100MB/sec network)
- With Virsto: 17 hrs/compose (1 min VM creation)
- Virsto savings: 83 hrs per compose operation; 6 composes x 83 hrs = 498 hrs/yr
COST (LIST)
- Without Virsto: $233,100 for EFDs + $133,000 for array/SAS = $366,100
- With Virsto: $103,600 for EFDs + $64,000 for array/SAS = $167,600
- Virsto savings: $198,500
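The delivered-IOPS figures are consistent with the deck's earlier speedup claims (writes 2x faster on SSD, up to 10x on spinning disk under Virsto). A quick check using the per-EFD write figure stated on the next slide (3K write IOPS) plus an assumed 130 IOPS per 10K SAS drive:

```python
# Check delivered IOPS against the deck's speedup claims.
efd_write_iops = 3_000   # per-EFD write IOPS (slide 35 assumption)
sas_iops = 130           # per 10K SAS drive (assumption)

without_virsto = 18 * efd_write_iops + 62 * sas_iops        # 54,000 + 8,060 = 62,060
with_virsto = 8 * efd_write_iops * 2 + 16 * sas_iops * 10   # 48,000 + 20,800 = 68,800
print(without_virsto, with_virsto)
```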

35 Assumptions
- IBM DS5000 SSD performance: 12K read IOPS, 3K write IOPS per SSD
  - With 2 SPs, can max out a tray w/o limiting performance
- VM creation time depends on how busy the vCenter Server is
  - Observed 30 sec - 1 min baseline across both Virsto and non-Virsto configs
  - 6 min VM+storage creation time w/o Virsto, 1 min w/Virsto
- Customer had chosen thick VMDKs for performance/spindle minimization
  - Provisioning comparisons were EZT VMDKs against Virsto vDisks (which outperform EZTs handily)
- Customer RAID 5 overhead was 17%: (5+1) RAID
- Pricing for IBM DS5000: 200GB EFD $12,950; 600GB SAS 10K RPM $1,500; DS5000 frame $40K

36 Case Study 2: Other Observations
- Virsto makes SSD perform twice as fast by making all writes sequential, so 50% less SSD is needed
- 10GB log in RAID 1 across 8 hosts = 160GB for logs, leaving 1.4TB of EFD available for Fast Cache/tier 0 use
- Virsto cuts required raw storage capacity by 78%, and can accommodate an additional 300+ desktops w/o more storage hardware purchases
- The space savings estimate is conservative at only 70%; we generally see 80% - 90% space savings over the long term
- Virsto vClones expected to save 83 hours of provisioning time for high performance storage on each re-compose
  - That's 498 hours per year across 6 Windows updates
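The log-capacity arithmetic, spelled out (all figures from the slide):

```python
# Log capacity arithmetic (figures from the slide).
hosts, log_gb, raid1_copies = 8, 10, 2
efd_count, efd_gb = 8, 200

log_total = hosts * log_gb * raid1_copies   # 160 GB consumed by mirrored logs
leftover = efd_count * efd_gb - log_total   # 1,440 GB (~1.4TB) left for cache/tier 0
print(log_total, leftover)
```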

37 Demonstrated Customer Value
