VMware Virtual SAN Hyper-converged infrastructure software


1 VMware Virtual SAN Hyper-converged infrastructure software
Simon Todd EMEA SE Specialist - VSAN

2 Agenda
1. Introduction
2. Virtual SAN, what is it?
3. Virtual SAN, a bit of a deeper dive
4. Virtual SAN, recent enhancements
5. Wrapping up

3 The Software Defined Data Center
All infrastructure services virtualized: compute, networking, storage
Underlying hardware abstracted; resources are pooled
Control of the data center automated by software (management, security)
Virtual machines are first-class citizens of the SDDC
Today's session will focus on one aspect of the SDDC: storage

In the SDDC, all three core infrastructure components (compute, storage and networking) are virtualized. Virtualization software abstracts the underlying hardware while pooling compute, network and storage resources to deliver better utilization, faster provisioning and simpler operations. The VM becomes the centerpiece of the operational model, providing the automation and agility to repurpose infrastructure according to business needs. Today we will focus on storage, which has been growing at an extremely rapid pace and is a fast-changing aspect of the datacenter.

4 Hardware evolution started the storage revolution

5 What is our goal with Virtual SAN?
Traditionally, storage has been a pain point in many enterprise IT environments. It is expensive to procure and ties you not only to the hardware but also to the software services of a specific vendor. The hardware dependency results in very slow adoption of emerging technologies (CPU, network, and of course storage devices such as new-generation flash). The result is high cost while using obsolete technology. Storage is even more expensive to operate, requiring storage admins who are often vendor-specific specialists, which is yet another reason why so many of our customers are de facto one-vendor shops; in other words, less choice. The management models do not scale to the tens of thousands of VMs customers have today, let alone the hundreds of thousands of containers they may have in the future.

Our vision is to change the way storage is consumed and managed in enterprise environments:
Drastically simplify datacenter operations, holistically across compute, network and storage, rather than three completely different sets of products and management tools.
Software, not hardware, so you can take advantage of the latest and greatest.
A scale-out architecture delivering high performance and scalability.
Gradual investment using commodity components across the infrastructure, lowering costs.
By offering our customers choice, we aim to change the world of IT and start a new revolution.

Key themes: simplicity, performance and scalability, cost, choice, agility.

6 Innovation to maintain competitiveness for Sky service offerings
Sky is a cross-European company providing satellite broadcasting, on-demand internet streaming media, and broadband and telephone services. It is Europe's biggest and leading media company and largest pay-TV broadcaster, with over 21 million customers and over 30,000 employees.
Business Driver: Innovation to maintain competitiveness for Sky service offerings

7 Business Driver: Innovation to maintain competitiveness for Sky service offerings
Ability to spin up environments for our developers within minutes rather than hours, days or even weeks.
Time to market is now a much lower-cost exercise, bringing forward revenue realization and improved profitability.
Our platform now delivers on performance and scalability, and its grow-as-you-go model means we can increase capacity as and when we need to.

8 A water utilities company in the UK which manages regulated water and waste water in an area of England. It has over 5,000 direct employees and a customer base of more than 7 million customers.
Business Driver: IT could not respond to the speed of the business, and new projects stalled due to the high cost of incremental investment
Business Risks: heavy government fines and damage to reputation

9 Business Driver: IT could not respond to the speed of the business and new projects stalled due to high cost of incremental investment
Our procurement cycle on existing infrastructure was between 13 and 26 weeks; with Virtual SAN it has been reduced to just 7 days, a 90% reduction in procurement time.
Spinning up a new environment now takes at most 2 hours versus 7 days previously, a 98.5% reduction in deployment time.
Compared to our previous storage, our virtual machine storage cost fell significantly, by between 66% and 74%, making IT more cost-effective for the business.
Our billing cycle for over 7 million customers used to take 22 hours before it was virtualised; virtualising it brought that down to 16 hours, and moving the same billing cycle onto Virtual SAN means it now completes in just 3 hours, an 86% reduction in billing cycle processing.

10 Most Broadly Deployed HCI Solution in the Market
Over 3,500 customers within 24 months, and growing rapidly.

As you've seen, VSAN offers your customers access to critical business benefits, so you will know where to look for these problems in your accounts and who the people with the pain points are, whether in operations, marketing, product development and so on. Now I want to highlight a couple of key points to emphasise why VSAN offers your customers a rapid route to solutions and benefits: it is the least disruptive option in terms of skill sets and deployment risk, and it offers a very rapid ROI.

11 Virtual SAN, what is it?

12 Virtual SAN, what is it? Hyper-Converged Infrastructure
vSphere & Virtual SAN: software-defined storage with a distributed, scale-out architecture, integrated with the vSphere platform and ready for today's vSphere use cases.

What is VSAN in a nutshell? It follows a hyper-converged architecture for easy, streamlined management and scaling of both compute and storage. Hyper-converged represents a system architecture, one where compute and persistence are co-located, and this architecture is enabled by software. VSAN is an SDS product: a layer of software that runs on every ESXi host. It aggregates the local storage devices on the ESXi hosts (SSDs and magnetic disks) and makes them look like a single pool of shared storage across all the hosts. VSAN has a distributed architecture with no single point of failure.

VSAN goes a step further than other HCI products: VMware owns the most popular hypervisor in the industry. Strong integration of VSAN into the hypervisor means we can optimize the data path and ensure optimal resource scheduling (compute, network, storage) according to the needs of each application. In the end, better resource utilization means better consolidation ratios, more bang for your buck.

Resource utilization is one part of the story; the other part is the operational aspect of the product. VSAN has been designed as a storage product to be used primarily by vSphere admins, so we put a lot of effort into packaging the product in a way that is ideal for today's virtualized environments. Specifically, the VSAN configuration and management workflows have been designed as extensions of the existing host and cluster management features of vSphere. That means an easy, intuitive operational experience for vSphere admins. It also means native integration with key vSphere features, unlike any other storage product out there, HCI or not.

13 But what does that really mean?
Diagram: generic x86 hardware on each host contributes local storage resources over the VSAN network; VMware vSphere & Virtual SAN, integrated with your hypervisor, expose a single shared Virtual SAN datastore.

14 VMware vSphere + Virtual SAN
Virtual SAN use cases: business-critical apps, end-user computing, DR/DA, test/dev, DMZ, management, staging, ROBO.

We were very conservative when we initially launched VSAN; after all, this was customers' data we were talking about. But even though we were conservative, our customers were not. There are plenty of other use cases; the ones listed on the slide are the most common, and it is fair to say that Virtual SAN fits most scenarios:
Customers started with test/dev workloads, just as they did when virtualization was first introduced.
Business-critical apps: we have customers running Exchange, SQL, SAP and billing systems on Virtual SAN.
Virtual SAN is included in Horizon Suite Advanced and Enterprise, so VDI/EUC is a natural fit.
VSAN is also commonly used as a DR destination, as you can scale out and the cost is relatively low compared to a traditional storage system.
Isolated workloads are something VSAN is often used for as well; both DMZ and management clusters fit this bill.
And of course there is ROBO: VSAN can start small and grow when desired, both scale-out and scale-up, and with 6.1 we made things even better by introducing a 2-node option, but we will get back to that.

15 Broadest Deployment Options from HCI to SDDC
Built on industry-leading VMware Hyper-Converged Software (HCS): Virtual SAN + vSphere + vCenter, with lifecycle management, and with vRealize, NSX and EVO SDDC Manager layered on in the larger offerings.

When it comes to deploying VSAN there are three options:
Virtual SAN Ready Nodes (certified partner hardware): by far the most popular option; pre-installed and pre-configured server models that have been fully certified for and tested with VSAN, ready to run.
Engineered appliances: an integrated out-of-the-box experience; HCI nodes from the EMC Federation offer an "on rails" solution.
EVO SDDC (not yet released): the capability to deploy VSAN, NSX, vRealize and other VMware solutions end to end; an SDDC in a rack, scaling from half a rack to many.

16 Tiered Hybrid vs All-Flash
Caching tier (both architectures): flash devices such as SSDs, PCIe cards or Ultra DIMMs.
Hybrid: the caching tier acts as both read cache and write buffer; the capacity tier uses SAS / NL-SAS / SATA magnetic disks; up to 40K IOPS per host.
All-flash: writes are cached first and reads go directly to the capacity tier, which is built from flash devices; up to 100K IOPS per host with sub-millisecond latency.

Virtual SAN enables both hybrid and all-flash architectures. Irrespective of the architecture, there is a flash-based caching tier that acts as the read cache / write buffer and dramatically improves the performance of storage operations. In the hybrid architecture, server-attached magnetic disks are pooled to create a distributed shared datastore that persists the data; this type of architecture delivers up to 40K IOPS per server host. In the all-flash architecture, the flash-based caching tier is intelligently used as a write buffer only, while another set of SSDs forms the persistence tier that stores the data. Since this architecture uses only flash devices, it delivers extremely high IOPS, up to 100K per host, with predictable low latencies.
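To make the routing rule concrete, here is a minimal Python sketch of the read/write paths just described (an illustrative model with made-up names, not VSAN internals):

```python
def route_io(op: str, architecture: str) -> str:
    """Where an I/O lands, per the slide: the cache tier always absorbs
    writes first; hybrid also serves reads from cache, while all-flash
    sends reads straight to the flash capacity tier."""
    if op == "write":
        return "cache tier (write buffer), later de-staged to capacity tier"
    if architecture == "hybrid":
        return "cache tier on hit, magnetic capacity tier on miss"
    return "flash capacity tier directly"

print(route_io("read", "all-flash"))
```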

17 Really Simple Setup
Deployed, configured and managed from vCenter through the vSphere Web Client. Radically simple:
1. Configure a VMkernel interface for Virtual SAN.
2. Enable Virtual SAN by clicking Turn On.
3. Claim disks, manually or automatically.
4. Enable deduplication and compression?
5. Fault domains, 2-node or stretched cluster?

18 Virtual SAN, a bit of a deeper dive

19 Virtual Machine as a set of Objects on VSAN
A VM on VSAN is a set of objects rather than a set of files: the VM home namespace, the VM swap object, one or more virtual disk (VMDK) objects, snapshot (delta) objects, and snapshot memory objects.

Objects are divided and distributed into components based on policies; components and policies are covered shortly. VMs are no longer based on a set of files, as they are on traditional storage.

20 New Capabilities in VSAN 6.2
Define a policy first. Virtual SAN surfaces multiple storage capabilities to vCenter Server, along with "what if" APIs.

The first thing you do before you deploy a VM is define a policy. VSAN's what-if APIs show what the result of applying such a policy to a VM of a certain size would be, which is very useful as it gives you an idea of the cost of certain attributes. Note also that a number of new capabilities were introduced in VSAN 6.2; these are discussed in more detail later on.

21 Virtual SAN Objects and Components
VSAN is an object store. Each object is a tree with branches, and each object has multiple components; this allows you to meet availability and performance requirements.

Here is one example of "distributed RAID" using two techniques: striping (RAID-0) and mirroring (RAID-1). A VMDK object is mirrored (RAID-1) into two copies, each of which is striped (RAID-0) across two components (stripe-1a/stripe-1b and stripe-2a/stripe-2b), with a witness on a further ESXi host. Data is distributed based on the VM storage policy.

RAID-0 and RAID-1 were the only distributed RAID options up to and including version 6.1; new techniques introduced in VSAN 6.2 are discussed shortly.
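As an illustration of the object tree described above, here is a small Python sketch (hypothetical structure and names; VSAN does not expose its internals this way):

```python
# Illustrative model of the component tree described above (hypothetical
# structure; not VSAN's internal representation).
vmdk_object = {
    "raid": "RAID-1",                     # two mirror copies
    "children": [
        {"raid": "RAID-0",                # mirror copy 1, striped
         "components": [("stripe-1a", "esxi-01"), ("stripe-1b", "esxi-02")]},
        {"raid": "RAID-0",                # mirror copy 2, striped
         "components": [("stripe-2a", "esxi-03"), ("stripe-2b", "esxi-04")]},
    ],
    "witness": ("witness", "esxi-05"),    # tie-breaker component
}

def hosts_used(obj):
    """Collect every host that holds a component of this object."""
    hosts = {host for child in obj["children"] for _, host in child["components"]}
    hosts.add(obj["witness"][1])
    return sorted(hosts)

print(hosts_used(vmdk_object))  # ['esxi-01', ..., 'esxi-05']
```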

22 Number of Failures to Tolerate/Failure Tolerance Method
Defines the number of host, disk or network failures a storage object can tolerate.
RAID-1 mirroring is used when Failure Tolerance Method is set to Performance (the default); RAID-5/6 is used when Failure Tolerance Method is set to Capacity.
For n failures tolerated, n+1 copies of the object are created and 2n+1 hosts contributing storage are required.
Example with the policy "Number of failures to tolerate = 1": two mirror copies of the VMDK, each serving ~50% of the I/O, plus a witness, placed across hosts esxi-01 through esxi-04.
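The slide's formula is easy to turn into a worked example; a minimal Python sketch:

```python
def ftt_requirements(n: int) -> dict:
    """Capacity and host requirements for RAID-1 mirroring with
    'Number of failures to tolerate' = n (per the slide's formula)."""
    if n < 0:
        raise ValueError("failures to tolerate must be >= 0")
    copies = n + 1      # n+1 full copies of the object
    hosts = 2 * n + 1   # 2n+1 hosts contributing storage (incl. witnesses)
    return {"copies": copies, "min_hosts": hosts}

# FTT=1 -> 2 copies, 3 hosts; FTT=2 -> 3 copies, 5 hosts; FTT=3 -> 4, 7
for n in (1, 2, 3):
    print(n, ftt_requirements(n))
```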

23 Assign it to a new or existing VM
When the policy is selected, Virtual SAN uses it to place and distribute the VM's objects to guarantee availability and performance.

24 Fault Domains, increasing availability through rack awareness
Create fault domains to increase availability. Example: an 8-node cluster with 4 defined fault domains, 2 nodes in each:
FD1 = esxi-01, esxi-02; FD2 = esxi-03, esxi-04; FD3 = esxi-05, esxi-06; FD4 = esxi-07, esxi-08.
To protect against the failure of one rack, only two replicas plus a witness are required, spread across three fault domains (in the diagram: the vmdk replicas in FD1 and FD2, the witness in FD3, mirrored with RAID-1).

Note that in order to protect against a rack failure the minimum required number of fault domains is 3; this is similar to protecting against a host failure using FTT=1, where the minimum number of hosts is 3.
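A toy placement sketch for the rule above (this is not VSAN's actual placement algorithm, just an illustration of the 2n+1 fault-domain arithmetic):

```python
def fault_domain_placement(domains, ftt=1):
    """Toy sketch: choose 2*ftt+1 distinct fault domains, ftt+1 for
    replicas plus ftt for witnesses, so any single rack failure still
    leaves a quorum of components available."""
    needed = 2 * ftt + 1
    if len(domains) < needed:
        raise ValueError(f"need at least {needed} fault domains for FTT={ftt}")
    chosen = list(domains)[:needed]
    return {"replicas": chosen[:ftt + 1], "witnesses": chosen[ftt + 1:]}

fds = {"FD1": ["esxi-01", "esxi-02"], "FD2": ["esxi-03", "esxi-04"],
       "FD3": ["esxi-05", "esxi-06"], "FD4": ["esxi-07", "esxi-08"]}
print(fault_domain_placement(fds, ftt=1))
# -> replicas in FD1/FD2, witness in FD3; FD4 unused for this object
```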

25 Virtual SAN, Recent Enhancements

26 VSAN releases: 5.5 (March 2014) through 6.2 (March 2016)
VSAN 6.2 (March 2016): deduplication and compression; RAID-5/6 support; software checksum; QoS via IOPS limits; IPv6; Performance Service; enhanced capacity views.
VSAN 6.1 (September 2015): stretched cluster; enhanced replication with 5-minute RPO; 2-node ROBO; health monitoring and remediation; support for SMP-FT, Oracle RAC and Windows Server Failover Clustering; new SSD hardware options (Intel NVMe, Diablo Ultra DIMM solution); more deployment options; Virtual SAN on-disk format upgrade; disk group bulk claiming; disk claiming per tier; stretched cluster configuration and health monitoring; in-box Health Check plug-in; vRealize Operations Manager integration (global data visualization, capacity planning, root-cause analysis).
VSAN 6.0 (March 2015): all-flash configuration; 64-node VSAN clusters; 2x hybrid performance; VSAN snapshots/clones; health UI; rack awareness.
VSAN 5.5 (March 2014): initial release.

27 Virtual SAN – Stretched Cluster
Today, Site Recovery Manager provides DR over any distance with a >5 minute RPO; the Virtual SAN stretched cluster adds active-active data centers on top of that:
A vSphere Virtual SAN cluster split across two sites (5 ms RTT, 10 GbE between them); each site is a fault domain (FD).
Site-level protection with zero data loss and near-instantaneous recovery; automated failover.
Support for up to 5 ms RTT latency between the data sites, with a 10 Gbps bandwidth expectation.
The witness VM can reside anywhere: up to 200 ms RTT latency, with at most 100 Mbps of bandwidth required.

Stretched storage with Virtual SAN lets you split the Virtual SAN cluster across two sites, so that if a site fails you can seamlessly fail over to the other site without any loss of data. Virtual SAN accomplishes this by synchronously mirroring data across the two sites. Failover is arbitrated by a witness VM that resides in a central place accessible from both sites. Bandwidth to the witness is about 10 Mbps, or 2 MB per 1,000 components (a worst-case figure: very little traffic is observed during steady state, but you must budget for owner migration or site failure).
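The witness sizing rule lends itself to a quick calculation. A sketch, assuming the notes' "2MB per 1000 components" means 2 Mbps per 1,000 components, and using a hypothetical 45,000-component cluster:

```python
def witness_bandwidth_mbps(components: int, mbps_per_1000: float = 2.0) -> float:
    """Witness-link bandwidth estimate; the per-1,000-components rate is
    an assumption read from the slide notes, not an official figure."""
    return mbps_per_1000 * components / 1000.0

# A hypothetical 45,000-component cluster stays under the 100 Mbps
# ceiling the slide quotes for the witness link.
print(witness_bandwidth_mbps(45_000))   # -> 90.0
```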

28 Advanced Troubleshooting with VSAN Health Check

29 Virtual SAN 6.2 specifics

30 Deduplication and Compression for Space Efficiency
New in 6.2 (all-flash only; beta): Deduplication and Compression for Space Efficiency
Near-line deduplication and compression at the disk-group level, enabled at the cluster level.
Data is deduplicated when it is de-staged from the cache tier to the capacity tier.
Fixed-block-length deduplication (4 KB blocks).
Compression is applied after deduplication: if a block compresses to 2 KB or less, the compressed block is stored; otherwise the full 4 KB block is stored.

High-level description: deduplication and compression happen during de-staging from the caching tier to the capacity tier. You enable the feature at the cluster level, and deduplication/compression happens on a per-disk-group basis; bigger disk groups result in a higher deduplication ratio. After blocks are deduplicated they are compressed. Compression alone is a significant saving already; combined with deduplication, the results can be up to 7x space reduction, fully dependent of course on the workload and type of VMs.

Lower-level description: compression (LZ4) is performed during de-staging from the caching tier to the capacity tier. The deduplication block size is 4 KB. For each unique 4 KB block, compression is performed; if the output is less than or equal to 2 KB, the compressed block is saved in place of the 4 KB block, and if it is greater than 2 KB the block is written uncompressed and tracked as such. The reason is to avoid block-alignment issues and to reduce the CPU hit of decompressing data with low compression ratios. All of this data reduction happens after the write acknowledgement. Deduplication domains are scoped to each disk group; this avoids a global lookup table (a significant resource overhead) and lets those resources go toward tracking a smaller, more meaningful block size. We purposely avoid deduplicating write-hot data in the cache, and avoid decompressing incompressible data, so significant CPU/memory resources are not wasted.

Note: the feature is supported with stretched clusters and the ROBO edition. Significant space savings are achievable, making the economics of an all-flash VSAN very attractive.
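A toy model of the destage path described above (not VSAN code; zlib stands in for the LZ4 algorithm the notes mention):

```python
import hashlib
import zlib

BLOCK = 4096           # 4 KB dedup block size (per the slide)
COMPRESS_LIMIT = 2048  # keep the compressed form only if it fits in 2 KB

def destage(blocks, store):
    """Dedupe fixed 4 KB blocks by content hash, then compress survivors,
    storing the compressed form only when it is <= 2 KB."""
    for block in blocks:
        assert len(block) == BLOCK
        key = hashlib.sha1(block).hexdigest()   # content-addressed dedupe
        if key in store:
            continue                            # duplicate: reference only
        packed = zlib.compress(block)
        if len(packed) <= COMPRESS_LIMIT:
            store[key] = ("compressed", packed)
        else:
            store[key] = ("raw", block)         # avoid alignment/CPU cost
    return store

store = {}
blocks = [b"A" * BLOCK, b"A" * BLOCK, bytes(range(256)) * 16]
destage(blocks, store)
print(len(store), "unique blocks stored")       # -> 2
```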

31 RAID-5/6 (Inline Erasure Coding)
New in 6.2 (all-flash only): RAID-5/6 (Inline Erasure Coding)
When Number of Failures to Tolerate = 1 and Failure Tolerance Method = Capacity, RAID-5 is used: a 3+1 layout (4-host minimum) with 1.33x capacity overhead instead of the 2x of FTT=1 with RAID-1.
When Number of Failures to Tolerate = 2 and Failure Tolerance Method = Capacity, RAID-6 is used: a 4+2 layout (6-host minimum) with 1.5x capacity overhead instead of the 3x of FTT=2 with RAID-1.

RAID-5 and RAID-6 over the network are sometimes also referred to as erasure coding. This is done inline; no post-processing is required. Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not suffer from distributing the RAID-5/6 stripe across multiple hosts. RAID-5 requires a minimum of 4 hosts because it uses a 3+1 layout; with 4 hosts, one can fail without data loss. This yields a significant reduction in required disk capacity: normally a 20 GB disk would require 40 GB of capacity, but with RAID-5 over the network the requirement is only ~27 GB, roughly a 30% saving. RAID-6 is the option if higher availability is desired.

Use-case information: erasure coding offers guaranteed capacity reduction, unlike deduplication and compression. For customers who have no-thin-provisioning policies, whose data is already compressed or deduplicated, or whose data is encrypted, this offers known, fixed capacity gains. It can be applied on a granular basis (per VMDK) using Storage Policy Based Management.

Notes: all-flash VSAN only; not supported with stretched clusters; the cluster size does not need to be a multiple of 4 hosts, just 4 or more. In the diagram, data and parity blocks are distributed across four ESXi hosts.
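The capacity arithmetic on this slide can be checked with a few lines of Python (a sketch using the slide's multipliers, not a sizing tool):

```python
def raw_capacity_gb(usable_gb: float, ftt: int, method: str) -> float:
    """Raw capacity needed for a given usable size, per the slide's
    multipliers: RAID-1 uses (ftt+1)x; RAID-5 (3+1) is 1.33x at FTT=1;
    RAID-6 (4+2) is 1.5x at FTT=2."""
    if method == "performance":            # RAID-1 mirroring
        return usable_gb * (ftt + 1)
    if method == "capacity":               # erasure coding
        if ftt == 1:
            return usable_gb * 4 / 3       # RAID-5: 3 data + 1 parity
        if ftt == 2:
            return usable_gb * 6 / 4       # RAID-6: 4 data + 2 parity
    raise ValueError("unsupported ftt/method combination")

print(raw_capacity_gb(20, 1, "performance"))  # 40.0 GB (RAID-1)
print(raw_capacity_gb(20, 1, "capacity"))     # ~26.7 GB (RAID-5)
```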

32 Software Checksum and disk scrubbing
New in 6.2: Software Checksum and disk scrubbing
Overview:
End-to-end checksums on the data detect and resolve silent disk errors caused by faulty hardware or firmware.
Checksum is enabled by default and policy driven (it can be disabled per object via storage policies).
If checksum verification fails on a read, VSAN fetches the data from another copy in RAID-1, or recreates it from the other components in a RAID-5/6 stripe.
Disk scrubbing runs in the background.
Benefits: an additional level of data integrity, with automatic detection and resolution of silent disk errors.

It is a cluster-wide setting (default on). Software checksums let customers detect corruption caused by hardware or software components, including memory and drives, during read or write operations. For drives there are two basic kinds of corruption: "latent sector errors", typically the result of a physical disk drive malfunction, and silent corruption, which happens without warning. Undetected errors can lead to lost or inaccurate data and significant downtime, and there is no effective means of detection without end-to-end integrity checking.

Checksums are calculated and stored on the write path, based on a 4 KB block size, using the CRC32 algorithm (CPU offload support reduces the overhead). During reads, VSAN validates the data against its checksum. If the data is invalid, VSAN either corrects it or reports it so the user can take action: for recoverable data it fetches another copy (RAID-1) or rebuilds from the stripe (RAID-5/6), writes the correct data back to the bad location, and continues; if no valid copy exists, a non-recoverable error is returned. Errors are reported in the UI and logs, including the affected blocks and their associated VMs, and customers can see both the list of VMs/blocks hit by non-recoverable errors and the historical error trend for each drive.

There are two levels of scrubbing. Component-level scrubbing checks every block of each component; on a checksum mismatch, the scrubber tries to repair the block by reading other components. Object-level scrubbing reads and checks each mirror (or the parity blocks in RAID-5/6) for every block of the object; inconsistent data marks all data in that stripe as bad. Repair can happen during normal I/O at the DOM owner or by the scrubber; the repair paths for mirrors and RAID-5/6 differ, but in both cases the correct data is rebuilt and written out to the bad location.
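A minimal sketch of the verify-on-read and repair behaviour described above, using Python's zlib CRC32 (a toy RAID-1 model, not VSAN's implementation):

```python
import zlib

BLOCK = 4096  # checksums are tracked per 4 KB block (per the slide)

def crc(data: bytes) -> int:
    return zlib.crc32(data) & 0xFFFFFFFF

def write_block(data: bytes, replicas):
    """Store the block with its CRC32 on every replica."""
    for replica in replicas:
        replica.append([crc(data), data])

def read_block(index: int, replicas) -> bytes:
    """Verify-on-read: serve the first replica whose checksum matches,
    and rewrite any replica that failed verification (the repair path)."""
    for replica in replicas:
        stored_crc, data = replica[index]
        if crc(data) == stored_crc:
            for other in replicas:          # repair corrupted copies
                if crc(other[index][1]) != other[index][0]:
                    other[index] = [stored_crc, data]
            return data
    raise IOError("non-recoverable error: all replicas failed verification")

r1, r2 = [], []
write_block(b"x" * BLOCK, [r1, r2])
r1[0][1] = b"y" * BLOCK                     # simulate silent corruption
print(read_block(0, [r1, r2]) == b"x" * BLOCK)  # True: served from r2, r1 repaired
```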

33 Other new improvements
New in 6.2: Other new improvements
Client Cache: a write-through in-memory read cache using 0.4% of total host memory, up to 1 GB per host, kept "local" to the virtual machine. Low overhead, big impact.
Sparse Swap: reclaims the space used by memory swap; a host advanced option enables setting the swap policy to no space reservation.
IOPS limit per object: a policy-driven capability to limit IOPS per VM or virtual disk, eliminating noisy-neighbour issues and helping manage performance SLAs.

The client cache replaces the 1 MB cache lines used for read-ahead with a larger cache (0.4% of host memory, up to 1 GB) at 4 KB granularity. Preliminary testing with VDI shows some impressive numbers, and it complements CBRC. Data locality is used for the memory cache (as with CBRC): because it is a read-only cache there is no need for a network acknowledgement, and memory latency is low enough that latency is not a concern. Sparse swap is an advanced host-level option (swap is managed by the kernel, not by SPBM) that enables reclaiming the space dedicated to memory swap; on a cluster with 256 GB of memory per host, this yields terabytes of capacity savings at scale and should benefit linked-clone VDI storage utilization.
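The client cache sizing rule is simple enough to express directly (a sketch of the slide's 0.4%-capped-at-1-GB rule):

```python
def client_cache_bytes(host_memory_gb: float) -> int:
    """Client cache size per the slide: 0.4% of host memory, capped at 1 GB."""
    GiB = 1024 ** 3
    return min(int(host_memory_gb * GiB * 0.004), 1 * GiB)

for mem in (64, 256, 512):
    print(mem, "GB host ->", round(client_cache_bytes(mem) / 1024**2), "MB cache")
# 64 GB -> ~262 MB; 256 GB and up hit the 1,024 MB cap
```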

34 Enhanced Virtual SAN Management with New Health Service
New in 6.2: built-in performance monitoring, health and performance APIs and an SDK, storage capacity reporting, and many more health checks.

The Performance Monitoring Service lets you monitor existing workloads from vCenter, so customers needing tactical performance information no longer have to go to vRealize Operations. The performance monitor includes macro-level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per-disk-group stats) without leaving vCenter. It aggregates state across the cluster into a "quick view" of load and latency, and that information can be shared with third-party monitoring solutions directly via API. The performance monitoring service runs on a distributed database stored on VSAN itself, not on vCenter (it can use up to ~255 GB, which is why it asks for a storage policy).

35 Performance, Scale and Availability for Any Application
New in 6.2: performance, scale and availability for business-critical applications: SAP, Horizon and Oracle.
SAP: core apps are ready to be supported, with testing and validated deployments; work is being done on SAP HANA (it may not make launch, but the PE team is working with SAP on it).
Horizon: "Horizon should be deployed with VSAN"; tightly integrated cloud management, and bundled Virtual SAN licenses make it the lowest-cost VDI storage.
Oracle: Oracle RAC is supported, with testing and validated deployments; the PE team has put together some impressive transaction numbers for Oracle. Exchange DAG and Microsoft Always On are supported, as they already were.

36 Wrapping up

37 Of course we have a vision, and the vision isn't too far out; it is just ahead

38 VMware Virtual SAN: Generic Object Storage Platform
Vision: VMware Virtual SAN as a generic object storage platform. Diagram: VMFS, block, file and REST interfaces layered on Virtual SAN, running on VMware vSphere.

We are about to wrap up this session, but I want to leave you with one more thing. VSAN is being extended to serve as a generic storage platform, one which, in addition to the traditional virtualization use cases of VMs and vSCSI disks, can also serve storage through new abstractions: lightweight block drivers (perhaps using the NVMe protocol), files, and REST APIs. That storage can be made available to individual hosts, or shared according to the protocol semantics across many hosts and application instances in the infrastructure. Besides that, VMware has been prototyping a distributed file system that leverages Virtual SAN as its core storage provider and serves storage capacity in an easy and distributed fashion to thousands of clients. The future is bright, and this is just the beginning. (Slide icons: OpenStack, Pivotal Cloud Foundry, nginx, Mesos, Docker.)

39 Three Ways to Get Started with Virtual SAN Today
1. Online Hands-on Lab (free): test-drive Virtual SAN right from your browser with an instant hands-on lab; register and your free, self-paced lab is up and running in minutes.
2. Download Evaluation (free): a 60-day free Virtual SAN evaluation at vmware.com/go/try-vsan-en; VMUG members get a 6-month EVAL, or a 1-year EVALExperience for $200.
3. VSAN Assessment (free): reach out to your VMware partner, SEs or rep for a free VSAN assessment, with results in just 1 week. The VSAN Assessment tool collects and analyzes data from your vSphere storage environment and provides technical and business recommendations.

Learn more at vmware.com/go/virtual-san: Virtual SAN product overview video, Virtual SAN datasheet, Virtual SAN customer references, Virtual SAN Assessment, the VMware Storage Blog, and @vmwarevsan.

40 With that I would like to thank you and open the floor for questions

