Published by Gordon Russell. Modified over 9 years ago.
VSP1700 VMware vSphere 5.0 Storage Features Name, Title, Company
2 Disclaimer This session may contain product features that are currently under development. This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technologies or features discussed or presented have not been determined.
3 Cloud Journey to IT Transformation – Accelerate and Amplify
[Diagram: the journey from Infrastructure focus (IT Production) through Application focus (Business Production) to Business focus (IT as a Service), moving from low governance to high governance toward the Enterprise Hybrid Cloud.]
4 Agenda: vSphere 5.0 New Storage Features
- Introduction
- VMFS-5
- vSphere Storage APIs - Array Integration – Phase 2
- vSphere Storage APIs - Storage Awareness
- Profile Driven Storage
- Storage DRS
- Storage vMotion
- Software FCoE
- vSphere Storage Appliance
5 Introduction
Welcome to the New Storage Features in vSphere 5.0 session.
VMware's storage goal in vSphere 5.0: remove some of the complexity from managing storage in vSphere.
How do we achieve this goal?
- Make storage objects much larger & more scalable, reducing the number that need to be managed by the customer.
- Add additional features to the storage objects.
- Help customers make the correct storage provisioning decision for Virtual Machines.
- Remove many time-consuming & repetitive storage-related tasks, including the need for repetitive physical storage provisioning.
6 Current Issues with Managing Storage
A vSphere administrator typically has the following storage tasks:
1. Determine the correct datastore on which to initially place a VM's disk.
2. Continuously monitor datastores for space usage.
3. Continuously monitor datastores for performance/latency.
4. Perform repetitive physical LUN deployments as storage consumption grows.
5. Verify that a VM stays on suitable storage throughout its lifecycle.
Other concerns for a vSphere administrator:
1. Possible mistrust of Thin Provisioning due to Out Of Space situations.
2. Possible mistrust of disk usage reporting on Thin Provisioned LUNs.
This presentation aims to show how VMware is addressing the above issues with new storage features in vSphere 5.0.
7 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion vSphere Storage Appliance Software FCoE
8 VMFS-5 vs VMFS-3 Feature Comparison
9 VMFS-5 New Volume Creation (1 of 2) vSphere 5.0 provides the option of creating either VMFS-3 or VMFS-5. You can only create VMFS-5 on hosts that are running ESXi 5.0. Versions of ESX/ESXi earlier than 5.0 cannot use VMFS-5.
10 VMFS-5 New Volume Creation (2 of 2) Newly created VMFS-5 filesystems now use the GPT (GUID Partition Table) partition format. GPT is needed to enable the creation of volumes larger than 2TB. Previous versions use the MBR (Master Boot Record) partition format.
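The 2TB ceiling mentioned above falls directly out of the on-disk formats. As a quick illustration (plain arithmetic, not VMware code): MBR stores a partition's size as a 32-bit sector count, while GPT uses 64-bit fields.

```python
# Illustrative arithmetic (not VMware code): why MBR-partitioned
# volumes top out at roughly 2TB while GPT goes far beyond it.

SECTOR = 512  # bytes per logical sector on classic disks

# MBR records a partition size as a 32-bit sector count.
mbr_max_bytes = (2**32 - 1) * SECTOR
print(mbr_max_bytes / 2**40)  # just under 2 TiB

# GPT records 64-bit sector counts, so the format itself allows
# volumes far larger than any LUN of the era.
gpt_max_bytes = (2**64 - 1) * SECTOR
assert gpt_max_bytes > mbr_max_bytes
```

With 512-byte sectors, 2^32 sectors is exactly 2 TiB, which is why VMFS-5's move to GPT was a prerequisite for larger volumes.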
11 VMFS-3 to VMFS-5 Non-Disruptive Upgrade The Upgrade to VMFS-5 is clearly displayed in the vSphere Client. This is a non-disruptive upgrade, allowing VMs to remain running on the datastore while it is being upgraded. Best Practice: If you have the luxury of doing so, create a brand new VMFS-5 datastore, and use Storage vMotion to move your VMs to it.
12 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion Software FCoE vSphere Storage Appliance
13 VAAI Improvements
vSphere Storage APIs for Array Integration (VAAI) offloads certain storage-related tasks to the storage array rather than performing them in the VMkernel. This provides improved performance.
Enhancements to VAAI in vSphere 5.0 are in two areas: NAS and Thin Provisioning.
14 VAAI NAS Primitives
With the NAS primitives, we give NAS datastores many of the hardware acceleration/offload features that we introduced for VMFS datastores (iSCSI, FC) in vSphere 4.1.
The following primitives are defined for VAAI NAS:
- Full File Clone – Similar to VMFS block cloning. This primitive allows offline VMDKs to be cloned by the array.
- Reserve Space – Allows creation of thick VMDK files on NAS devices.
- Fast File Clone – Allows creation of linked clone VMs to be offloaded to the array (currently only accessible via VMware View).
15 VAAI NAS Primitives: Thick Disk Creation
Without the VAAI NAS primitives, only the Thin format could be selected when creating Virtual Disks on NFS datastores.
With the VAAI NAS primitives in vSphere 5.0, Lazy Zeroed, Eager Zeroed and Thin formats are all available when creating a virtual disk on an NFS datastore.
[Screenshots: disk format options without VAAI vs. with VAAI NAS]
16 VAAI Thin Provisioning Primitives
As mentioned in the introduction, VMware wants to make the act of physical storage provisioning in a vSphere environment extremely rare. Our vision is that datastores should be incredibly large address spaces & should be able to handle any VM workload. Thin Provisioning is a mechanism which can be used to achieve these goals.
However, the use of Thin Provisioning creates two new problems:
- Dead space accumulation.
- Out-of-space conditions.
17 VAAI Thin Provisioning - Dead Space Reclamation
Dead space is previously written blocks that are no longer used, for instance after a Storage vMotion operation on a VM.
Through VAAI, the storage system can now reclaim these dead blocks:
- Storage vMotion, VM deletion & swap file deletion trigger the Thin Provisioned LUN to free the physical space.
- Storage arrays can now report correctly on used space on Thin Provisioned LUNs, making forecasting for new storage easier.
[Diagram: after a Storage vMotion from VMFS volume A to VMFS volume B, the VM's file data blocks on volume A are released through VAAI.]
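The reclamation idea can be sketched with a toy model (this is purely illustrative Python, not VMware's implementation or the SCSI command set): the array only backs blocks once written, and an UNMAP-style call from the host releases them again.

```python
# Toy model (not VMware code) of dead-space reclamation on a
# thin-provisioned LUN: blocks consume physical space only once
# written, and an UNMAP-style call from the host frees them again.

class ThinLUN:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.allocated = set()        # physically backed blocks

    def write(self, block):
        self.allocated.add(block)     # first write allocates backing

    def unmap(self, blocks):
        # What the VAAI Thin Provisioning primitive enables: the host
        # tells the array which blocks are dead so it can free them.
        self.allocated -= set(blocks)

    def used(self):
        return len(self.allocated)

lun = ThinLUN(size_blocks=1000)
vm_blocks = range(0, 100)
for b in vm_blocks:
    lun.write(b)
print(lun.used())   # 100 blocks consumed on the array

# Storage vMotion moves the VM away; without UNMAP the array would
# still report those 100 blocks as used ("dead space").
lun.unmap(vm_blocks)
print(lun.used())   # 0: space reclaimed, usage reported correctly
```

The key point the slide makes is the last line: because the host now informs the array, the array's used-space reporting finally matches reality.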
18 Can I reclaim space on existing VMFS volumes?
What if I already have an existing VMFS-3 volume which is already Thin Provisioned? Can I reclaim the dead space from it?
Yes. Upgrade to VMFS-5, and then run the following command:
# cd /vmfs/volumes/thin_volume
# vmkfstools -y 99
Attempting to reclaim 99% of free capacity on VMFS-5 file system 'thin_volume'
Done.
19 'Out Of Space' User Experience with VAAI Extensions
[Diagram: on space exhaustion, the affected VMs are paused while the LUN stays online awaiting space allocation; a space exhaustion warning surfaced by VAAI appears in the UI; the administrator then either Storage vMotions the VM's disks away or adds more space to the datastore.]
20 VAAI Thin Provisioning Benefits
VAAI Thin Provisioning solves the previous two problems of dead space & out-of-space conditions.
Dead space reclamation:
- Informs the array about the datastore space that is freed when files are deleted or removed from the datastore by Storage vMotion. The array can then reclaim the freed blocks of space.
Monitoring of space usage on Thin Provisioned datastores:
- Avoids running out of physical space. A new advance warning has been added to vSphere 5.0 for the Thin Provisioned out-of-space condition, & there is now a gradual degradation of service (VM pause) on a disk-full condition.
Customers should now feel more comfortable creating & using very large Thin Provisioned datastores.
21 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness (VASA) Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion Software FCoE vSphere Storage Appliance
22 What are vSphere Storage APIs for Storage Awareness? vSphere Storage APIs – Storage Awareness (VASA) allows storage arrays to integrate with vCenter for management functionality via plug-ins called Storage Providers (which are developed by the storage array vendors). This in turn allows a vCenter administrator to be aware of the capabilities of the physical storage devices.
23 What are the benefits of VASA?
For the first time, VMware has an end-to-end storage story:
1. The storage array informs the vendor provider of its capabilities.
2. The vendor provider informs vCenter about these capabilities.
3. If a device is in the vCenter inventory, the device is tagged with its capabilities.
4. Administrators now see storage device capabilities from the vSphere Client.
5. Administrators no longer need to maintain large spreadsheets of storage device characteristics, use complex naming conventions for datastores, or engage their SAN administrator in order to be sure that their VMs are being provisioned to the correct storage type.
The visibility of the storage capabilities is also a significant enabler for the next feature that we are about to discuss, Profile Driven Storage.
24 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion Software FCoE vSphere Storage Appliance
25 Profile Driven Storage
vSphere 5.0 introduces another feature called Profile Driven Storage.
Benefits:
- Profile Driven Storage makes the initial placement of a VM error free.
- This feature also enables VMs to remain compliant with their pre-defined storage requirements.
Administrators create profiles which contain storage characteristics. These storage characteristics can be surfaced via vSphere Storage APIs – Storage Awareness or can be user-defined business tags (e.g. gold, silver, bronze).
A VM can be regularly checked for compliance to ensure that the storage on which it is deployed has the correct & necessary storage capabilities.
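At its core, the compliance check described above is a set comparison. A minimal sketch (the function and capability names are illustrative, not the vSphere API): a VM is compliant when its datastore offers every capability its storage profile requires.

```python
# Minimal sketch (names are illustrative, not the vSphere API) of the
# check Profile Driven Storage performs: a VM is compliant when its
# datastore advertises every capability its storage profile requires.

def is_compliant(profile_capabilities, datastore_capabilities):
    return set(profile_capabilities) <= set(datastore_capabilities)

# Hypothetical capabilities, whether surfaced via VASA or user-defined tags.
gold_profile = {"replication", "ssd"}
datastores = {
    "ds-gold":   {"replication", "ssd", "dedup"},
    "ds-bronze": {"dedup"},
}

print(is_compliant(gold_profile, datastores["ds-gold"]))    # True
print(is_compliant(gold_profile, datastores["ds-bronze"]))  # False
```

Running the same check periodically against a VM's current datastore is what lets vSphere flag a VM as Non-Compliant if it ever lands on the wrong tier.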
26 Storage Capabilities & Profile Driven Storage
[Diagram: storage capabilities, surfaced by the Storage Awareness APIs or user-defined, are referenced by a VM Storage Profile, which is associated with a VM. Compliant – the VM is on a datastore with the correct storage capabilities. Non-Compliant – the VM is on a datastore with incorrect storage capabilities.]
27 Profile Driven Storage Compliance Policy Compliance is visible from the Virtual Machine Summary tab or from VM Storage Profiles.
28 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion Software FCoE vSphere Storage Appliance
29 Storage DRS Benefits
VM deployment without Storage DRS:
- Manually identify the datastore with the most available disk space.
- Manually validate that the latency threshold hasn't been reached.
- Manually ensure that there are no conflicts with other virtual machines placed on the same datastore.
- Or ignore all that, create the Virtual Machine & hope for the best.
VM deployment with Storage DRS:
- Automatic selection of the best datastore for your initial VM placement.
- Avoids hotspots, disk space imbalances & I/O imbalances.
- Advanced balancing mechanism to avoid storage performance bottlenecks or out-of-space problems.
- Smart placement rules: help avoid placing VMs with a similar task on the same datastore, and help keep virtual machines together when required.
30 Datastore Cluster
An integral part of SDRS is a group of datastores called a datastore cluster.
- Datastore cluster without Storage DRS – simply a group of datastores.
- Datastore cluster with Storage DRS – a load-balancing domain similar to a DRS cluster, but for storage.
A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS which makes it more than that.
[Diagram: four 500GB datastores grouped into a 2TB datastore cluster.]
31 Storage DRS Operations – Initial Placement
When creating a VM, you now select a datastore cluster rather than an individual datastore, and let SDRS choose the appropriate datastore. SDRS will select a datastore based on space utilization and I/O load.
By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs assigned to different datastores.
[Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.]
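To make the placement idea concrete, here is a deliberately simplified sketch in Python (not the actual SDRS algorithm; the threshold and datastore figures are illustrative): among the cluster's datastores, prefer the one with the most free space whose latency is under the threshold.

```python
# Illustrative sketch of SDRS-style initial placement (not VMware's
# actual algorithm): filter out datastores over the latency threshold,
# then pick the one with the most free space.

def pick_datastore(datastores, latency_threshold_ms=15):
    candidates = [d for d in datastores if d["latency_ms"] < latency_threshold_ms]
    if not candidates:
        return None                      # no safe placement in this cluster
    return max(candidates, key=lambda d: d["free_gb"])

# A hypothetical four-datastore cluster, echoing the slide's figures.
cluster = [
    {"name": "vol1", "free_gb": 300, "latency_ms": 10},
    {"name": "vol2", "free_gb": 260, "latency_ms": 5},
    {"name": "vol3", "free_gb": 265, "latency_ms": 11},
    {"name": "vol4", "free_gb": 275, "latency_ms": 40},  # too slow, excluded
]
print(pick_datastore(cluster)["name"])  # vol1
```

The administrator never sees this decision: they pick the datastore cluster, and the member datastore is chosen for them.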
32 Storage DRS Operations – Load Balancing
SDRS triggers on space usage & latency thresholds. The algorithm issues migration recommendations when I/O response time and/or space utilization thresholds have been exceeded, AND there is a significant I/O or space imbalance.
Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds.
[Diagram: a 2TB datastore cluster of four 500GB datastores showing 10ms, 5ms, 11ms and 40ms latency.]
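The two-part trigger condition described above (threshold exceeded AND significant imbalance) can be sketched as follows. This is a toy version: the thresholds, field names and imbalance test are illustrative, not SDRS's real cost model.

```python
# Toy version (illustrative thresholds, not SDRS's real cost model) of
# the load-balancing trigger: recommend a migration only when some
# datastore exceeds a threshold AND the cluster is significantly imbalanced.

def needs_rebalance(datastores, latency_threshold_ms=15,
                    space_threshold_pct=80, min_imbalance_ms=5):
    over = [d for d in datastores
            if d["latency_ms"] > latency_threshold_ms
            or d["used_pct"] > space_threshold_pct]
    if not over:
        return False
    # Significant imbalance: the busiest datastore sits well above the
    # least busy one, so moving a VM would actually help.
    spread = (max(d["latency_ms"] for d in datastores)
              - min(d["latency_ms"] for d in datastores))
    return spread >= min_imbalance_ms

cluster = [
    {"name": "vol1", "latency_ms": 10, "used_pct": 60},
    {"name": "vol2", "latency_ms": 5,  "used_pct": 55},
    {"name": "vol3", "latency_ms": 11, "used_pct": 70},
    {"name": "vol4", "latency_ms": 40, "used_pct": 65},
]
print(needs_rebalance(cluster))  # True: vol4 is over threshold and far above vol2
```

The AND is the important part: a uniformly busy cluster exceeds thresholds everywhere, but migrating VMs within it would churn storage traffic without helping, so no recommendation is issued.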
33 Storage DRS Operations - Datastore Maintenance Mode
Placing a datastore in maintenance mode evacuates all VMs & VMDKs from it.
- If SDRS is automatic, SDRS will use Storage vMotion.
- If SDRS is manual, the administrator has to migrate the VMs.
[Diagram: a 2TB datastore cluster of datastores VOL1, VOL2, VOL3 and VOL4; VOL1 is placed in maintenance mode and its VMs are evacuated to the others.]
34 Storage DRS Operations - Rules
Intra-VM VMDK affinity:
- Keep a Virtual Machine's VMDKs together on the same datastore.
- Maximizes VM availability when all disks are needed in order to run.
- On by default for all VMs.
VMDK anti-affinity:
- Keep a VM's VMDKs on different datastores.
- Useful for separating the log and data disks of database VMs.
- Can select all or a subset of a VM's disks.
VM anti-affinity:
- Keep VMs on different datastores.
- Similar to DRS anti-affinity rules.
- Maximizes availability of a set of redundant VMs.
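The VMDK anti-affinity rule above can be sketched as a placement constraint. This is a hedged, hypothetical illustration (a greedy assignment, not how SDRS actually solves it; all names are invented): each of the VM's disks must land on a different datastore.

```python
# Hedged sketch (not VMware code) of applying an intra-VM VMDK
# anti-affinity rule: each of the VM's disks must land on a
# different datastore within the cluster.

def place_disks_anti_affinity(disks, datastores):
    if len(disks) > len(datastores):
        raise ValueError("not enough datastores to separate all disks")
    # Greedy illustration: pair each disk with a distinct datastore,
    # largest free space first.
    ordered = sorted(datastores, key=lambda d: d["free_gb"], reverse=True)
    return {disk: ds["name"] for disk, ds in zip(disks, ordered)}

cluster = [
    {"name": "vol1", "free_gb": 300},
    {"name": "vol2", "free_gb": 260},
    {"name": "vol3", "free_gb": 265},
]
# A hypothetical database VM with separate data and log disks.
placement = place_disks_anti_affinity(["db.vmdk", "log.vmdk"], cluster)
print(placement)
```

Note the constraint also caps how far the rule can stretch: a VM cannot have more anti-affine disks than there are datastores in the cluster.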
35 Storage DRS Operations – Recommendations When SDRS is in manual mode, recommendations are displayed in the Storage DRS tab: Recommendations display Space Utilization (before & after) for source & destination datastores as well as the current latency values of the source and destination.
36 Profile Driven Storage & Storage DRS Storage DRS and Profile Driven Storage can be used together when multiple types (tiers) of datastores exist in your infrastructure. If all datastores in a datastore cluster have the same storage capabilities, and those capabilities match the VM Storage Profile, then the datastore cluster will be marked as compatible when it comes to choosing an appropriate datastore for a VM deployment.
37 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion Software FCoE vSphere Storage Appliance
38 Storage vMotion Enhancements
In vSphere 5.0, a number of new enhancements were made to Storage vMotion:
- Storage vMotion now supports the relocation of Virtual Machines that have snapshots & linked clones.
- Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Maintenance Mode & Load Balancing (both space & performance).
- In vSphere 5.0, Storage vMotion uses a new mirroring architecture which mirrors the changed disk blocks after they have been copied to the destination, i.e. we fork writes to both source and destination using mirror mode. This means migrations can be done in a single copy operation.
Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy mechanism & should mean more predictable (and shorter) migration times.
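The mirror-mode idea above can be shown with a small conceptual simulation (ordinary Python, not the VMkernel data mover): during a single sequential copy pass, any guest write to an already-copied block is forked to both source and destination, so copied blocks never go stale and no extra pre-copy iterations are needed.

```python
# Conceptual sketch (not the VMkernel code) of vSphere 5.0 mirror mode:
# during one sequential copy pass, guest writes to already-copied blocks
# are forked to both disks, so no iterative pre-copy passes are needed.

source = {i: f"data{i}" for i in range(8)}   # source disk blocks
dest = {}
copied = set()

def guest_write(block, value):
    source[block] = value
    if block in copied:          # mirror mode: fork the write
        dest[block] = value

for block in sorted(source):     # single sequential copy pass
    dest[block] = source[block]
    copied.add(block)
    if block == 3:               # the VM keeps running mid-migration...
        guest_write(1, "new")    # ...and dirties an already-copied block

assert dest == source            # one pass, destination fully consistent
print("migration complete in a single copy pass")
```

Under the old iterative scheme, the write to block 1 would have forced another pre-copy pass over the dirtied blocks; with mirroring, the destination is already consistent when the single pass ends, which is why migration times become more predictable.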
39 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion vSphere Storage Appliance Software FCoE
40 Software FCoE Adapter vSphere 5.0 introduces a new software FCoE adapter. A software FCoE adapter is software code that performs some of the FCoE processing. This adapter can be used with a number of NICs that support partial FCoE offload. Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to Software iSCSI.
41 You are here vSphere 5.0 New Storage Features vSphere Storage APIs - Storage Awareness Introduction VMFS-5 Profile Driven Storage Storage DRS vSphere Storage APIs - Array Integration – Phase 2 Storage vMotion vSphere Storage Appliance Software FCoE
42 vSphere Storage Appliance Introduction
In vSphere 5.0, VMware releases a new storage appliance called the VSA (vSphere Storage Appliance).
This appliance is aimed at our SMB (Small to Mid-Size Business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and therefore do not have shared storage. Without access to a SAN or NAS array, SMB customers are unable to implement many of vSphere's core technologies, such as vSphere HA & vMotion.
Benefits:
- Low cost
- Easy to deploy
- Highly resilient
- Enabler for vMotion & vSphere HA
43 vSphere Storage Appliance Configuration
Each ESXi server has a VSA deployed to it as a Virtual Machine. The appliances use the available space on the local disk(s) of the ESXi servers & present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures.
[Diagram: a VSA VM on each vSphere host presenting a replicated NFS volume, managed via the VSA Manager in the vSphere Client.]
44 Conclusion
vSphere 5.0 has many compelling new storage features to help reduce the complexity of managing storage for administrators:
- vSphere Storage APIs for Storage Awareness (VASA) surface storage characteristics into vCenter.
- A VM's storage compliance can be checked throughout its lifetime via Profile Driven Storage.
- Datastores are much larger than ever before & can contain many more virtual machines due to VAAI enhancements and architectural changes.
- Features such as Storage DRS & Profile Driven Storage will help solve traditional problems with virtual machine initial placement, and avoid datastore space imbalances & hot spots.
- The vSphere Storage Appliance (VSA), a low-cost, highly available and easy-to-deploy storage appliance, now provides shared storage for everyone.
45 Questions? http://blogs.vmware.com/vSphere/Storage @VMwareStorage