Slide 1: VSP1700 VMware vSphere 5.0 Storage Features
Slide 2: Agenda
vSphere 5.0 New Storage Features:
- VMFS-5
- Profile Driven Storage
- Storage DRS
- vSphere Storage APIs - Array Integration (VAAI) - Phase 2
- Storage vMotion
- Software FCoE
- vSphere Storage Appliance
Slide 3: You are here - VMFS-5
Slide 4: VMFS-5 vs VMFS-3 Feature Comparison
(Comparison table shown on slide; not captured in transcript.)
Slide 5: VMFS-5 New Volume Creation (1 of 2)
- vSphere 5.0 provides the option of creating either VMFS-3 or VMFS-5 volumes.
- VMFS-5 can be created only on hosts running ESXi 5.0; versions of ESX/ESXi earlier than 5.0 cannot use VMFS-5.
Slide 6: VMFS-5 New Volume Creation (2 of 2)
- Newly created VMFS-5 file systems use the GPT (GUID Partition Table) partition format.
- GPT is required to create volumes larger than 2TB; previous versions use the MBR (Master Boot Record) partition format, which cannot address partitions beyond 2TB.
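The 2TB ceiling follows directly from the MBR on-disk format: the partition entry stores the size as a 32-bit sector count, and these LUNs use 512-byte sectors. A quick back-of-the-envelope check:

```python
# MBR partition entries hold a 32-bit LBA sector count, so with the
# standard 512-byte sector size the largest addressable partition is:
SECTOR_SIZE = 512          # bytes per sector
MAX_SECTORS = 2 ** 32      # 32-bit sector-count field in the MBR entry

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes / 2 ** 40)  # → 2.0 (TiB)
```

GPT uses 64-bit LBAs, which is why it removes this limit in practice.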
Slide 7: VMFS-3 to VMFS-5 Non-Disruptive Upgrade
- The Upgrade to VMFS-5 option is clearly displayed in the vSphere Client.
- The upgrade is non-disruptive: VMs can remain running on the datastore while it is being upgraded.
- Best practice: if you have the luxury of doing so, create a brand-new VMFS-5 datastore and use Storage vMotion to move your VMs to it.
Slide 8: You are here - vSphere Storage APIs - Array Integration (VAAI) - Phase 2
Slide 9: VAAI Improvements
- vSphere Storage APIs for Array Integration (VAAI) offloads certain storage-related tasks to the storage array rather than performing them in the VMkernel, which improves performance.
- Enhancements to VAAI in vSphere 5.0 fall into two areas: NAS and Thin Provisioning.
Slide 10: VAAI NAS Primitives
- The NAS primitives give NAS datastores many of the hardware acceleration/offload features introduced for VMFS datastores (iSCSI, FC) in vSphere 4.1.
- Full File Clone: similar to VMFS block cloning; allows offline VMDKs to be cloned by the array.
- Reserve Space: allows creation of thick VMDK files on NAS devices.
- Fast File Clone: allows creation of linked-clone VMs to be offloaded to the array (currently accessible only via VMware View).
Slide 11: VAAI NAS Primitives - Thick Disk Creation
- Without the VAAI NAS primitives, only the Thin format could be selected when creating virtual disks on NFS datastores.
- With the VAAI NAS primitives in vSphere 5.0, Lazy Zeroed, Eager Zeroed and Thin formats are all available when creating a virtual disk on an NFS datastore.
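The three formats differ in when space is allocated and when blocks are zeroed. A toy model makes the distinction concrete (the function and return values are illustrative only, not a VMware API):

```python
# Toy model of the three VMDK formats (illustrative only, not a VMware API).
def provisioned_and_zeroed(fmt, size_gb):
    """Return (GB allocated at creation, GB zeroed at creation)."""
    if fmt == "thin":
        return 0, 0                 # allocate and zero on first write
    if fmt == "lazy_zeroed":
        return size_gb, 0           # allocate up front, zero on first write
    if fmt == "eager_zeroed":
        return size_gb, size_gb     # allocate and zero everything up front
    raise ValueError(fmt)

print(provisioned_and_zeroed("eager_zeroed", 40))  # → (40, 40)
```

The Reserve Space primitive is what makes the two thick formats possible on NFS: it lets the host ask the NAS array to pre-allocate the file's full size.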
Slide 12: VAAI Thin Provisioning Primitives
- VMware wants the act of physical storage provisioning in a vSphere environment to be extremely rare: datastores should be very large address spaces, able to handle any VM workload.
- Thin Provisioning is a mechanism that helps achieve these goals, but its use creates two new problems: dead-space accumulation and out-of-space conditions.
Slide 13: VAAI Thin Provisioning - Dead Space Reclamation
- Dead space is previously written blocks that are no longer in use, for example after a Storage vMotion operation moves a VM off a datastore.
- Through VAAI, the storage array can now reclaim the dead blocks: Storage vMotion, VM deletion and swap-file deletion trigger the Thin Provisioned LUN to free the physical space.
- Storage arrays can now report used space on Thin Provisioned LUNs correctly, making forecasting for new storage easier.
(Diagram: after a Storage vMotion from VMFS volume A to volume B, the VM's file data blocks on volume A are released through VAAI.)
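The mechanism behind this is the SCSI UNMAP command: the host tells the array which blocks are dead so the array can return them to its free pool. A minimal sketch of the idea (the class and method names are illustrative; real arrays work at the SCSI block layer):

```python
# Toy model of dead-space reclamation on a thin-provisioned LUN
# (illustrative only; the real mechanism is the SCSI UNMAP primitive).
class ThinLun:
    def __init__(self):
        self.allocated = set()   # physical blocks currently backed by the array

    def write(self, blocks):
        self.allocated |= set(blocks)

    def unmap(self, blocks):
        # Host informs the array these blocks are dead; the array
        # returns them to the free pool.
        self.allocated -= set(blocks)

lun = ThinLun()
lun.write(range(100))        # VM writes 100 blocks
lun.unmap(range(100))        # Storage vMotion moves the VM away; blocks freed
print(len(lun.allocated))    # → 0
```

Without the unmap step, the array would keep reporting those 100 blocks as used even though no file references them, which is exactly the dead-space problem described above.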
Slide 14: Can I reclaim space on existing VMFS volumes?
- What if I already have an existing VMFS-3 volume that is Thin Provisioned? Can I reclaim the dead space from it?
- Yes. Upgrade the volume to VMFS-5, then run the following command:

# cd /vmfs/volumes/thin_volume
# vmkfstools -y 99
Attempting to reclaim 99% of free capacity on VMFS-5 file system thin_volume
Done.
Slide 15: Out-of-Space User Experience with VAAI Extensions
- On space exhaustion, the affected VMs are paused while the LUN remains online, awaiting space allocation.
- A space-exhaustion warning is surfaced in the UI via VAAI.
- To recover, Storage vMotion the VMs' disks to another datastore, or have the administrator add more space to the datastore.
Slide 16: VAAI Thin Provisioning Benefits
- VAAI Thin Provisioning addresses both problems above: dead space and out-of-space conditions.
- Dead space reclamation: informs the array about datastore space that is freed when files are deleted or removed by Storage vMotion, so the array can reclaim the freed blocks.
- Monitoring of space usage on Thin Provisioned datastores avoids running out of physical space: vSphere 5.0 adds an advanced warning for the Thin Provisioned out-of-space condition, and there is now graceful degradation of service (VM pause) on a disk-full condition.
- Customers should now feel more comfortable creating and using very large Thin Provisioned datastores.
Slide 17: You are here - Profile Driven Storage
Slide 18: Profile Driven Storage
- vSphere 5.0 introduces Profile Driven Storage.
- Benefits: error-free initial placement of a VM, and ongoing compliance of VMs with their pre-defined storage requirements.
- Administrators create profiles containing storage characteristics. These characteristics can be surfaced via the VMware Storage APIs for Storage Awareness or can be user-defined business tags (e.g. gold, silver, bronze).
- A VM can be checked regularly for compliance, ensuring that the storage on which it is deployed has the correct and necessary storage capabilities.
Slide 19: Storage Capabilities & Profile Driven Storage
- Storage capabilities are surfaced by the Storage Awareness APIs or are user-defined.
- A VM Storage Profile references storage capabilities and is associated with a VM.
- Compliant: the VM is on a datastore with the correct storage capabilities.
- Non-Compliant: the VM is on a datastore with incorrect storage capabilities.
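The compliance check on this slide boils down to a subset test: a VM is compliant when every capability its storage profile requires is offered by the datastore it lives on. A sketch under that assumption (the data model is illustrative, not the vSphere API):

```python
# Sketch of the Profile Driven Storage compliance check
# (illustrative data model, not the actual vSphere API).
def is_compliant(profile_capabilities, datastore_capabilities):
    """A VM is compliant when the datastore offers every capability
    its VM Storage Profile requires."""
    return set(profile_capabilities) <= set(datastore_capabilities)

gold_profile = {"replication", "ssd"}          # user-defined "gold" tier
datastore_a = {"replication", "ssd", "dedupe"} # capabilities via VASA
datastore_b = {"sata"}

print(is_compliant(gold_profile, datastore_a))  # → True  (compliant)
print(is_compliant(gold_profile, datastore_b))  # → False (non-compliant)
```

Extra capabilities on the datastore (like "dedupe" above) do not break compliance; only missing required capabilities do.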
Slide 20: Profile Driven Storage Compliance
- Policy compliance is visible from the Virtual Machine Summary tab or from the VM Storage Profiles view.
Slide 21: You are here - Storage DRS
Slide 22: Storage DRS Benefits
VM deployment without Storage DRS:
- Manually identify the datastore with the most available disk space.
- Manually validate that the latency threshold hasn't been reached.
- Manually ensure that there are no conflicts with other virtual machines placed on the same datastore.
- Or ignore all that, create the virtual machine, and hope for the best.
VM deployment with Storage DRS:
- Automatic selection of the best datastore for initial VM placement.
- Avoids hotspots, disk-space imbalances and I/O imbalances.
- Advanced balancing mechanism to avoid storage performance bottlenecks or out-of-space problems.
- Smart placement rules help avoid placing VMs with a similar workload on the same datastore, and keep virtual machines together when required.
Slide 23: Datastore Cluster
- An integral part of Storage DRS (SDRS) is the datastore cluster: a group of datastores.
- A datastore cluster without Storage DRS is simply a group of datastores - effectively a datastore folder.
- With Storage DRS, it becomes a load-balancing domain similar to a DRS cluster, but for storage; it is the SDRS functionality that makes it more than a folder.
(Diagram: a 2TB datastore cluster made up of four 500GB datastores.)
Slide 24: Storage DRS Operations - Initial Placement
- When creating a VM, you now select a datastore cluster rather than an individual datastore, and SDRS chooses the appropriate datastore within it.
- SDRS selects a datastore based on space utilization and I/O load.
- By default, all the VMDKs of a VM are placed on the same datastore within the cluster (VMDK affinity rule), but you can choose to assign VMDKs to different datastores.
(Diagram: a 2TB cluster of four 500GB datastores with 300GB, 260GB, 265GB and 275GB available.)
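The placement decision can be sketched as a simple scoring pass over the cluster. This is an illustrative heuristic only, not VMware's actual algorithm, which weighs space utilization and I/O load together; the datastore names and figures mirror the diagram on the slide:

```python
# Sketch of SDRS-style initial placement (illustrative heuristic only).
def place_vm(datastores, vm_size_gb):
    """Pick the datastore with the most free space that still fits the VM,
    preferring lower latency as a tiebreaker."""
    candidates = [d for d in datastores if d["free_gb"] >= vm_size_gb]
    if not candidates:
        raise RuntimeError("no datastore in the cluster can fit this VM")
    return max(candidates, key=lambda d: (d["free_gb"], -d["latency_ms"]))

cluster = [
    {"name": "vol1", "free_gb": 300, "latency_ms": 10},
    {"name": "vol2", "free_gb": 260, "latency_ms": 5},
    {"name": "vol3", "free_gb": 265, "latency_ms": 11},
    {"name": "vol4", "free_gb": 275, "latency_ms": 40},
]
print(place_vm(cluster, 50)["name"])  # → vol1
```

The point of the feature is that the administrator only ever names the cluster; the per-datastore comparison above happens automatically.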
Slide 25: Storage DRS Operations - Load Balancing
- SDRS triggers on space usage and latency thresholds.
- The algorithm issues migration recommendations when the I/O response time and/or space utilization thresholds have been exceeded, AND there is a significant I/O or space imbalance.
- Load balancing is based on I/O workload and space, ensuring that no datastore exceeds the configured thresholds.
(Diagram: a 2TB cluster of four 500GB datastores with latencies of 10ms, 5ms, 11ms and 40ms.)
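The two-part trigger condition (threshold breached AND significant imbalance) can be sketched directly. The 15ms figure reflects the default SDRS I/O latency threshold in vSphere 5.0; the 5ms imbalance margin is a hypothetical stand-in for the real imbalance test, which is more sophisticated:

```python
# Sketch of the SDRS load-balancing trigger (illustrative only; the real
# algorithm also models workload history and the cost of each migration).
LATENCY_THRESHOLD_MS = 15     # default SDRS I/O latency threshold
IMBALANCE_MS = 5              # hypothetical "significant imbalance" margin

def should_recommend_migration(latencies_ms):
    worst, best = max(latencies_ms), min(latencies_ms)
    # Recommend only when some datastore breaches the threshold AND the
    # cluster is significantly imbalanced - one condition alone is not enough.
    return worst > LATENCY_THRESHOLD_MS and (worst - best) > IMBALANCE_MS

print(should_recommend_migration([10, 5, 11, 40]))  # → True
print(should_recommend_migration([10, 5, 11, 12]))  # → False
```

The second case shows why the AND matters: a uniformly busy cluster below the threshold generates no churn, even though latencies differ.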
Slide 26: Storage DRS Operations - Datastore Maintenance Mode
- Evacuates all VMs and VMDKs from the selected datastore.
- If SDRS is in automatic mode, it uses Storage vMotion to migrate the VMs.
- If SDRS is in manual mode, the administrator has to migrate the VMs.
(Diagram: VOL1, in a 2TB datastore cluster of VOL1-VOL4, is placed in maintenance mode.)
Slide 27: Storage DRS Operations - Rules
Intra-VM VMDK affinity (on by default for all VMs):
- Keeps a virtual machine's VMDKs together on the same datastore.
- Maximizes VM availability when all disks are needed in order to run.
VMDK anti-affinity:
- Keeps a VM's VMDKs on different datastores.
- Useful for separating the log and data disks of database VMs.
- Can apply to all or a subset of a VM's disks.
VM anti-affinity:
- Keeps VMs on different datastores; similar to DRS anti-affinity rules.
- Maximizes availability of a set of redundant VMs.
Slide 28: Storage DRS Operations - Recommendations
- When SDRS is in manual mode, recommendations are displayed in the Storage DRS tab.
- Recommendations show space utilization (before and after) for the source and destination datastores, as well as the current latency of the source and destination.
Slide 29: Profile Driven Storage & Storage DRS
- Storage DRS and Profile Driven Storage can be used together when multiple types (tiers) of datastores exist in your infrastructure.
- If all datastores in a datastore cluster have the same storage capabilities, and those capabilities match the VM Storage Profile, the datastore cluster is marked as compatible when choosing an appropriate datastore for a VM deployment.
Slide 30: You are here - Storage vMotion
Slide 31: Storage vMotion Enhancements
- Storage vMotion now supports relocating virtual machines that have snapshots and linked clones.
- Storage vMotion has a new use case, Storage DRS, which uses it for maintenance mode and for load balancing (both space and performance).
- In vSphere 5.0, Storage vMotion uses a new mirroring architecture: changed disk blocks are mirrored after they have been copied to the destination, i.e. writes are forked to both source and destination in "mirror mode". Migrations can therefore be done in a single copy pass.
- Mirroring I/O between the source and destination disks has significant gains compared to the old iterative disk pre-copy mechanism and should yield more predictable (and shorter) migration times.
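The contrast with iterative pre-copy can be sketched in a few lines. This is a heavily simplified model (real Storage vMotion mirrors writes at the block layer, concurrently with the copy, not after it): the key property it illustrates is that mirrored writes keep source and destination converged, so no re-copy pass is needed.

```python
# Sketch of mirror-mode copy (illustrative only; real Storage vMotion
# forks in-flight writes at the block layer, concurrently with the copy).
def mirror_mode_copy(disk, incoming_writes):
    """Single-pass copy: writes arriving during the migration are forked
    to both source and destination, so both stay current."""
    dest = {}
    for block, data in disk.items():       # one sequential copy pass
        dest[block] = data
    for block, data in incoming_writes:    # mirrored writes ("mirror mode")
        disk[block] = data                 # source stays current
        dest[block] = data                 # destination stays current
    return dest

src = {0: "a", 1: "b"}
dest = mirror_mode_copy(src, [(1, "b2")])  # block 1 changes mid-migration
print(dest == src)  # → True: converged after a single pass
```

Under iterative pre-copy, the changed block 1 would have required another copy iteration; with mirroring it never diverges in the first place.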
Slide 32: You are here - Software FCoE
Slide 33: Software FCoE Adapter
- vSphere 5.0 introduces a new software FCoE adapter: software code that performs some of the FCoE processing.
- The adapter can be used with a number of NICs that support partial FCoE offload.
- Unlike the hardware FCoE adapter, the software adapter must be activated, similar to software iSCSI.
Slide 34: You are here - vSphere Storage Appliance
Slide 35: vSphere Storage Appliance Introduction
- In vSphere 5.0, VMware releases a new storage appliance: the vSphere Storage Appliance (VSA).
- The appliance is aimed at SMB (small to mid-size business) customers who may not be in a position to purchase a SAN or NAS array for their virtual infrastructure, and who therefore lack shared storage.
- Without access to a SAN or NAS array, SMB customers are unable to implement many of vSphere's core technologies, such as vSphere HA and vMotion.
- Benefits: low cost, easy to deploy, highly resilient, and an enabler for vMotion and vSphere HA.
Slide 36: Conclusion
- vSphere 5.0 has many compelling new storage features that reduce the complexity of managing storage for administrators.
- The VMware Storage APIs for Storage Awareness (VASA) surface storage characteristics into vCenter.
- A VM's storage compliance can be checked throughout its lifetime via Profile Driven Storage.
- Datastores are larger than ever before and can contain many more virtual machines, thanks to VAAI enhancements and architectural changes.
- Storage DRS and Profile Driven Storage help solve the traditional problems of virtual-machine initial placement and of datastore usage imbalances and hot spots.
- The vSphere Storage Appliance (VSA), a low-cost, highly available and easy-to-deploy storage appliance, now provides shared storage for everyone.
Slide 37: Questions?
http://blogs.vmware.com/vSphere/Storage
@VMwareStorage