1
EMC + VMware + VNX/VMAX Matt Cowger PSE – VCDX 52
This presentation is part of a series of PPTs that cover EMC’s Technology Direction. They are intended to give the audience an idea of where we see technology trends moving and give some forward indication of how EMC is investing to help drive the significant technology shifts that will give our customers competitive advantages in their markets. Matt Cowger PSE – VCDX 52
2
Current Report: http://bit.ly/1eQSWvp
Integration points: details and comparison. Current report: According to a Wikibon survey conducted in 2013, EMC with VNX was voted best product for VMware integration, based on two factors: the importance of the integration from a customer's perspective, and the quality of the integration. EMC VNX led the way in General Storage, Unified Storage and Block-only Storage. One of the most important things EMC has to offer virtualized/VMware environments is its high degree of integration, making our storage platforms the best suited of all for these environments. Wikibon has conducted a very technical and deep study of the number, richness and functionality of the integration points that several manufacturers provide in their storage products, and for the third year in a row EMC has been ranked number one. IMPORTANT: the total number of integration points covered spans several of our product lines, including VMAX, VNX, Isilon, and even Iomega; please review the Wikibon material for further details.
3
VMware Storage APIs APIs are a “family”
vStorage API for Array Integration (VAAI), vStorage API for Storage Awareness (VASA), vStorage API for Multipathing, vStorage API for Data Protection, vStorage API for Site Recovery Manager. This is a set of VMware APIs that EMC has heavily leveraged to achieve deep integration into the VMware stack. This presentation takes a deeper look at the vStorage API for Array Integration (VAAI), the vStorage API for Storage Awareness (VASA), and the vStorage API for Multipathing. Data Protection is covered in the BRS modules; Site Recovery Manager is out of scope.
4
vStorage APIs for Array Integration
VAAI (vStorage API for Array Integration) is a set of primitives implemented at the array level to offload storage-related activities from the ESX server to the array: Thin Provisioning (block), Thin Provisioning Stun (block), Full Clone (file), Extended Statistics (file), Space Reservations (file), Hardware Accelerated Locking (block), Hardware Accelerated Zero (block), and Hardware Accelerated Copy (block). Write Same/Zero. What: 10x less I/O for common tasks. How: eliminate redundant and repetitive write commands; just tell the array to repeat the pattern via SCSI commands. Fast/Full Copy. What: 10x faster VM deployment, clone, snapshot, and Storage vMotion. How: leverage the array's ability to mass copy, snapshot, and move blocks via SCSI commands. UNMAP. What: recover thin-provisioned space. How: inform the array that blocks may be discarded. Hardware Offloaded Locking. What: 10x more VMs per datastore. How: stop locking whole LUNs and lock only the affected blocks. Thin Provisioning Stun. What: never have an out-of-space disaster. How: report the array's thin-provisioning state to ESX.
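To verify these offloads on a host, the VAAI status of each device and the host-side offload settings can be checked from the ESXi shell (a minimal sketch for ESXi 5.x; the datastore name below is a placeholder):
  # Show which VAAI primitives the array reports for each block device
  esxcli storage core device vaai status get
  # Confirm the host-side offloads are enabled (1 = enabled)
  esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
  esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
  esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
  # Reclaim dead space on a VMFS datastore (ESXi 5.5+ syntax; earlier 5.x releases use vmkfstools -y)
  esxcli storage vmfs unmap -l MyDatastore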
5
vStorage APIs for Storage Awareness
VASA is an extension of the vSphere Storage APIs: vCenter-based extensions that expose array details via server-side plug-ins, or Vendor Providers. It tells the vCenter administrator: "This datastore is protected with RAID 5, replicated with a 10 minute RPO, snapshotted every 15 minutes, and is compressed and deduplicated." This functionality helps drive the concept of Profile Driven Storage. That is where our new API comes into play. vStorage APIs for Storage Awareness (VASA) is a new set of APIs that enables vCenter to see the capabilities of the storage array's LUNs/datastores, making it much easier to select the appropriate disk for virtual machine placement. Storage capabilities such as RAID level, thin or thick provisioning, replication state and much more can now be made visible within vCenter. VASA eliminates the need to maintain massive spreadsheets detailing the storage capabilities of each LUN in order to guarantee the correct SLA to virtual machines. VASA is a VMware-defined API to display storage information through vCenter, designed for heterogeneous storage environments, and a subset of the EMC Virtual Storage Integrator (VSI) plug-ins for VNX. It provides the VMware administrator with visibility into basic storage components (arrays, storage processors, I/O ports, LUNs) and facilitates intelligent conversations between storage and VMware administrators.
6
How VASA Works: VASA allows a storage vendor to develop a software component called a VASA Provider for its storage arrays. A VASA Provider gets information from the storage array about available storage topology, capabilities, and state. vCenter Server 5.x connects to the VASA Provider in front of the EMC storage, and the information from the provider is displayed in the vSphere Client. This slide gives a deeper picture of the way VASA works.
7
Storage Policy: Once the VASA Provider has been successfully added to vCenter, the VM Storage Profiles view displays the storage capabilities reported by the Vendor Provider. With VASA, storage vendors can provide vSphere with information about the storage environment, enabling tighter integration between storage and the virtual infrastructure: storage health status, configuration, capacity and thin-provisioning information, and so on. For the first time we have an end-to-end story: the storage array informs the VASA storage provider of its capabilities, and the storage provider informs vCenter, so users can see storage array capabilities from the vSphere Client. Through the new VM Storage Profiles, these storage capabilities can then be displayed in vCenter to assist administrators in choosing the right storage in terms of space, performance and SLA requirements. This information enables the administrator to take the appropriate actions based on health and usage information.
8
VM Storage Profile Compliance
Once the properties of a storage system are known and the properties required for a given virtual machine are understood, we can take these properties and assign them to a required set of information for the VM. For example, this virtual machine requires gold-level storage and has been placed on a gold-level storage system, as defined by VASA. In fact, you can see here that the system is in compliance with its profile, the little green dot down there. If it were not in compliance, it would be red, and we could actually even prevent an administrator from moving a VM onto non-compliant storage, thus potentially preventing a downtime or other negative scenario.
9
vSphere 5 Support* VAAI (VMware API for Array Integration)
offloads storage-related activities from the ESX server to the VNX system. VASA (VMware API for Storage Awareness) provides vSphere with system configuration data to support automated policy-based provisioning. VAAI is a storage integration feature that increases virtual machine scalability. VAAI consists of a set of APIs that allows vSphere to offload specific host operations to EMC storage arrays. These are supported with VMFS and RDM volumes. Note that you must have ALUA mode (failovermode=4) configured; a quick host-side check is sketched below. VAAI 2 includes block enhancements and file support. Similarly, VASA describes the storage system configuration and capabilities to vCenter, permitting vCenter services such as SDRS to perform automated policy-based provisioning. As of VNX OE for File V7.1 and OE for Block R32, VASA is natively supported on the array, and VASA also now supports NFS. * Requires VNX OE File V7.0.3x+ or VNX OE Block R31.5+ ** Currently available as Tech Preview in View 5.1
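One way to confirm that a VNX device was claimed in ALUA mode is to check which Storage Array Type Plugin (SATP) the host selected for it (a minimal sketch for ESXi 5.x; the naa identifier is a placeholder, and the expected VMW_SATP_ALUA_CX claim assumes a CLARiiON/VNX initiator registered with failovermode 4):
  # Show the SATP and PSP chosen for a specific device; an ALUA-mode VNX LUN typically claims as VMW_SATP_ALUA_CX
  esxcli storage nmp device list -d naa.60060160xxxxxxxx
  # Or list every device with its SATP/PSP to spot any that did not claim as expected
  esxcli storage nmp device list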
10
Virtualization Management
Virtual Storage Integrator (VSI): an integrated point of control to simplify and speed VMware storage management tasks. One unified storage tool for all VMAX, CLARiiON, Celerra, VNX series, and VNXe series arrays. EMC Virtual Storage Integrator: EMC also offers a universal storage plug-in that integrates the vCenter Server plug-ins for all EMC arrays into one software product. This enables VMware administrators to provision and protect their VMware storage across EMC Symmetrix, CLARiiON, Celerra, VNX series, and VNXe series platforms, delivering an integrated point of control to simplify and speed VMware storage management tasks.
11
Empowering the VMware Administrator
Virtual Storage Integrator (VSI) vCenter plug-in: provision storage within vCenter, configure FAST VP tiering policies, provision for VDI (View & XenDesktop). EMC Virtual Storage Integrator (VSI) is targeted at the VMware administrator. Typically, the VMware administrator does not have conversations with the storage admin, hence the need for a tool like VSI to bridge that gap. VSI supports VNX provisioning within vCenter, gives full visibility into physical storage, and increases management efficiency. VMware administrators can use VSI to: provision VNX/VNXe storage (create VMFS and NFS datastores, RDM volumes, with Access Control Utility support); for VNX Block, get integrated support for Virtual Provisioning and configure FAST VP tiering policies; clone virtual machines on VNX File (FullClone, FastClone) and provision/clone and refresh clone VMs to VMware View; compress virtual machines on VNX File, reducing the storage space for VMs through deduplication and compression and uncompressing VMs on demand; and leverage storage efficiencies via compression and fast clone technology. VSI supports VMAX, VMAXe, VNX, VNXe and VPLEX. It also helps troubleshoot through the storage layer with end-to-end physical-to-virtual mapping, and increases management efficiency by letting administrators monitor and manage from vCenter with the same views and controls.
12
Virtual Storage Integrator (VSI) 5.6
VSI vSphere Plug-in: the EMC plug-in for vCenter solutions, a plug-in framework for VMware vSphere that consists of several optional features. Unified Storage Management: provision XtremIO, VPLEX, VMAX, VNX, VNXe, Celerra, and CLARiiON; (de)compress and clone VMs on VNX/VNXe File; native VM clone across VMFS datastores. Storage Viewer: view storage mapping and connectivity details for XtremIO, VMAX, CLARiiON, Celerra, VNX, VNXe, and VPLEX devices. Path Management: change the multipath policy on devices based on both storage class and virtualization objects. RecoverPoint Management: control and view EMC RecoverPoint and VMware SRM related objects. The VSI plug-in consists of several optional features on top of the simple Storage Viewer functionality offered by default. We offer a Unified Storage Management option, enabling customers to provision their storage from multiple different systems; a Storage Viewer that simply provides storage mapping and connectivity details; and a Path Management plug-in that allows customers to change the multipath policies on devices based on both storage class and their virtualization objects. This, in particular, has a lot of value, as it fills a gap that exists within VMware today. We also support RecoverPoint Management, allowing customers to understand their replication options right from within the vCenter interface.
13
VSI 5.6: Unified Storage Management
Provision VNX/VNXe storage: create VMFS datastores, create RDM volumes, create NFS datastores, extend VMFS/NFS datastores, configure FAST VP tiering policies, with legacy support for Celerra and CLARiiON. Clone virtual machines: Full Clone (array-accelerated copy on NFS), Fast Clone (space-optimized copy on NFS-enabled storage), Native Clone (array-accelerated copy on VMFS for VAAI); provision cloned VMs to VMware View and Citrix XenDesktop; refresh desktops in VMware View and Citrix XenDesktop. (De)compress and (re)deduplicate virtual machines: block compression and block deduplication reduce the storage space for VMs on VNX/VNXe File, and VMs can be decompressed on demand. Unified Storage Management: to go a bit further on unified storage management, you'll see that we enable users to create any kind of datastore, including NFS, RDMs or VMFS datastores, on supported systems. We can do full clones, fast clones and fully native clones for VAAI-enabled storage, and we can decompress and deduplicate or re-deduplicate virtual machines on relevant storage systems, increasing storage efficiency while maintaining operational integrity.
14
VSI 5.6: Storage Viewer and Unisphere VM-awareness: automatic discovery of virtual machines and ESX servers, end-to-end virtual-to-physical mapping, and automated virtual infrastructure reporting. To help address the management-complexity gap, EMC offers virtualization-aware Unisphere. Unisphere's virtualization-aware capabilities deliver simple automation to enable end-to-end virtual-to-physical mapping of physical storage and servers, as well as VMware ESX servers and the virtual machines that reside on them. Through advanced search capabilities, users can quickly identify a particular virtual machine and ensure that the appropriate amount of storage is allocated to it. The ability to automate management and discovery of virtual environments from the array is a unique capability for CLARiiON and demonstrates EMC's commitment to helping users maximize the value of their storage infrastructure.
15
Demo: http://www.youtube.com/watch?v=r8HqAdpE91k
VSI 6.0 for Flex Client: a new-style web interface. Current support: ViPR. "Soon" support: VNX, VMAX. VMware is moving away from the Windows client interface on the vCenter side and developing a whole new set of web interfaces for management. EMC is taking this same web-based approach in developing the integration management interfaces for its products; future versions of VSI will be more vCenter Web Client oriented. Demo:
16
vStorage API for Multipathing
Native multipathing provides only channel failover: the complexity of mapping active/standby channels remains, I/O-intensive VMs must be load-balanced manually, and VM mobility adds further complexity. PowerPath/VE provides failover and load balancing: all channels between the ESX server and the storage ports are active and used to balance load. PowerPath/VE simplifies channel management in VMware environments, uses all channels for load balancing and failover, and constantly adjusts I/O path usage. PowerPath/VE also works in Hyper-V environments. VMware NMP: complex to map active/standby channels for hundreds to thousands of VMs; I/O-intensive VMs must be manually load balanced to maintain performance and avoid impact on other VMs; added complexity from VM mobility (via VMware vMotion, Distributed Resource Scheduler, High Availability, etc.). PowerPath/VE: simplifies channel management in VMware environments; leverages all channels for load balancing and failover for high and predictable performance levels; constantly adjusts I/O path usage as I/O loads from VMs change.
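To see how many paths a host actually has to each device, and how PowerPath/VE is using them where it is installed, the following host-side commands can be used (a minimal sketch; the rpowermt host name and device are placeholders, and rpowermt assumes the PowerPath/VE rtools package is installed on a management station):
  # Native multipathing: list every path and its state for each device
  esxcli storage core path list
  # PowerPath/VE (run from an rtools management host): show per-path state and I/O statistics
  rpowermt display dev=all host=esx01.example.com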
17
PowerPath/VE: increased application performance and availability. PowerPath/VE's intelligent load balancing recognizes that the paths are not equal and redirects more I/O to the less busy paths. Optimizing the I/O paths results in overall greater throughput for the PowerPath/VE host, while MPIO with Round Robin continues to use all paths equally, resulting in longer I/O completion times and less throughput. PowerPath/VE compared to VMware native multipathing: PowerPath/VE is the industry's leading multipathing solution, using patented algorithms to intelligently and efficiently balance load across VMs while also orchestrating path failover and failback for added resilience. PowerPath/VE decreases latency and increases resilience for better application availability in growing virtual environments. In a typical SAN configuration, large or small, paths will rarely be perfectly balanced. PowerPath/VE's intelligent load balancing recognizes that the paths are not equal. It redirects more I/O to less busy paths while maintaining statistics on all of them. Optimizing the I/O paths results in greater throughput. By avoiding the busy paths, PowerPath/VE can get I/Os completed more quickly. Round Robin continues to use all paths equally; PowerPath/VE will reroute I/Os while Round Robin doesn't recognize the difference among path states. Source: ESG Lab: EMC PowerPath/VE - Automated Path Optimization for VMware Virtual Environments, April 2012
18
Path Management: PowerPath/VE is strongly recommended for vSphere environments. Avoid POCs that do not represent real environments. The NMP policy is available with vSphere; use the Round Robin policy for Symmetrix and VNX arrays (esxcli nmp satp setdefaultpsp -P VMW_PSP_RR -s VMW_SATP_SYMM) and set the IOPS parameter; see the VNX / VMAX TechBooks for the exact command, and the vSphere 5.x syntax sketched below. As a result of these benefits, we strongly recommend PowerPath/VE for nearly every vSphere environment. In order to sell it effectively, we recommend that you avoid POCs that do not represent a real environment. Simply installing PowerPath/VE on an unused cluster with only, say, four or eight hosts, running against an array performing no other workload, will show no significant benefit. We strongly recommend that you encourage your customers to try it on their real environment for 30, 60, or even 90 days, to make sure they understand the full value and benefit. That being said, we do fully support the native multipathing policy available within vSphere. We strongly recommend that customers use the Round Robin policy, which is not always the default, for all of our arrays, and that they set the IOPS parameter to 1. This enables the best possible balancing that the Round Robin algorithm is capable of, although, as mentioned before, it is not as capable as the PowerPath/VE algorithms. See the VNX and VMAX TechBooks for the exact commands used to set the IOPS parameter to 1 and the Round Robin policy for all systems.
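For reference, the equivalent commands on ESXi 5.x look like this (a minimal sketch; the slide shows the older vSphere 4.x syntax, and the naa identifier below is a placeholder):
  # Make Round Robin the default PSP for devices claimed by the Symmetrix SATP
  esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_SYMM
  # Set the Round Robin IOPS parameter to 1 on a specific device, per the TechBook recommendation
  esxcli storage nmp psp roundrobin deviceconfig set --device=naa.60060160xxxxxxxx --iops=1 --type=iops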
19
Storage DRS
20
Datastore Cluster A group of datastores called a “datastore cluster”
Think: Datastore Cluster without Storage DRS = simply a group of datastores (like a datastore folder). Datastore Cluster + Storage DRS = a resource pool analogous to a DRS cluster. Datastore Cluster + Storage DRS + Profile-Driven Storage = nirvana. Datastore clusters form the basis of Storage DRS. A datastore cluster is a collection of datastores aggregated into a single unit of consumption from an administrator's perspective. When a datastore cluster is created, Storage DRS can manage the storage resources comparably to how DRS manages compute resources in a cluster. As with a cluster of hosts, a datastore cluster is used to aggregate storage resources, enabling smart and rapid placement of new virtual machines and virtual disks, and load balancing of existing workloads. The diagram shows this nicely: a 2 TB datastore cluster made up of four 500 GB datastores. When you create a VM you can select the datastore cluster as opposed to individual LUNs.
21
Storage DRS – Initial Placement
Initial placement covers VM/VMDK create, clone, and relocate operations. When creating a VM you select a datastore cluster rather than an individual datastore, and SDRS recommends a datastore based on space utilization and I/O load. By default, all the VMDKs of a VM are placed on the same datastore within a datastore cluster (the VMDK affinity rule), but you can choose to have VMDKs placed on different datastores. Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for the virtual machine or virtual machine disk, after which a recommendation for initial placement is made based on I/O and space capacity. Initial placement in a manual provisioning process has proven to be very complex in most environments, and important provisioning factors like current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing, it is initial placement that most people will start off with and benefit from the most, as it reduces the operational overhead associated with provisioning virtual machines. (Diagram: a 2 TB datastore cluster of four 500 GB datastores with 300 GB, 260 GB, 265 GB and 275 GB available.)
22
Storage DRS Operations – IO Thresholds
When using EMC FAST VP, use SDRS, but disable the I/O metric. This combination gives you the simplicity benefits of SDRS but adds the economic and performance benefits of automated tiering across SSD, FC, SAS, and SATA, with 10x (VNX) and 100x (VMAX) higher granularity (sub-VMDK). SDRS triggers action on either capacity and/or latency. Capacity statistics are constantly gathered by vCenter; the default threshold is 80%. The I/O load trend is evaluated (by default) every 8 hours based on the past day's history, with a default threshold of 15 ms. Storage DRS performs a cost/benefit analysis. For latency, Storage DRS leverages Storage I/O Control functionality. The first block shows the thresholds on which Storage DRS is triggered: 80% for utilized space and 15 milliseconds for I/O latency, meaning the Storage DRS algorithm is invoked when these thresholds are exceeded. In the case of utilized space this happens when vCenter collects the datastore statistics and notices the threshold has been exceeded; I/O load balancing is slightly different, as the second block shows. Every 8 hours, currently, Storage DRS evaluates the I/O imbalance and makes recommendations if and when the thresholds are exceeded. Note that these recommendations are only made when the difference between the source and destination is at least 5% and the cost/risk/benefit analysis has a positive result.
23
General Best Practices
24
Effects of Partition Misalignment
The Symmetrix VMAX uses a 64 KB track size. In an aligned system, a 64 KB write is serviced by a single drive. File-system misalignment affects performance in two ways: it causes disk crossings (an I/O broken across two drives) and stripe crossings (an I/O broken across stripe elements). Even if disk operations are buffered by cache, there is a performance impact, and larger I/O sizes are the most affected. For example, assuming the Symmetrix stripe element size of 64 KB, every 64 KB I/O on a misaligned volume would cause a disk crossing. For I/O smaller than the stripe element size, the percentage of I/O that causes a disk crossing can be computed as: Percent of data crossing = (I/O size) / (Stripe Element Size). File-system misalignment affects performance in several ways: misalignment causes disk crossings, an I/O broken across two drives where normally one would service the I/O; misalignment causes stripe crossings, an I/O broken across stripe elements; and misalignment makes it hard to stripe-align large uncached writes. Even if the disk operations are buffered by cache, the effect can be detrimental, as misalignment slows flushing from cache.
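One way to check whether a VMFS partition is aligned is to look at its starting sector on the ESXi host (a minimal sketch; the naa identifier is a placeholder). With 512-byte sectors, a start sector that is a multiple of 128 falls on a 64 KB boundary; datastores created by vSphere 5 start at sector 2048 (1 MB) and are therefore aligned, while guest file systems inside older OS templates may still need to be aligned separately:
  # Show the partition table, including each partition's start and end sectors
  partedUtil getptbl /vmfs/devices/disks/naa.60060160xxxxxxxx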
25
Storage I/O Control (SIOC)
If a customer is licensed for it, we also strongly recommend enabling storage I/O control, or SIOC. Under a default configuration with SIOC disabled, important virtual machines, such as this online store or Microsoft Exchange example, are given the same access to storage performance resources as a relatively unimportant data mining virtual machine. As a result, this data mining virtual machine might be able to completely monopolize all access to the data store, thus depriving the Microsoft Exchange or online store of the access they need to run the business. Storage I/O control prevents this scenario by enabling virtual machines to be given priority access to a data store, using a traditional shares mechanism, thus enabling the online store and Microsoft Exchange to get the performance they need and depriving the less important virtual machine, the data mining VM in this case, from monopolizing all of the storage.
26
Storage I/O Control (SIOC)
Launched with vSphere 4.1 for FC and iSCSI datastores and updated in vSphere 5 to include NFS-mounted datastores. Raw Device Mappings (RDMs) are not supported. Storage I/O Control does not support datastores with multiple extents, and datastores must be managed by a single vCenter server. SIOC is a datastore-wide distributed disk scheduler that uses I/O shares per virtual machine, assigning fewer I/O queue slots to virtual machines with lower shares and more I/O queue slots to virtual machines with higher shares. Storage I/O Control was introduced with vSphere 4.1 for block datastores. In vSphere 5.0 it was updated to include NFS datastores, so every datastore option available within VMware today supports Storage I/O Control. Note that raw device mappings are not supported because they do not go through the standard VMFS mapping functions. Storage I/O Control does not support datastores with multiple extents, and they must be managed by a single vCenter server. The datastore-wide distributed disk scheduler uses I/O shares per virtual machine to identify which virtual machines should win under a contention scenario, assigning fewer I/O queue slots to virtual machines with lower shares and more queue slots to virtual machines with higher shares.
27
Storage I/O Control (SIOC)
How to configure? Enable Storage I/O Control on the datastore: right-click -> Properties -> check Enabled. Storage I/O Control is configured on a per-datastore basis and is relatively simple to enable: simply right-click on the datastore, choose its properties and check Enabled. By default, all virtual machines are given equal access to the storage, preventing the most severe noisy-neighbor scenarios.
28
Storage I/O Control (SIOC)
How to configure? The Advanced dialog shows the default congestion threshold of 30 ms; SIOC will not perform any allocation until this threshold is reached. Check the VMAX and VNX TechBooks for recommended thresholds; EMC recommends leaving the default unless otherwise directed. Storage I/O Control only engages where there is disk contention. How does it identify disk contention? It assumes that any scenario in which the average I/O response time is greater than 30 milliseconds is a contention scenario. As a result, SIOC will not perform any allocation until this threshold is reached. You should check the VMAX and VNX TechBooks to determine the recommended threshold value for a given type of disk and array.
29
General Configuration/Recommendations
Use SIOC: it almost never causes an issue and can help large environments, especially mixed production/test. It can be impacted by pools (external workload). VMs per datastore? No specific recommendation; base it on I/O profiles. Because SIOC does not get involved in scenarios where there is no contention, it is almost always the right thing to enable, as it can prevent issues in large environments, especially mixed production and test environments. Note that SIOC can be impacted by the use of storage pooling, as on VMAX or VNX, because it can detect when an external workload is present that it is unable to control, such as when disks are shared by multiple vCenter servers or by workloads outside the virtualization environment. In these cases, work with your local vSpecialist to determine whether SIOC is the right choice for your customer.
30
Best Practices VNX Block
31
Storage Configuration
Use pools: excellent performance, maximum flexibility, and maximum options for data services (FAST, dedupe, snapshots, etc). Avoid mixing drastically different I/O. Use FAST Cache. Use ALUA / Initiator failover mode 4. We strongly recommend using pools for most VMware environments. They have excellent performance, especially on the most recent code revisions, they provide maximum flexibility, and they provide maximum options for data services such as FAST, dedupe, snapshots, and so on. We do suggest, as in most environments, avoiding mixing drastically different I/O on the same spindles, and as a result you may need to create one, two, three or even four pools for your customer's environment. FAST Cache is extremely effective in virtualization environments and is almost always a good choice to enable. Lastly, we recommend ensuring that ALUA / Initiator failover mode 4 is selected for all initiators that correspond to ESX hosts, as this enables the asymmetric logical unit access behavior that ESX expects and needs for best performance. Note that this is not always the default option chosen when the initiator auto-registers. (An array-side check is sketched below.)
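From the array side, pool layout and FAST Cache state can be reviewed with the Unisphere/Navisphere CLI (a minimal sketch; the SP address and credentials are placeholders, and flag details may vary slightly between VNX OE releases):
  # List storage pools with their drive types, capacity and allocation
  naviseccli -h 10.0.0.1 -user admin -password mypassword -scope 0 storagepool -list
  # Show FAST Cache configuration and state
  naviseccli -h 10.0.0.1 -user admin -password mypassword -scope 0 cache -fast -info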
32
Networking port speed: 10 GbE or 8 Gb FC is a must for high bandwidth requirements. Prefer to use 4 ports across the two SPs for any given host. Use PowerPath/VE, or NMP with Round Robin if not PowerPath, with ALUA. From a connectivity perspective, we strongly recommend 10 Gb or 8 Gb connectivity for the hosts as well as the array for high bandwidth requirements. We prefer to use four ports across the two storage processors for any given host, giving four total paths; this provides the best scenario for failover recovery and the best port and CPU utilization on the array. Also, as mentioned before, PowerPath/VE can provide significant benefits in both response time and bandwidth, and if you do not use PowerPath/VE we strongly recommend the native multipathing Round Robin settings, as detailed in the VNX TechBook.
33
Best Practices VNX / VGx File
34
Review Transactional NAS Best Practices
Host settings: optimal NFS client transfer size; use direct I/O. Data Mover settings: optimal NFS server transfer size; Direct Writes mount option (direct I/O). Robust 10 Gb network. Volume layout: thick LUNs; MVM for single-mount solutions. FAST VP & FAST Cache: Flash capacity should support the active dataset; maximize Flash in FAST Cache before a Flash tier; use thin-enabled file systems for multi-tier pools. Most of the recommendations for a file-based scenario mirror those of a block-based scenario, and they also mirror the transactional NAS best practices. Specifically, we want to focus on the optimal NFS client transfer size and the direct I/O settings, although we'll also continue to look at some of the other options.
35
Choosing the Optimal NFS Transfer Size
Increases the default transfer size to 128 KB. In VNX OE for File 7.0 the maximum transfer size was 32 KB; VNX OE for File 7.1 now defaults to 128 KB, with a maximum of 1 MiB. VMware ESXi 5.1 uses 128 KB. Set it to the size best suited to the application: for example, if the application sends 256 KB I/Os, set it to 256 KB; if you don't know, leave the default. Enable 'Direct Writes'. As of vSphere 5.1 and VNX OE 7.1, both VMware and VNX default to a 128 KB NFS transfer size, which is a reasonably good option. We recommend setting the size best suited to the application; for example, if the application sends 256 KB I/Os, then 256 KB is a reasonable choice. If you don't know, leave it at the default, as it works well for the vast majority of environments. We also strongly recommend enabling the Direct Writes option, also known as the uncached write option; see the sketch below for how this maps to the Data Mover. This can lead to very significant gains; I've seen 30 and even 40 percent performance improvements in environments that are very focused on NFS transactional performance.
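On the VNX File side, Direct Writes corresponds to the uncached mount option on the Data Mover. A minimal sketch, assuming a Data Mover named server_2 and a file system named nfs_fs1 mounted at /nfs_fs1 (both names are placeholders; in practice this is usually enabled from Unisphere in the file system's mount properties, and changing it may require unmounting the file system first with server_umount):
  # Remount the file system with the uncached (Direct Writes) option
  server_mount server_2 -option rw,uncached nfs_fs1 /nfs_fs1
  # Verify the mount options currently in effect
  server_mount server_2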
36
Networking Port Speed 10GbE is a must for high bandwidth requirements
Greater than 100 MB/s to the NAS client; 1 GbE is fine for many general file-sharing environments (home shares). Use jumbo frames end to end. Link aggregation: use LACP (802.3ad) for port redundancy and load balancing, and use Fail-Safe Networking for high-availability switch redundancy. Again, for file we recommend 10 Gb Ethernet wherever possible, and better than 100 MB/s to the NAS client. 1 Gb Ethernet is fine for many general file-sharing environments, like home directories in a VDI environment. Jumbo frames can provide a small performance benefit, generally on the order of 5 to 12 percent, but should not be considered a major performance improver. Using LACP or other link aggregation methods can significantly improve performance as well as failover redundancy for these file environments, and we strongly recommend its use, together with Fail-Safe Networking for high-availability switch redundancy.
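Where jumbo frames are used, the MTU must match end to end: on the Data Mover interface, every switch in the path, the vSwitch, and the VMkernel port used for NFS. On the ESXi side this can be set as follows (a minimal sketch for ESXi 5.x; the vSwitch name, vmknic and Data Mover IP are placeholders):
  # Set the MTU on the standard vSwitch carrying NFS traffic
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  # Set the MTU on the VMkernel interface used for NFS
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  # Verify with a don't-fragment ping of a jumbo-sized payload to the Data Mover interface
  vmkping -d -s 8972 10.0.0.50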
37
Volume Layout Automatic Volume Manager (AVM) maximizes system capacity while improving performance Provides “good” performance Focus on ease-of-use Supports additional considerations besides performance such as high availability AVM may not be the best option for performance Where workload characteristics are not typical for NAS file sharing High concurrency from a single NFS stream to a single NFS export The automatic volume manager generally maximizes the system capacity, while improving performance. It provides “good” performance and focuses on ease of use. It’s useful for the majority of environments. AVM may not be the best option for performance, however, especially where workload characteristics are not typical for NAS file sharing, OLTP databases, for example, or when high concurrency from a single NFS stream to a single NFS export is necessary. In those cases, manual volume management may be the right choice.
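To see which AVM pools exist and how a given file system is laid out, the Control Station CLI can be used (a minimal sketch; the file system name nfs_fs1 is a placeholder):
  # List the AVM storage pools available on the system
  nas_pool -list
  # Show size and layout details for a specific file system
  nas_fs -info nfs_fs1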
38
Flash with File Considerations
Use Flash for FAST Cache before using Flash in a FAST VP tier: FAST Cache granularity is more beneficial for file environments, because the file system distribution policy requires more pool capacity to capture hot spots, which are typically more dispersed than with non-file block access. When using Flash as the highest tier, the tier capacity should be large enough to contain the active dataset. Use thin-enabled file systems to minimize consumption of the highest tier. When it comes to using Flash with file, we generally recommend using it for FAST Cache before using it for a FAST VP tier, because the FAST Cache granularity is more beneficial for file environments. When using Flash as the highest tier, the tier capacity should be large enough to contain the active dataset. We also recommend using thin-enabled file systems to minimize consumption of that highest tier and maximize its value.
39
Best Practices VMAX
40
VMAX/VMware Best Practice
Thin pools: use them! Configure only as many pools as are required; one pool per drive technology and drive size is perfectly acceptable. Don't bind thin devices to an EFD pool if they will be added to a FAST VP tier. Rebalance after adding data devices. So our recommendations are to use thin pools; please, use thin pools. Configure only as many pools as are required. It is perfectly acceptable to have one pool per drive technology and drive size, while also being aware of the customer's data availability requirements. We recommend that you do not bind thin devices to an EFD pool if they will be added to a FAST VP tier; it is usually better to bind them to a middle tier, such as SAS or FC devices, or potentially to a lower NL-SAS or SATA tier if needed. We also recommend running a rebalance after adding data devices to a pool. Note that this may take quite a long time to complete, but it is generally worth it for the customer, as it can significantly reduce hot spots.
41
VMAX/VMware Best Practice
Thin devices (TDEVs): to achieve the best performance results, thin devices should be provisioned (mapped and masked) across 4 or more front-end ports. Make use of striped thin metavolumes when large LUNs are required (the maximum TDEV size is 240 GB); use striped, not concatenated, thin metavolumes. Pre-allocate thin devices for hosts that cannot tolerate the ~1 ms response-time overhead of allocating new extents. When using pools you generally use thin devices (TDEVs), and to achieve the best performance results thin devices should be provisioned, mapped and masked across at least four front-end director CPUs or ports. We recommend using striped thin metavolumes when large LUNs are required, because the maximum size of a single TDEV is 240 GB. In nearly all cases striped metavolumes are the right choice over concatenated metavolumes: they have significantly higher performance and no notable downside on the latest versions of microcode. We recommend pre-allocating thin devices for hosts that cannot tolerate the approximately 1 ms response-time overhead of allocating new extents.
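For reference, thin pool and thin device state can be inspected with Solutions Enabler SYMCLI (a minimal sketch; the SID 1234 is a placeholder and exact flags can vary between Solutions Enabler releases):
  # List thin pools on the array with capacity and allocation detail
  symcfg -sid 1234 list -pool -thin -detail
  # List thin devices and how much of each is currently allocated
  symcfg -sid 1234 list -tdev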
42
Some New Hotness Near Futures
This completes our discussion of best practices. Let's take a look at some near futures that are coming within the VMware and EMC storage integration portfolio to help our customers.
43
Project Mercury Software Defined Data Protection
RecoverPoint 4.0 introduced virtual (software) appliances. Mercury takes this further by embedding the RecoverPoint splitter technology directly into vSphere: individual VM protection, where the VM can be on any vSphere-supported storage platform. It copies I/O for replication by capturing vSCSI commands and supports RDMs, VMDKs, etc. Chad's Demo: Project Mercury is an excellent example of our work around moving the description and use of replication and other technologies to the VM level, rather than the LUN or array level. RecoverPoint 4.0 introduced software (virtual) RecoverPoint appliances, and Project Mercury takes this one step further by embedding the RecoverPoint splitter directly into vSphere, at the VM layer. This gives us protection on a per-VM basis, so the VM can be on any vSphere-supported storage platform, even non-EMC platforms. We copy I/O for replication by capturing vSCSI commands, and we fully support things like RDMs, VMDKs, etc. If you want to see a cool demo of this, check out Chad's demo on YouTube at the link below.
44
vVols Make decisions on a per-VM basis
VMware Virtual Volumes, or vVols, are another upcoming technology that makes things really interesting for storage administrators. Traditionally, administrators have had to create LUNs and then put those LUNs, potentially, into a Storage DRS group for use by multiple VMs. But really, we don't want to make decisions on a per-LUN basis; we want to make decisions on a per-virtual-machine basis. vVols enable administrators to simply take parts of a pool of storage and allocate them directly to given VMs, enabling choices around individual VM replication, snapshots, caching, encryption, and deduplication, all fed with information from VASA so that VMs can properly use specific kinds of storage and replication options. You'll see more of this in the coming months, as EMC World and VMworld approach.
45
vVPLEX & vVNX: Given all of the options that virtualization, including the hypervisor and even storage virtualization, brings to the table, there is certainly a lot of value in seeing whether we can virtualize some of our storage platforms as well. In fact, we have gone down that road with vVPLEX and vVNX.
46
vVPLEX & vVNX VPLEX and VNX as productized virtual machines
Cool use cases: VNX MirrorView/A replication targets on vCHS; VPLEX 'Metro' cluster to vCHS (or another cloud); test/dev/QA. Think of these as productized versions of the VPLEX and VNX simulators that are already available to customers on Powerlink. There are some really interesting and cool use cases. Imagine VNX MirrorView replication targets existing on the VMware vCloud Hybrid Service; we've already demonstrated this, months ago in fact. Imagine having a VPLEX Metro cluster between your own site and the VMware vCloud Hybrid Service or another cloud, and being able to vMotion live between these two clouds; again, this is something we've already demonstrated and plan to sell this year. Lastly, the ability to use these for test, dev and QA purposes will prove extremely valuable for some customers.
47
Laboratory vLab VNX with VMware Integration Unisphere Overview
Lastly, the EMC vLabs has an excellent interactive VNX with VMware integration vLab that you should take to familiarize yourself with many of these integration aspects. Strongly recommend you take a look at it, give it a shot. It’s a pretty quick lab, and customers tend to like it a lot, too.