Managing Storage Requirements in VMware Environments
October 2009
Best Practices for Configuring Virtual Storage
1. Optimal I/O performance first
2. Storage capacity second
3. Base your storage choices on your I/O workload
4. Pooling storage resources can lead to contention
Source: http://www.vmware.com/technology/virtual-storage/best-practices.html
#1 Optimal I/O Performance First
Server Virtualization = Legacy Storage Pain Points
- Application sprawl
- Dynamic performance allocation
- Capacity over-allocation
- Page sharing
- Ballooning
- Swapping
- Caching
Scalable Performance: SPC-1 Comparison
[Chart: SPC-1 IOPS™ vs. response time (ms); transaction-intensive applications typically demand response times under 10 ms.]
Arrays plotted, grouped as mid-range, high-end, and utility systems: IBM DS8300 Turbo (Dec 2006), HDS USP V (Oct 2007), EMC CLARiiON CX3-40 (Jan 2008), NetApp FAS3170 (Jun 2008), IBM DS5300 (Sep 2008), 3PAR InServ T800 (Sep 2008), HDS AMS 2500 (Mar 2009), 3PAR InServ F400 (May 2009).
#2 Storage Capacity Second
Get Thin & Stay Thin
- Average 60% capacity savings with 3PAR Thin Provisioning, compared with a legacy volume with a poor utilization rate
- Additional 10% savings with 3PAR Thin Persistence
VMware and 3PAR Thin Provisioning Options
Thin virtual disks (VMDKs of 10 GB, 100 GB, 30 GB, and 150 GB in the diagram) are presented to the virtual machines (VMs), which are over-provisioned at 250 GB in total.
- Thin VM on thick storage: a 200 GB thick LUN is provisioned at the storage array for the VMware VMFS volume/datastore, so 200 GB is physically allocated; capacity savings: 50 GB.
- Thin VM on thin storage: the same VMs on a 200 GB thin LUN on a 3PAR array physically allocate only 40 GB; capacity savings: 210 GB.
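To make the slide's arithmetic explicit, here is a minimal worked sketch in Python. The figures are the slide's own; the helper function is purely illustrative, not a VMware or 3PAR API:

```python
# Worked example of the thin-provisioning comparison on this slide.
# Figures come from the slide; nothing here calls a real storage API.

def capacity_savings(provisioned_gb: int, allocated_gb: int) -> int:
    """Savings = capacity the VMs believe they have minus what the array actually allocates."""
    return provisioned_gb - allocated_gb

vm_provisioned_gb = 250        # total thin VMDK capacity presented to the VMs

thick_lun_allocated_gb = 200   # thick LUN: the full 200 GB is reserved up front
thin_lun_allocated_gb = 40     # thin LUN: only written blocks are allocated

print(capacity_savings(vm_provisioned_gb, thick_lun_allocated_gb))  # 50 GB
print(capacity_savings(vm_provisioned_gb, thin_lun_allocated_gb))   # 210 GB
```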
#3 Base Your Storage Choices on Your I/O Workload
#3 Base Your Storage Choices on Your I/O Workload
[Diagram: a traditional modular architecture vs. the 3PAR InSpire® Architecture, "a finely, massively, and automatically load balanced cluster". Legend: host connectivity, data cache, disk connectivity.]
3PAR Dynamic Optimization = Balanced Disk Layout
[Charts: "Data Layout After HW Upgrades" vs. "Data Layout After Non-Disruptive Rebalance"; each plots % used (up to 100%) and IOPS per drive (100s) across the drives.]
VMFS workloads are rebalanced non-disruptively after capacity upgrades: better drive utilization, better performance.
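As a rough illustration of the rebalancing idea, the sketch below re-stripes existing data evenly across an expanded drive set. This is a toy model of the concept only, not 3PAR's actual chunklet-level algorithm:

```python
# Toy model of rebalancing after a capacity upgrade: data striped across the
# original drives is re-striped evenly across old + new drives, so no drive
# stays hotter or fuller than the rest. Illustrative only.

def rebalance(chunklets: list[int], n_drives: int) -> dict[int, list[int]]:
    """Round-robin chunklet IDs across all drives so each holds a near-equal share."""
    layout: dict[int, list[int]] = {d: [] for d in range(n_drives)}
    for i, chunk in enumerate(chunklets):
        layout[i % n_drives].append(chunk)
    return layout

chunklets = list(range(120))          # 120 chunklets of existing data
before = rebalance(chunklets, 8)      # striped across 8 original drives: 15 each
after = rebalance(chunklets, 12)      # after adding 4 drives and rebalancing: 10 each
print(max(len(v) for v in before.values()), max(len(v) for v in after.values()))  # 15 10
```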
Dynamic Optimization: No Complexity or Disruption
[Chart: performance vs. cost per usable TB, plotting RAID 1, RAID 5 (2+1), RAID 5 (3+1), and RAID 5 (8+1) on both Fibre Channel and Nearline drives.]
Transition non-disruptively, with one command and no migrations, from one configuration to another until performance and storage efficiency are appropriately balanced.
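The cost axis on this slide follows directly from RAID overhead: RAID 1 mirrors every block, while RAID 5 (N+1) spends one drive in every N+1 on parity. A quick worked calculation (relative cost here is simply raw capacity per usable TB; drive prices are omitted):

```python
# Usable fraction and relative raw-capacity cost for the RAID levels on the slide.

raid_levels = {
    "RAID 1":       1 / 2,   # mirroring: 50% usable
    "RAID 5 (2+1)": 2 / 3,   # ~67% usable
    "RAID 5 (3+1)": 3 / 4,   # 75% usable
    "RAID 5 (8+1)": 8 / 9,   # ~89% usable
}

for name, usable in raid_levels.items():
    # Cost per usable TB scales with raw TB needed per usable TB, i.e. 1 / usable.
    print(f"{name}: {usable:.0%} usable, {1 / usable:.2f}x raw TB per usable TB")
```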
#4 Pooling Storage Resources Can Lead to Contention
#4 Pooling Storage Resources Can Lead to Contention
I/O processing in traditional storage: a unified processor and/or memory handles both control information (metadata) and data, so small host IOPs wait for large IOPs to be processed; when a heavy throughput workload is applied, a heavy transaction workload suffers.
I/O processing in a 3PAR Controller Node: a control processor and memory handle control information while the 3PAR ASIC and its memory move data, so control information and data are pathed and processed separately; heavy throughput and heavy transaction workloads are sustained simultaneously.
[Diagram legend: host interface, disk interface; control information (metadata) vs. data paths.]
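The contention claim can be shown with a toy queueing model, purely illustrative and not a model of any real controller: when small transactional I/Os share one processing path with large throughput I/Os, they queue behind them; on a separate path they do not.

```python
# Toy head-of-line-blocking model: service times for small (1 ms) and
# large (20 ms) I/Os through one shared path vs. two separate paths.

def completion_times(jobs: list[int]) -> list[int]:
    """Serve jobs in order on a single path; return each job's completion time (ms)."""
    t, out = 0, []
    for cost in jobs:
        t += cost
        out.append(t)
    return out

shared = completion_times([20, 1, 20, 1, 20, 1, 20, 1])  # small I/Os stuck behind large ones
split_small = completion_times([1, 1, 1, 1])             # small I/Os on their own path

print(shared[1], shared[3])            # 21 42 -> small-I/O latency on the shared path
print(split_small[0], split_small[1])  # 1 2   -> latency with separate paths
```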
Server Clusters: A Whole New Set of Storage Issues
Oracle, and now VMware, clusters create new storage management challenges.
- One cluster of 20 hosts and 10 volumes, once set up, requires 200 provisioning actions on most arrays; this can take a day to complete and is error-prone.
- VMware clusters in particular are dynamic resources, subject to a lot of growth and change.
[Diagram: a grid of volumes illustrating the 200 provisioning actions.]
Autonomic Storage Is the Answer
3PAR Autonomic Groups simplify and automate volume provisioning: a single command exports a group of volumes to a host or a cluster of hosts, automatically preserving the same LUN ID for each volume across hosts (sketched below).
- Autonomic host groups: when a new host is added to the host group, all volumes are autonomically exported to it; when a host is deleted, its exports are autonomically removed.
- Autonomic volume groups: when a new volume is added to the volume group, it is autonomically exported to all hosts in the host group; volume deletions are applied to all hosts autonomically.
[Diagram: a volume group on the storage array exported with a single command to a host group representing the cluster.]
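A minimal sketch of the autonomic-group idea, using hypothetical classes rather than the actual 3PAR interface: membership changes in the host group or volume group trigger the exports automatically, so 20 hosts and 10 volumes no longer means 200 manual actions.

```python
# Minimal sketch of autonomic host/volume groups: adding a host exports every
# volume to it; adding a volume exports it to every host, with a stable LUN ID.
# Hypothetical classes for illustration; not the 3PAR CLI or API.

class AutonomicGroups:
    def __init__(self) -> None:
        self.hosts: list[str] = []
        self.volumes: list[str] = []
        self.exports: set[tuple[str, str, int]] = set()  # (host, volume, lun_id)

    def add_host(self, host: str) -> None:
        self.hosts.append(host)
        for lun_id, vol in enumerate(self.volumes):      # same LUN ID on every host
            self.exports.add((host, vol, lun_id))

    def add_volume(self, vol: str) -> None:
        lun_id = len(self.volumes)
        self.volumes.append(vol)
        for host in self.hosts:
            self.exports.add((host, vol, lun_id))

groups = AutonomicGroups()
for v in range(10):
    groups.add_volume(f"Vol{v + 1}")
for h in range(20):
    groups.add_host(f"esx{h + 1:02d}")   # one action per host, not ten
print(len(groups.exports))               # 200 exports created from 30 actions
```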
Challenge → Solution
- Optimal I/O performance first → Size and plan at aggregate demand
- Storage capacity second → Control costs with thin technologies
- Base your storage choices on your I/O workload → Build a platform that scales with VM workload
- Pooling storage resources can lead to contention → Rely on technology, not tuning
Thank you
Serving Information®. Simply.