1
Sizing Guidelines
Jana Jamsek, ATS Europe
© 2011 IBM Corporation
2
IBM i performance data
It is good to use IBM i performance data even before modelling with Disk Magic, to apply the sizing guidelines
Reports needed for sizing external storage:
–System report / Disk utilization & Storage pool utilization
–Resource report / Disk utilization
–Component report / Disk activity
Preferred way of collecting:
–Collect performance data for 24 hours on 3 consecutive days and during a heavy end-of-month job
–Collection interval: 5 minutes
Insert the reports into Disk Magic to obtain the data in an Excel spreadsheet
3
Example: Spreadsheet by Disk Magic
4
Component report: present cache hits
5
Sizing the disk drives in external storage
DS8800 – recommended maximal disk utilization: 60%
–15K RPM SAS disk drives
–10K RPM SAS disk drives
–SSD
DS5000 – recommended maximal disk utilization: 45%
–15K RPM disk drives
–10K RPM disk drives
–SSD
XIV
–Data modules
Storwize V7000 – recommended maximal disk utilization: 45%
–15K RPM SAS disk drives
–10K RPM SAS disk drives
–SSD
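These utilization ceilings feed into the calculations on the following slides. A minimal Python sketch of keeping them as a lookup is shown below; the dictionary and function names are illustrative, not part of any IBM tool (XIV is omitted because the slide gives no percentage for it).

```python
# Recommended maximum disk utilization per storage system, as listed above.
# Sketch only; names are illustrative.
RECOMMENDED_MAX_DISK_UTILIZATION = {
    "DS8800": 0.60,          # 15K/10K RPM SAS disk drives, SSD
    "DS5000": 0.45,          # 15K/10K RPM disk drives, SSD
    "Storwize V7000": 0.45,  # 15K/10K RPM SAS disk drives, SSD
}

def usable_ops_per_device(max_ops_per_device, system):
    """Scale a device's maximum IO/sec by the recommended utilization."""
    return max_ops_per_device * RECOMMENDED_MAX_DISK_UTILIZATION[system]

# Example: a DS8800 rank rated at 2047 disk ops/sec should be sized
# for about 2047 * 0.60 = 1228 ops/sec (see the rank calculation later).
print(usable_ops_per_device(2047, "DS8800"))
```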
6
Guidelines for RAID level
RAID-10 provides better resiliency
RAID-10 generally provides better performance:
–RAID-5 results in 4 disk operations per write – higher penalty
–RAID-10 results in 2 disk operations per write – lower penalty
RAID-10 requires more capacity
In DS8000 use RAID-10 when:
–There are many random writes
–Write cache efficiency is low
–The workload is heavy
In midrange storage and Storwize V7000 we recommend RAID-10
7
DS8800: Number of ranks
Detailed calculation of the maximum IO/sec on a RAID-5 rank:
(reads/sec – read cache hits) + 4 * (writes/sec – write cache efficiency) = disk operations/sec on the rank
(read cache hits and write cache efficiency are the percentages applied to reads/sec and writes/sec)
One 6+P 15K RPM rank can handle a maximum of 2047 disk accesses/sec; at the recommended 60% utilization this is 1228 disk ops/sec
Divide the current disk accesses/sec by 1228
Example: 261 reads/sec, 1704 writes/sec, 45% read cache hits, 24% write cache efficiency:
(261 – 117) + 4 * (1704 – 409) = 5324; 5324 / 1228 = 4 to 5 ranks
Recommended: 4 ranks
The calculation is based on performance measurements in Storage development and the recommended % disk utilization
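A minimal Python sketch of this rank calculation, assuming the figures from the slide (2047 disk ops/sec per 6+P 15K RPM rank, 60% recommended utilization); the function and variable names are illustrative.

```python
# Sketch of the DS8800 RAID-5 rank sizing calculation shown above.
def raid5_rank_load(reads_per_sec, writes_per_sec,
                    read_cache_hit_pct, write_cache_eff_pct):
    # Reads served from cache never reach the rank; each write that is not
    # absorbed by the write cache costs 4 disk operations on RAID-5.
    disk_reads = reads_per_sec * (1 - read_cache_hit_pct / 100)
    disk_writes = writes_per_sec * (1 - write_cache_eff_pct / 100)
    return disk_reads + 4 * disk_writes

MAX_RANK_OPS = 2047                        # 6+P 15K RPM rank, 100% busy
UTILIZATION = 0.60                         # recommended maximum utilization
ops_per_rank = MAX_RANK_OPS * UTILIZATION  # about 1228 disk ops/sec

# Example from the slide: 261 reads/sec, 1704 writes/sec,
# 45% read cache hits, 24% write cache efficiency.
rank_ops = raid5_rank_load(261, 1704, 45, 24)
print(rank_ops, rank_ops / ops_per_rank)   # about 5324 ops/sec -> 4 to 5 ranks
```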
8
DS8800: Number of ranks – continued
Estimate the % read cache hits and % write cache efficiency from the present cache hits on internal disk
Rough estimation by best practice:
–If the % cache hits is below 50%, estimate the same percentage on external storage
–If the % cache hits is above 50%, estimate half of this percentage on external storage
If the cache hits are not known, or you are in doubt, use the Disk Magic default estimation: 20% read cache hit, 30% write cache efficiency
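A short sketch of this rule of thumb in Python; the function name is illustrative, and the 20%/30% defaults are the Disk Magic values quoted on the slide.

```python
# Rule of thumb for estimating cache hits on external storage from the
# cache hits currently measured on internal disk (sketch only).
DISK_MAGIC_DEFAULT_READ_HIT_PCT = 20
DISK_MAGIC_DEFAULT_WRITE_EFF_PCT = 30

def estimate_external_cache_pct(internal_pct, default_pct):
    if internal_pct is None:
        return default_pct        # not known / in doubt: Disk Magic default
    if internal_pct < 50:
        return internal_pct       # below 50%: assume the same percentage
    return internal_pct / 2       # above 50%: assume half of it

print(estimate_external_cache_pct(45, DISK_MAGIC_DEFAULT_READ_HIT_PCT))    # 45
print(estimate_external_cache_pct(80, DISK_MAGIC_DEFAULT_WRITE_EFF_PCT))   # 40.0
print(estimate_external_cache_pct(None, DISK_MAGIC_DEFAULT_READ_HIT_PCT))  # 20
```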
9
DS8800: Number of ranks – continued
Quick calculation based on the detailed calculation shown before
Example: 9800 IO/sec with a read/write ratio of 50/50 needs 9800 / 982 ≈ 10 RAID-10 ranks of 15K RPM disk drives, connected with IOP-less adapters
The table can be found in the Redbook IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887-00
Assumed cache hits: 20% read hit, 30% write efficiency
10
DS5000/4000/3000: Number of disk drives
Detailed calculation of the maximum IO/sec on a disk in RAID-10:
(reads/sec – read cache hits) + 2 * (writes/sec – write cache efficiency) = disk operations/sec on the disk
Quick calculation – IO/sec per DDM:

                                          70% Read   50% Read
15K RPM disk drive   RAID-1 or RAID-10       82         74
                     RAID-5                  58         45
10K RPM disk drive   RAID-1 or RAID-10       55         49
                     RAID-5                  39         30

Example: 7000 IO/sec with a read/write ratio of 70/30 needs 7000 / 82 ≈ 85 * 15K RPM disk drives in RAID-10
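The quick calculation is a table lookup plus a division; a Python sketch with the per-DDM values from this slide follows. The dictionary layout and the drives_needed helper are illustrative, and the read-percentage key simply picks the nearer column (70% or 50% read).

```python
# Quick sizing sketch for DS5000/4000/3000: IO/sec per DDM from the table
# above, keyed by (drive speed, RAID level, read-percentage column).
IOPS_PER_DDM = {
    ("15K", "RAID-10", 70): 82, ("15K", "RAID-10", 50): 74,
    ("15K", "RAID-5",  70): 58, ("15K", "RAID-5",  50): 45,
    ("10K", "RAID-10", 70): 55, ("10K", "RAID-10", 50): 49,
    ("10K", "RAID-5",  70): 39, ("10K", "RAID-5",  50): 30,
}

def drives_needed(host_iops, speed, raid, read_pct_column, table=IOPS_PER_DDM):
    per_ddm = table[(speed, raid, read_pct_column)]
    return host_iops / per_ddm   # round up to a sensible array size in practice

# Example from the slide: 7000 IO/sec at a 70/30 read/write ratio:
# 7000 / 82 is roughly 85 * 15K RPM drives in RAID-10.
print(drives_needed(7000, "15K", "RAID-10", 70))   # about 85.4
```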
11
Storwize V7000: Number of disk drives
Quick calculation – IO/sec per DDM:

                                          70% Read   50% Read
15K RPM disk drive   RAID-1 or RAID-10      138        122
                     RAID-5                  96         75
10K RPM disk drive   RAID-1 or RAID-10       92         82
                     RAID-5                  64         50

Example: 7000 IO/sec with a read/write ratio of 70/30 needs 7000 / 138 ≈ 50 * 15K RPM disk drives in RAID-10
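The same lookup-and-divide approach applies here; a short usage sketch with the Storwize V7000 per-drive values from this table, reusing the illustrative drives_needed helper from the previous sketch.

```python
# Storwize V7000 per-drive IO/sec values from the table above (sketch only,
# usable with the drives_needed helper shown for DS5000/4000/3000).
V7000_IOPS_PER_DDM = {
    ("15K", "RAID-10", 70): 138, ("15K", "RAID-10", 50): 122,
    ("15K", "RAID-5",  70): 96,  ("15K", "RAID-5",  50): 75,
    ("10K", "RAID-10", 70): 92,  ("10K", "RAID-10", 50): 82,
    ("10K", "RAID-5",  70): 64,  ("10K", "RAID-5",  50): 50,
}

# Example from the slide: 7000 IO/sec at 70/30 read/write -> about 50 drives.
print(7000 / V7000_IOPS_PER_DDM[("15K", "RAID-10", 70)])   # about 50.7
```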
12
Number of DDMs when connected with VIOS and SVC
The sizing guidelines and calculations for DDMs in storage systems connected with VIOS or VIOS_NPIV don't change
The sizing guidelines and calculations for DDMs in storage systems connected with SVC and VIOS don't change
13
Sizing for big blocksizes (transfer sizes)
Big blocksize: 64 KB and above
Add about 25% more disk arms for big blocksizes
The guidelines shown assume a small blocksize (about 12 KB)
The peak in IO/sec usually occurs with small blocksizes
The peak in blocksize typically comes with low IO/sec
So we usually size for the peak in IO/sec and don't add the additional 25%
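A two-line sketch of the 25% adjustment described above; the 64 KB threshold and the 1.25 factor are the values from this slide, the function name is illustrative.

```python
# Add about 25% disk arms when the workload's blocksize is 64 KB or larger;
# the baseline guidelines assume a small blocksize of roughly 12 KB.
def adjust_arms_for_blocksize(disk_arms, blocksize_kb):
    return disk_arms * 1.25 if blocksize_kb >= 64 else disk_arms

print(adjust_arms_for_blocksize(85, 12))   # 85 (no change for small blocks)
print(adjust_arms_for_blocksize(85, 128))  # 106.25 (round up in practice)
```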
14
Number and size of LUNs
With a given disk capacity, the bigger the number of LUNs, the smaller their size
Sizing guidelines address either the number or the size of LUNs
To obtain the number of LUNs you may use WLE (number of disk drives)
Considerations for a very big number of LUNs:
–Many physical adapters are needed for natively connected storage
–A big number of virtual adapters in VIOS is difficult to manage and troubleshoot
15
Number and size of LUNs – continued
DS8000 guideline by best practice:
–2 * size of LUN = 1 * size of DDM, or
–4 * size of LUN = 1 * size of DDM
–Presently 70 GB or 140 GB LUNs are mostly used
DS5000 guideline:
–Big LUNs enable better seek time on the disk drives
–Small LUNs – a big number of LUNs enables more concurrent IO to the disk space
–Compromise: 70 GB or 140 GB LUNs
DS5000 best practice:
–146 GB physical disks
–Make RAID-1 arrays of two physical disks and create one logical drive per RAID-1 array
–Recommended segment size: 128 KB or 64 KB
–Create one LUN per array
–If the number of LUNs is limited to 16 (for example, connecting to IBM i on BladeCenter) you may want to make a RAID-10 array of four physical disks and create one LUN per array
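A tiny sketch of the DS8000 rule of thumb above (LUN size equal to one half or one quarter of the DDM size); purely illustrative arithmetic, not a tool.

```python
# DS8000 rule of thumb: one LUN is 1/2 or 1/4 of the DDM size.
def lun_size_gb(ddm_size_gb, luns_per_ddm=2):   # luns_per_ddm: 2 or 4
    return ddm_size_gb / luns_per_ddm

print(lun_size_gb(146, 2))   # 73.0 -> roughly the commonly used 70 GB LUNs
print(lun_size_gb(146, 4))   # 36.5
```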
16
Number and size of LUNs – continued
XIV – best practice for the size of LUNs
Measurements in Mainz:
–CPW 96000 users, 2 concurrent runs in different LPARs
–15-module XIV Gen 3

                    Transaction resp.   Disk service   Latency    % Cache hits
                    time (sec)          time (ms)      in XIV     in XIV
42 * 154 GB LUNs    6.6                 4.6            4          80
6 * 1 TB LUNs       19.3                8.2            7          60

70 GB LUNs were not tested
Recommendation: about 140 GB LUNs, or 70 GB LUNs
17
Number and size of LUNs – continued
Storwize V7000, SVC
Presently we recommend about 140 GB LUNs
This recommendation is based on best practice with other midrange storage systems
Recommended to create vdisks in striped mode (default)
Recommended extent size: 256 MB (default)
18
Guidelines for different types of connection
The listed guidelines for a particular storage system apply to all cases (when applicable):
–Native connection: sizing for physical FC adapters applies to natively connected storage
–Connection with VIOS vscsi
–Connection with VIOS_NPIV
–Connection via SVC: the size of LUNs applies to SVC vdisks
19
Sizing FC adapters in IBM i – by IO/sec

                               #5774 or #5749    #5735
IO/sec at 70% utilization      10500 per port    12250 per port
GB per port in adapter         2800              3266

Example: for one port in #5735 we recommend 3266 / 70 = 46 * 70 GB LUNs
Example: for 2 ports in multipath we recommend 2 * 46 = 92 -> 64 * 70 GB LUNs in multipath
Assumed: Access Density = 1.5; for 2 paths, multiply the capacity by 2
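A small Python sketch of the per-port arithmetic in these examples, using the GB-per-port figures from the table (which assume the slide's access density of 1.5). The 64-LUN cap mirrors the multipath example on the slide, and the function and dictionary names are illustrative.

```python
# Sizing FC adapter ports by capacity: GB per port from the table above,
# divided by the LUN size; the multipath example caps the result at 64 LUNs.
GB_PER_PORT = {"#5774/#5749": 2800, "#5735": 3266}

def luns_per_port_group(adapter, lun_size_gb, ports=1, max_luns=64):
    per_port = GB_PER_PORT[adapter] // lun_size_gb   # whole LUNs per port
    return min(ports * per_port, max_luns)

# One #5735 port with 70 GB LUNs: 3266 / 70 = about 46 LUNs.
print(luns_per_port_group("#5735", 70))            # 46
# Two #5735 ports in multipath: 2 * 46 = 92, capped at 64 LUNs.
print(luns_per_port_group("#5735", 70, ports=2))   # 64
```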
20
Throughput of IOP-less adapters

                                                          #5774 or #5749        #5735
Max sequential throughput                                 310 MB/sec per port   400 MB/sec
Avg of max sequential throughput for 4 KB, 256 KB
reads and writes                                          216 MB/sec per port   382 MB/sec per 2 ports
Avg of min sequential throughput for 4 KB, 256 KB
reads and writes                                          132 MB/sec per port   208 MB/sec per 2 ports
Max transaction workload throughput                       250 MB/sec per port
Transaction workload throughput at 70% utilization        175 MB/sec per port
21
Adapters on HSL loops
22
Sharing or dedicating ranks
Sharing ranks among multiple IBM i LPARs:
–Enables better usage of the resources in external storage
–On the other hand, the performance of an LPAR might be influenced by the workloads in other LPARs
Dedicating ranks to each LPAR:
–Enables stable performance (no influence from other systems)
–Resources are not as well utilized as with shared ranks
Best practice:
–Dedicate ranks to big and/or important systems
–Share ranks among medium and small LPARs
23
Guidelines for cache size in external storage
Modelling with Disk Magic
Rough guidelines for DS8800:
–10 to 20 TB capacity: 64 GB cache
–20 to 50 TB: 128 GB cache
–More than 50 TB: 256 GB cache
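The rough DS8800 guideline maps capacity bands to cache sizes; a minimal sketch with the thresholds from this slide (capacities below 10 TB are not covered by the slide, so the sketch simply falls back to 64 GB there).

```python
# Rough DS8800 cache-size guideline from the slide: capacity band -> cache GB.
def ds8800_cache_gb(capacity_tb):
    if capacity_tb <= 20:
        return 64     # 10 to 20 TB (used here as a floor below 10 TB too)
    if capacity_tb <= 50:
        return 128    # 20 to 50 TB
    return 256        # more than 50 TB

print(ds8800_cache_gb(15))   # 64
print(ds8800_cache_gb(35))   # 128
print(ds8800_cache_gb(80))   # 256
```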
24
Number of HA cards in DS8800
Rules of thumb for HAs in DS8800:
–About 4 to 8 IBM i ports per one HA card in DS8800
–For high performance: the number of HA cards should be the same as or bigger than the number of device adapters in DS8800
–At least one HA card per IO enclosure in DS8800
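The three rules of thumb combine into a simple maximum; a sketch under the stated assumptions (4 to 8 IBM i ports per HA card), with illustrative names and example figures that are not from the slide.

```python
# Rules of thumb for DS8800 host adapter (HA) cards: 4-8 IBM i ports per HA,
# at least as many HAs as device adapters for high performance, and at
# least one HA per IO enclosure.
import math

def ha_cards_needed(ibm_i_ports, device_adapters, io_enclosures,
                    ports_per_ha=8, high_performance=True):
    by_ports = math.ceil(ibm_i_ports / ports_per_ha)
    by_da = device_adapters if high_performance else 0
    return max(by_ports, by_da, io_enclosures)

# Hypothetical example: 24 IBM i ports, 4 device adapters, 4 IO enclosures.
print(ha_cards_needed(24, 4, 4))   # 4
```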
25
Sizing for VIOS
26
Sizing IASP
Very rough guideline: about 80% of the IO will go to the IASP
System report – Resource utilization: IO to database
27
Sizing for SSD
28
Sizing for Metro Mirror or Global Mirror links