
Slide 1: Mass Storage Performance Analysis
January 17, 2002
Instructor: Lee Kyung-Geun (Assistant Manager), HPCS/SDO/MC

Slide 2: Agenda
- Mass Storage types
- Internal architecture of XP Storage
- XP Storage IO limits
- FC High Performance Mode
- IO balance
- Striping factor
- RAID (1, 5, or 0/1)?
- Oracle (OLTP) on XP
- SCSI queue depth
- IO performance measurement tools
- Disk IO performance checkpoints
- Q & A

Slide 3: Mass Storage Types
- XP Storage: XP512, XP48, XP256
- Virtual Array: VA7100, VA7400, VA7405
- EMC Symmetrix
- HITACHI, SHARK

Slide 4: XP Storage Internal Architecture: XP512
- Front end: up to 32 Fibre Channel ports (8 boards of 4 ports/processors each); ESCON is also supported
- Back end: 32 disk-side Fibre Channel loops, up to 32 disks per FC pair (128 per ACP pair), 512 high-speed disks total, 32 x 100 MB/s aggregate back-end bandwidth
- Disk choices: 18 GB or 47 GB, 10K rpm
- Crossbar CONTROL: 64 ports at 50 MB/s = 3.2 GB/s; Crossbar DATA: 16 ports at 200 MB/s = 3.2 GB/s
- Cache: up to 8 GB; Shared Memory: 512 MB to 1.28 GB (4 ports to each CHIP and ACP board)
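The aggregate bandwidth figures on the slide are simple products of port count and per-port speed; a quick arithmetic check (figures taken directly from the slide):

```python
# XP512 aggregate bandwidth, in MB/s, from the slide's per-port figures.
control = 64 * 50    # crossbar CONTROL: 64 ports at 50 MB/s
data = 16 * 200      # crossbar DATA: 16 ports at 200 MB/s
backend = 32 * 100   # 32 disk-side FC loops at 100 MB/s each

print(control, data, backend)  # 3200 3200 3200
```

Note that CONTROL and DATA together give the 6.4 GB/s crossbar bandwidth quoted on the IO-limits slide.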

Slide 5: XP Storage Internal Architecture: XP256 (diagram only)

Slide 6: XP Storage IO Limits: XP512
- Single FC port: standard mode 10,500 IO/s and 90 MB/s; high performance mode 20,000 IO/s
- CHIP pair: 41,000 IO/s, 464 MB/s
- Front-end performance (100% cache hits): 165,000 IO/s, 1,560 MB/s
- Crossbar bandwidth: 6.4 GB/s
- Back-end performance (cache avoidance): 31,000 IO/s, 840 MB/s
- ACP pair: 7,300 IO/s, 318 MB/s
- One array group: 450 IO/s, 65 MB/s
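An IO path is capped by whichever component saturates first, so sizing comes down to multiplying each limit by the number of components in the path and taking the minimum. A minimal sketch, using the per-component IO/s limits from the slide; the component counts in the example are hypothetical:

```python
# Per-component IO/s limits for the XP512, from the slide.
limits = {
    "fc_port_std": 10_500,  # one FC port, standard mode
    "chip_pair": 41_000,
    "acp_pair": 7_300,
    "array_group": 450,
}

def path_capacity(counts):
    """Aggregate IO/s per component type, and the overall bottleneck."""
    capacity = {name: limits[name] * n for name, n in counts.items()}
    bottleneck = min(capacity, key=capacity.get)
    return capacity, bottleneck

# Hypothetical path: 4 FC ports, 2 CHIP pairs, 1 ACP pair, 20 array groups.
cap, bn = path_capacity({"fc_port_std": 4, "chip_pair": 2,
                         "acp_pair": 1, "array_group": 20})
print(bn, cap[bn])  # acp_pair 7300
```

In this configuration the single ACP pair is the limiter; with fewer array groups behind it, the array groups themselves would cap the path first.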

Slide 7: XP Storage IO Limits: XP256 (summary of architecture diagram)
- 16 Fibre Channel ports, each carrying up to 10,000 IO/s and 90 MB/s
- 32 CHIP i960 processors, ~1,500 IO/s each (a front-end limiter)
- Data buses (240 MB/s x 2) limit sequential throughput to 170 MB/s from disk to port
- Control buses are used to access Shared Memory; if congested, all i960s slow down
- 16 ACP i960 processors, ~625 IO/s each (a back-end limiter)
- 232 disks, ~80 IO/s each (derated for RAID); aggregate disk capability is 2-3x the ACP limits

Slide 8: FC High Performance Mode (diagram only)

Slide 9: IO Balance (Efficient Striping)
Efficient host striping distributes the workload over the host channels connected to the CHIP ports. On HP-UX and AIX, striping can balance the workload across HBAs; on Solaris the same distribution is available through Veritas Volume Manager (DMP), and on NT through AutoPath. Multiple LUNs can be striped across for increased bandwidth between the host and the array. On EMC, use PowerPath.

Slide 10: Striping Factor (Stripe and Block Size)
Striping is a means of getting data to and from XP cache as efficiently as possible, letting the XP handle the back end. The XP256 uses a 16 KB cache block per operation, so any stripe size under 16 KB wastes cache (EMC uses 2 KB to 32 KB). Use 128 KB, 256 KB, or 1 MB blocks for applications that predominantly perform large sequential operations (data warehousing), and 64 KB blocks if the application is mostly random (OLTP).
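The slide's rule of thumb can be written down as a small lookup. A sketch, where the workload names are illustrative and the 16 KB floor comes from the XP256 cache block size quoted above:

```python
CACHE_BLOCK_KB = 16  # XP256 cache block per operation

def suggest_stripe_kb(workload):
    """Stripe size in KB per the slide's guidance."""
    table = {
        "oltp": 64,              # mostly random operations
        "data_warehouse": 1024,  # large sequential operations: 128K/256K/1M
    }
    size = table[workload]
    # Stripes smaller than the cache block would waste XP256 cache.
    assert size >= CACHE_BLOCK_KB
    return size

print(suggest_stripe_kb("oltp"))  # 64
```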

Slide 11: Striping Factor (Example)
- Used a "round the clock" method of striping
- Built an LVM volume group taking LDEVs from ACP regions 1, 2, 3, 4, then back to 1, and so on
- The goal was to spread the I/Os without knowing where the application's hot spots would be
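The "round the clock" layout above can be sketched as a round-robin pick over the four ACP regions. The LDEV names below are hypothetical, and the sketch assumes each region holds enough LDEVs for the rotation:

```python
from itertools import cycle

def round_the_clock(ldevs_per_region, n):
    """Pick n LDEVs, rotating through ACP regions 1, 2, 3, 4, 1, ..."""
    regions = cycle(sorted(ldevs_per_region))
    iters = {r: iter(ldevs) for r, ldevs in ldevs_per_region.items()}
    return [next(iters[next(regions)]) for _ in range(n)]

# Hypothetical LDEV pools per ACP region.
ldevs = {1: ["0:00", "0:01"], 2: ["0:10", "0:11"],
         3: ["0:20", "0:21"], 4: ["0:30", "0:31"]}
print(round_the_clock(ldevs, 6))
# ['0:00', '0:10', '0:20', '0:30', '0:01', '0:11']
```

A volume group built from this ordered list spreads extents across all four ACP regions before reusing any one of them.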

Slide 12: RAID 5 or RAID 1?
RAID-1 and RAID-5 each have advantages depending on the circumstances.
- For a single random read cache miss, considered in isolation: RAID-1 has 2 copies on 2 disks with 4 paths to the ACP pair; RAID-5 has 1 copy on 1 disk with 2 paths to the ACP pair. With more paths and more disks from which to read the record, RAID-1 sees less interference from other I/Os and less delay.
- At moderate I/O rates, where the XP256 cache destage limit is not exceeded, all random writes are cache hits and have the same response time regardless of RAID level.

Slide 13: RAID 5 or RAID 1? (continued)
Benchmark result: RAID-5 2,357 vs RAID-1 2,255 transactions per minute.
The 3 most important Oracle I/Os:
- datafile reads: RAID-5 18% faster
- redo log file write time per transaction: RAID-1 35% faster
- datafile writes: RAID-5 4% faster

Slide 14: RAID 5 or RAID 1? (continued)
Why RAID-5?
- Striped data and parity reside in the same stripe, and writes go to all disks in the array group.
- Read performance is good: striping distributes data across multiple disk spindles.
- Write performance is not as good: the stripe is read, blocks are updated, then parity and data are written. This is the RAID-5 write penalty, also known as read-modify-write.
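The read-modify-write penalty can be quantified: a small random host write on RAID-5 typically costs four back-end disk IOs (read old data, read old parity, write new data, write new parity), versus two on RAID-1 (write both mirror copies). A minimal sketch of that arithmetic:

```python
def backend_ios(host_writes, raid):
    """Back-end disk IOs generated per small random host write."""
    penalty = {
        "raid1": 2,  # write both mirror copies
        "raid5": 4,  # read old data + old parity, write new data + parity
    }
    return host_writes * penalty[raid]

# 1,000 random host writes:
print(backend_ios(1000, "raid1"))  # 2000 disk IOs
print(backend_ios(1000, "raid5"))  # 4000 disk IOs
```

This is why a RAID-5 write-heavy random workload loads the ACPs and disks roughly twice as hard as the same workload on RAID-1, even when front-end response times look identical thanks to the cache.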

Slide 15: RAID 0/1: Best Performance
RAID 0/1:
- Striped and mirrored (aka "dual read" RAID 1)
- 50% storage overhead
- Best for performance-sensitive applications
- Better read and write performance

Slide 16: Oracle on XP
- "Place redo logs and database files onto different drives."
- "Ensure that data and indexes are on separate disk spindles."
- "Spread your I/O load across as many disk devices as possible."

Slide 17: Oracle on XP (continued)
Oracle components:
- Data and indexes
- Redo logs and archive logs
- Rollback segments
- Temp and System
Oracle IO patterns:
- Reads (data/index) are continuous
- Writes (data/index) are sporadic, peaking at checkpoints
- Writes (redo and rollback) are continuous
- Log archiving (read from redo, write to archive) is sporadic

Slide 18: Oracle on XP (continued): LVM volume group layout on XP512/XP48 (diagram only)

Slide 19: Oracle on XP (continued)
- Place random-write-intensive objects on RAID 0/1.
- Data/indexes can be random; redo/rollback are heavily written but sequential.
- Divide objects evenly among volume groups, and keep each object wholly contained in one VG.

Slide 20: SCSI Queue Depth
SCSI queue depth is the number of IO operations that can be outstanding from a host to a LUN at any one time. On HP-UX the default SCSI queue depth is 8 (set by the scsi_max_qdepth kernel tunable). A single port can provide access to 120 LUNs, but a hard limit of 256 concurrent IO operations is defined for each SCSI bus on an HP-UX system, so the number of concurrent IOs outstanding from each host may need to be reduced to prevent receiving a SCSI BUSY:

# /usr/sbin/scsictl -m queue_depth=4 /dev/rdsk/cxtydz
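The per-bus budget implies a simple sizing rule for the per-LUN depth. A sketch, assuming the worst case where every LUN on the bus can have its full queue outstanding at once:

```python
BUS_LIMIT = 256  # HP-UX hard limit on concurrent IOs per SCSI bus

def max_queue_depth(luns_on_bus):
    """Largest per-LUN queue depth that cannot exceed the bus limit."""
    return max(1, BUS_LIMIT // luns_on_bus)

print(max_queue_depth(120))  # 2: 120 LUNs * depth 2 = 240 <= 256
print(max_queue_depth(32))   # 8: the HP-UX default depth of 8 is safe here
```

With the full 120 LUNs behind one port, even the queue_depth=4 shown in the scsictl example above could overcommit the bus if every LUN were busy simultaneously; in practice the workload rarely hits all LUNs at once, which is why a moderate depth is usually chosen instead of the strict worst-case value.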

Slide 21: IO Performance Measurement Tools
Diskbench (db) on HP-UX is a disk-subsystem performance measurement tool. It measures the performance of a disk subsystem, HBA, and driver in terms of throughput (for sequential operation) and number of I/Os (for random operation).
http://hpdrdev.fc.hp.com/devresource/Tools/Diskbench/Diskbench.html
Other tools: Iometer (NT), IOzone, Iotest

Slide 22: Disk IO Performance Checkpoints (troubleshooting flowchart)
1. Is the application performing OK? If yes, done.
2. Run Glance/SAR on the server. Is CPU utilization > 90%? If the CPU is on the right tasks, the server is too small; if not, look for other problems.
3. Is the CPU waiting on I/O? If not, check the network.
4. Run XP Performance Advisor. Is CHIP utilization > 90%? Add CHIPs / spread the load.
5. Is cache utilization > 90%? Add cache.
6. Is array-group utilization > 90%? Add disks / spread the load.
7. Is ACP utilization > 90%? Add ACPs / spread the load. Otherwise: other problems.
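The flowchart above is a sequence of threshold checks, so it maps naturally onto code. A sketch, where the metric names and the sample input dict are hypothetical (e.g. values collected from Glance/SAR and XP Performance Advisor) while the 90% thresholds and recommended actions follow the slide:

```python
HOT = 90  # percent-utilization threshold used throughout the flowchart

def diagnose(m):
    """Walk the slide's decision flow over a dict of observed metrics."""
    if m["app_ok"]:
        return "done"
    if m["cpu_util"] > HOT:
        return "server too small" if m["cpu_on_right_tasks"] else "other problems"
    if not m["cpu_waiting_on_io"]:
        return "check network"
    checks = [("chip_util", "add CHIPs / spread load"),
              ("cache_util", "add cache"),
              ("ag_util", "add disks / spread load"),
              ("acp_util", "add ACPs / spread load")]
    for metric, action in checks:
        if m[metric] > HOT:
            return action
    return "other problems"

sample = {"app_ok": False, "cpu_util": 40, "cpu_on_right_tasks": True,
          "cpu_waiting_on_io": True, "chip_util": 30, "cache_util": 95,
          "ag_util": 50, "acp_util": 20}
print(diagnose(sample))  # add cache
```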

Slide 23: Q & A

Slide 24: Thanks

