Performance tests of storage arrays
Irina Makhlyueva, ALICE DAQ group
20 September 2004
Contents
- Goals
- Tested storage array types
- RAID performance tests
  - Early tests
  - Tests at CASPUR Storage Lab
  - Performance tests with Infortrend and dotHill RAIDs
- Summary, final remarks
Goals
RAID systems in ALICE DAQ serve as:
- a temporary data buffer in case of CDR failures
- the main file storage in the reference DAQ system
We have tested:
- different brands and models of RAIDs
- the performance of different file systems
- the influence of RAID parameters
- the performance under simultaneous multiple I/O operations
Tested storage arrays
Infortrend IFT-6330
- 12 IDE drive slots
- 128 MB cache
- dual Gbps Fibre Channel host channel
Infortrend EonStor A16F-G1A2 (CASPUR SLAB)
- 16 SATA drive slots
- up to 1 GB cache
- dual 2 Gbps Fibre Channel host channel
dotHill SANnet II 200 FC
- 12 Fibre Channel disk slots (expandable)
- 1 GB cache
- 2 Gbps Fibre Channel host channel
Performance tests
The simplest test: fill an empty RAID with fixed-size files and measure the average transfer rate (~4 h per test).
Setup: Infortrend IFT-6330 (1.1 TB) with an ext3 file system, standard CERN Linux PC (RH 7.3).
[Plots: transfer rate (MB/s) vs. % of full disk]
Unexpected behaviour was observed:
- large fluctuations
- lack of repeatability from test to test
- sudden "jumps" within one test
Tests at CASPUR Storage Lab
The problem was investigated in collaboration with CERN IT and CASPUR (Rome), using storage arrays of different brands at the CASPUR SLAB. The main focus was on (see Ref. [1]):
- file system type and its parameters
- fs mount options
- kernel tuning
- RAID parameter tuning (stripe size)
Main results:
- transfer-rate "jumps" and reproducibility problems: cured by kernel and ext3 tuning
- XFS gives substantially better performance than ext3
- bugs in the Infortrend RAID firmware: acknowledged by the vendor
Infortrend SATA results
ext3 results: filling 1.7 TB with 8 GB files
[Plots: transfer rates for RAID-5 stripe sizes of 32 kB, 128 kB and 256 kB]
- much smaller fluctuations after tuning
- a firmware problem became visible: a dependence on the RAID-5 stripe size
Infortrend SATA results
File system type dependence: XFS shows
- higher performance
- better stability
- less sensitivity to the firmware flaws
Further tests at ALICE DAQ lab
- most up-to-date version of XFS (special Red Hat Linux 9 installation)
- two RAID systems mounted simultaneously, via a Brocade FC switch:
  - Infortrend IFT-6330
  - high-end dotHill SANnet II 200 FC
Tests performed:
- dependence on file size and record length
- concurrent I/O: single or multiple "writer" and/or "reader" processes
- "pile-up" (random reading) tests
[Plot: transfer rate (MB/s) vs. % of full disk for IFT and dotHill, write and read; fsize = 2 GB, recl = 8 MB]
Dependence on file size and record length
For both RAID systems, the I/O rates were measured for file sizes of 100, 300, 1024 and 2048 MB and record lengths of 8 kB, 32 kB, 128 kB, 512 kB, 2 MB, 8 MB, 32 MB and 128 MB.
[Plots (fsize = 2 GB): write and read rates vs. record length for dotHill 1/2 and IFT 1/2]
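One cell of this measurement matrix can be sketched as follows (an illustrative helper, not the actual benchmark; note that the real tests also dropped caches between the write and read passes, which this sketch does not do):

```python
import os
import time

# file sizes (MB) and record lengths (kB) scanned in the tests
FILE_SIZES_MB = [100, 300, 1024, 2048]
RECORD_LENS_KB = [8, 32, 128, 512, 2048, 8192, 32768, 131072]

def io_rates(path, fsize_mb, recl_kb):
    """Write, then re-read, one file; return (write MB/s, read MB/s)."""
    recl = recl_kb * 1024
    record = b"\0" * recl
    n = max((fsize_mb * 1024) // recl_kb, 1)

    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(record)
        os.fsync(f.fileno())
    w = n * recl / max(time.time() - t0, 1e-9) / 1e6

    # without a cache flush this re-read is served partly from memory
    t0 = time.time()
    with open(path, "rb") as f:
        while f.read(recl):
            pass
    r = n * recl / max(time.time() - t0, 1e-9) / 1e6
    return w, r
```

Looping `io_rates` over `FILE_SIZES_MB` × `RECORD_LENS_KB` produces the 4 × 8 grid of measurements summarised in the plots.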
Concurrent I/O
"diskperf" benchmark (fsize = 2 GB, recl = 8 MB)
Aggregate write transfer rate:
- extra writer(s): no effect (dotHill) or a weak effect (IFT)
- extra reader(s): a very strong effect in the case of IFT (firmware?)
A peculiar feature: the read speed is strongly suppressed in the presence of writing process(es), for both dotHill and IFT.
[Plots: writer(s) only; writers + reader(s)]
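The concurrent runs can be approximated with a sketch like this (an illustrative stand-in for the actual "diskperf" tool; the file names and the use of thread-based concurrency are assumptions):

```python
import os
import threading
import time

def _writer(path, n_records, recl, results):
    record = b"\0" * recl
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(n_records):
            f.write(record)
        os.fsync(f.fileno())
    results.append(("write",
                    n_records * recl / max(time.time() - t0, 1e-9) / 1e6))

def _reader(path, recl, results):
    t0 = time.time()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(recl):
            total += len(chunk)
    results.append(("read", total / max(time.time() - t0, 1e-9) / 1e6))

def concurrent_io(mount, n_writers, n_readers,
                  n_records=256, recl=8 * 1024 * 1024):
    """Run writers and readers at the same time; return the aggregate
    (write MB/s, read MB/s). Readers re-read files named prev<i>.dat,
    assumed to have been written in an earlier pass."""
    results = []   # list.append is thread-safe
    threads = [threading.Thread(
                   target=_writer,
                   args=(os.path.join(mount, f"w{i}.dat"),
                         n_records, recl, results))
               for i in range(n_writers)]
    threads += [threading.Thread(
                    target=_reader,
                    args=(os.path.join(mount, f"prev{i}.dat"), recl, results))
                for i in range(n_readers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return (sum(r for kind, r in results if kind == "write"),
            sum(r for kind, r in results if kind == "read"))
```

Varying `n_writers` and `n_readers` while watching the two aggregate rates reproduces the pattern described above: on the IFT array the write aggregate collapses as soon as readers are added.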
Observation: with concurrent processes, the writer(s) dominate over the reader(s).
Random reading ("pile-up") test
One of the standard storage performance benchmarks used at CERN is the "pile-up test" by R. Többicke: multi-threaded random-access reading over an arbitrary number of 2 GB files, prepared in advance.
Our results:
- the aggregate read speed is only a small fraction of the maximum read speed for serial access
- it depends on the number of files used
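An illustrative approximation of such a pile-up test (not R. Többicke's actual benchmark; the `pileup` helper and its parameters are assumptions):

```python
import os
import random
import threading
import time

def pileup(files, n_threads=8, reads_per_thread=100, recl=64 * 1024):
    """Each thread reads records at random offsets of randomly chosen
    pre-existing files; returns the aggregate read rate in MB/s."""
    counts = [0] * n_threads   # bytes read per thread

    def worker(tid):
        for _ in range(reads_per_thread):
            path = random.choice(files)
            size = os.path.getsize(path)
            with open(path, "rb") as f:
                f.seek(random.randrange(max(size - recl, 1)))
                counts[tid] += len(f.read(recl))

    t0 = time.time()
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / max(time.time() - t0, 1e-9) / 1e6
```

Because every read lands at a random offset, the array gets no benefit from read-ahead, which is why the aggregate rate stays far below the serial-access figure.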
Summary
- A stable RAID performance can be achieved by careful tuning of the Linux kernel and file system parameters.
- XFS performs better than ext3.
- IFT RAID: the aggregate write speed is degraded by the presence of concurrent reading processes (a firmware effect?).
- For both tested RAID systems, the read speed is sharply suppressed in the presence of concurrent writing process(es).
- The dotHill system offers superior performance but is extremely expensive.
Further tests may include:
- repeating the tests on the IFT system after a firmware upgrade
- trying different benchmarks (lmdd, iozone, ...)
- studying the performance in a multi-host environment
References
[1] Tests at CASPUR SLAB: http://afs.caspur.it/slab2004a/setuplayout.html
[2] A. Maslennikov, New results from CASPUR Storage Lab, http://hepwww.rl.ac.uk/hepix/nesc/maslennikov2.ppt
Thanks for help and discussions to:
- CASPUR: A. Maslennikov
- CERN IT: A. Horvath, J. Iven, P. Kelemen, R. Többicke
- ALICE DAQ group: K. Schossmaier, P. Van De Vyver